Red Hat Bugzilla – Attachment 1485633 Details for Bug 1631789: admin_socket is always set in ceph.conf, including when deploying jewel/kraken/luminous
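The summary concerns the admin socket setting that the deployment tooling renders into the generated ceph.conf. As a rough illustration only (the section name and socket path below are assumptions based on typical ceph-ansible output, not details taken from this report), the kind of entry at issue looks like:

    [client.libvirt]
    admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok

Per the summary, an entry of this kind is written unconditionally, even when the overcloud deploys a jewel, kraken, or luminous cluster.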
Description: /home/stack/overcloud_install.log
Filename: overcloud_install.log
MIME Type: text/plain
Creator: Filip Hubík
Created: 2018-09-21 14:55:27 UTC
Size: 6.42 MB
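The attachment below is the captured stdout of the overcloud deployment. The exact command line is not recorded in this excerpt, but output of this shape comes from an invocation along these lines (the environment paths shown are placeholders, not taken from the log):

    openstack overcloud deploy --templates \
        -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml \
        -e ~/overrides.yaml 2>&1 | tee /home/stack/overcloud_install.log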
Creating Swift container to store the plan
Creating plan from template files in: /tmp/tripleoclient-rNYDYM/tripleo-heat-templates
Plan created.
Processing templates in the directory /tmp/tripleoclient-rNYDYM/tripleo-heat-templates
WARNING: Following parameter(s) are deprecated and still defined. Deprecated parameters will be removed soon!
  OvercloudControlFlavor
WARNING: Following parameter(s) are defined but not used in plan. Could be possible that parameter is valid but currently not used.
  DockerOpendaylightConfigImage
  DockerDesignateMDNSImage
  DockerManilaApiImage
  DockerNovaMetadataConfigImage
  DockerDesignateConfigImage
  DockerIronicConfigImage
  DockerDesignateApiImage
  DockerEc2ApiImage
  DockerOctaviaHealthManagerImage
  DockerDesignateSinkImage
  DockerOctaviaConfigImage
  DockerKeepalivedConfigImage
  DockerFluentdConfigImage
  DockerIronicNeutronAgentImage
  DockerIronicConductorImage
  DockerMistralExecutorImage
  DockerBarbicanWorkerConfigImage
  DockerMysqlClientConfigImage
  DockerBarbicanWorkerImage
  DockerBarbicanConfigImage
  DockerFluentdImage
  DockerKeepalivedImage
  DockerIronicPxeImage
  DockerEtcdImage
  DockerZaqarConfigImage
  DockerOctaviaApiImage
  DockerMistralConfigImage
  DockerManilaShareImage
  DockerCollectdConfigImage
  DockerZaqarImage
  DockerEtcdConfigImage
  DockerNovaComputeIronicImage
  DockerOctaviaHousekeepingImage
  DockerSensuConfigImage
  DockerOctaviaWorkerImage
  DockerIronicInspectorImage
  DockerIronicApiImage
  DockerMistralApiImage
  DockerBarbicanKeystoneListenerImage
  DockerSensuClientImage
  DockerIronicApiConfigImage
  DockerIronicInspectorConfigImage
  DockerMistralEventEngineImage
  DockerFluentdClientImage
  DockerBarbicanKeystoneListenerConfigImage
  DockerBarbicanApiImage
  DockerMistralEngineImage
  DockerDesignateProducerImage
  DockerCollectdImage
  DockerManilaSchedulerImage
  DockerEc2ApiConfigImage
  DockerDesignateBackendBIND9Image
  DockerDesignateWorkerImage
  DockerOpendaylightApiImage
  DockerDesignateCentralImage
  DockerManilaConfigImage
  RootStackName
Deploying templates in the directory /tmp/tripleoclient-rNYDYM/tripleo-heat-templates
Initializing overcloud plan deployment
Creating overcloud Heat stack
2018-09-21 12:03:08Z [overcloud]: CREATE_IN_PROGRESS Stack CREATE started
2018-09-21 12:03:11Z [overcloud.Networks]: CREATE_IN_PROGRESS state changed
2018-09-21 12:03:12Z [overcloud.Networks]: CREATE_IN_PROGRESS Stack CREATE started
2018-09-21 12:03:12Z [overcloud.ServiceNetMap]: CREATE_IN_PROGRESS state changed
2018-09-21 12:03:12Z [overcloud.Networks.NetworkExtraConfig]: CREATE_IN_PROGRESS state changed
2018-09-21 12:03:12Z [overcloud.Networks.NetworkExtraConfig]: CREATE_COMPLETE state changed
2018-09-21 12:03:13Z [overcloud.HorizonSecret]: CREATE_IN_PROGRESS state changed
2018-09-21 12:03:13Z [overcloud.HorizonSecret]: CREATE_COMPLETE state changed
2018-09-21 12:03:13Z [overcloud.ServiceNetMap]: CREATE_IN_PROGRESS Stack CREATE started
2018-09-21 12:03:13Z [overcloud.ServiceNetMap.ServiceNetMapValue]: CREATE_IN_PROGRESS state changed
2018-09-21 12:03:13Z [overcloud.ServiceNetMap.ServiceNetMapValue]: CREATE_COMPLETE state changed
2018-09-21 12:03:13Z [overcloud.Networks.TenantNetwork]: CREATE_IN_PROGRESS state changed
2018-09-21 12:03:13Z [overcloud.ServiceNetMap]: CREATE_COMPLETE Stack CREATE completed successfully
2018-09-21 12:03:14Z [overcloud.MysqlRootPassword]: CREATE_IN_PROGRESS state changed
2018-09-21 12:03:14Z [overcloud.MysqlRootPassword]: CREATE_COMPLETE state changed
2018-09-21 12:03:14Z [overcloud.Networks.ExternalNetwork]: CREATE_IN_PROGRESS state changed
2018-09-21 12:03:15Z [overcloud.ServiceNetMap]: CREATE_COMPLETE state changed
2018-09-21 12:03:15Z [overcloud.Networks.TenantNetwork]: CREATE_IN_PROGRESS Stack CREATE started
2018-09-21 12:03:15Z [overcloud.DeploymentServerBlacklistDict]: CREATE_IN_PROGRESS state changed
2018-09-21 12:03:15Z [overcloud.DeploymentServerBlacklistDict]: CREATE_COMPLETE state changed
2018-09-21 12:03:15Z [overcloud.Networks.TenantNetwork.TenantNetwork]: CREATE_IN_PROGRESS state changed
2018-09-21 12:03:15Z [overcloud.Networks.StorageNetwork]: CREATE_IN_PROGRESS state changed
2018-09-21 12:03:16Z [overcloud.Networks.StorageMgmtNetwork]: CREATE_IN_PROGRESS state changed
2018-09-21 12:03:17Z [overcloud.Networks.ExternalNetwork]: CREATE_IN_PROGRESS Stack CREATE started
2018-09-21 12:03:17Z [overcloud.Networks.InternalApiNetwork]: CREATE_IN_PROGRESS state changed
2018-09-21 12:03:17Z [overcloud.Networks.ExternalNetwork.ExternalNetwork]: CREATE_IN_PROGRESS state changed
2018-09-21 12:03:18Z [overcloud.Networks.StorageNetwork]: CREATE_IN_PROGRESS Stack CREATE started
2018-09-21 12:03:18Z [overcloud.PcsdPassword]: CREATE_IN_PROGRESS state changed
2018-09-21 12:03:18Z [overcloud.PcsdPassword]: CREATE_COMPLETE state changed
2018-09-21 12:03:18Z [overcloud.RabbitCookie]: CREATE_IN_PROGRESS state changed
2018-09-21 12:03:18Z [overcloud.Networks.StorageNetwork.StorageNetwork]: CREATE_IN_PROGRESS state changed
2018-09-21 12:03:18Z [overcloud.RabbitCookie]: CREATE_COMPLETE state changed
2018-09-21 12:03:18Z [overcloud.Networks.ManagementNetwork]: CREATE_IN_PROGRESS state changed
2018-09-21 12:03:18Z [overcloud.Networks.TenantNetwork.TenantNetwork]: CREATE_COMPLETE state changed
2018-09-21 12:03:20Z [overcloud.Networks.StorageMgmtNetwork]: CREATE_IN_PROGRESS Stack CREATE started
2018-09-21 12:03:20Z [overcloud.Networks.StorageMgmtNetwork.StorageMgmtNetwork]: CREATE_IN_PROGRESS state changed
2018-09-21 12:03:21Z [overcloud.Networks.InternalApiNetwork]: CREATE_IN_PROGRESS Stack CREATE started
2018-09-21 12:03:21Z [overcloud.Networks.TenantNetwork.TenantSubnet]: CREATE_IN_PROGRESS state changed
2018-09-21 12:03:21Z [overcloud.HeatAuthEncryptionKey]: CREATE_IN_PROGRESS state changed
2018-09-21 12:03:21Z [overcloud.HeatAuthEncryptionKey]: CREATE_COMPLETE state changed
2018-09-21 12:03:21Z [overcloud.DefaultPasswords]: CREATE_IN_PROGRESS state changed
2018-09-21 12:03:21Z [overcloud.Networks.ExternalNetwork.ExternalNetwork]: CREATE_COMPLETE state changed
2018-09-21 12:03:21Z [overcloud.Networks.StorageNetwork.StorageNetwork]: CREATE_COMPLETE state changed
2018-09-21 12:03:22Z [overcloud.Networks.ExternalNetwork.ExternalSubnet]: CREATE_IN_PROGRESS state changed
2018-09-21 12:03:22Z [overcloud.Networks.InternalApiNetwork.InternalApiNetwork]: CREATE_IN_PROGRESS state changed
2018-09-21 12:03:22Z [overcloud.Networks.ManagementNetwork]: CREATE_IN_PROGRESS Stack CREATE started
2018-09-21 12:03:22Z [overcloud.Networks.StorageNetwork.StorageSubnet]: CREATE_IN_PROGRESS state changed
2018-09-21 12:03:23Z [overcloud.Networks.StorageMgmtNetwork.StorageMgmtNetwork]: CREATE_COMPLETE state changed
2018-09-21 12:03:23Z [overcloud.Networks.StorageMgmtNetwork.StorageMgmtSubnet]: CREATE_IN_PROGRESS state changed
2018-09-21 12:03:24Z [overcloud.Networks.ManagementNetwork.ManagementNetwork]: CREATE_IN_PROGRESS state changed
2018-09-21 12:03:25Z [overcloud.Networks.TenantNetwork.TenantSubnet]: CREATE_COMPLETE state changed
2018-09-21 12:03:25Z [overcloud.Networks.TenantNetwork]: CREATE_COMPLETE Stack CREATE completed successfully
2018-09-21 12:03:25Z [overcloud.Networks.TenantNetwork]: CREATE_COMPLETE state changed
2018-09-21 12:03:25Z [overcloud.DefaultPasswords]: CREATE_COMPLETE state changed
2018-09-21 12:03:25Z [overcloud.Networks.ExternalNetwork.ExternalSubnet]: CREATE_COMPLETE state changed
2018-09-21 12:03:25Z [overcloud.Networks.ExternalNetwork]: CREATE_COMPLETE Stack CREATE completed successfully
2018-09-21 12:03:25Z [overcloud.Networks.InternalApiNetwork.InternalApiNetwork]: CREATE_COMPLETE state changed
2018-09-21 12:03:26Z [overcloud.Networks.InternalApiNetwork.InternalApiSubnet]: CREATE_IN_PROGRESS state changed
2018-09-21 12:03:26Z [overcloud.Networks.ExternalNetwork]: CREATE_COMPLETE state changed
2018-09-21 12:03:26Z [overcloud.Networks.StorageNetwork.StorageSubnet]: CREATE_COMPLETE state changed
2018-09-21 12:03:26Z [overcloud.Networks.StorageNetwork]: CREATE_COMPLETE Stack CREATE completed successfully
2018-09-21 12:03:27Z [overcloud.Networks.StorageMgmtNetwork.StorageMgmtSubnet]: CREATE_COMPLETE state changed
2018-09-21 12:03:27Z [overcloud.Networks.StorageMgmtNetwork]: CREATE_COMPLETE Stack CREATE completed successfully
2018-09-21 12:03:27Z [overcloud.Networks.StorageMgmtNetwork]: CREATE_COMPLETE state changed
2018-09-21 12:03:27Z [overcloud.Networks.ManagementNetwork.ManagementNetwork]: CREATE_COMPLETE state changed
2018-09-21 12:03:27Z [overcloud.Networks.ManagementNetwork.ManagementSubnet]: CREATE_IN_PROGRESS state changed
2018-09-21 12:03:27Z [overcloud.Networks.StorageNetwork]: CREATE_COMPLETE state changed
2018-09-21 12:03:28Z [overcloud.Networks.InternalApiNetwork.InternalApiSubnet]: CREATE_COMPLETE state changed
2018-09-21 12:03:28Z [overcloud.Networks.InternalApiNetwork]: CREATE_COMPLETE Stack CREATE completed successfully
2018-09-21 12:03:29Z [overcloud.Networks.InternalApiNetwork]: CREATE_COMPLETE state changed
2018-09-21 12:03:29Z [overcloud.Networks.ManagementNetwork.ManagementSubnet]: CREATE_COMPLETE state changed
2018-09-21 12:03:29Z [overcloud.Networks.ManagementNetwork]: CREATE_COMPLETE Stack CREATE completed successfully
2018-09-21 12:03:30Z [overcloud.Networks.ManagementNetwork]: CREATE_COMPLETE state changed
2018-09-21 12:03:30Z [overcloud.Networks]: CREATE_COMPLETE Stack CREATE completed successfully
2018-09-21 12:03:31Z [overcloud.Networks]: CREATE_COMPLETE state changed
2018-09-21 12:03:40Z [overcloud.ControlVirtualIP]: CREATE_IN_PROGRESS state changed
2018-09-21 12:03:43Z [overcloud.ControlVirtualIP]: CREATE_COMPLETE state changed
2018-09-21 12:03:43Z [overcloud.StorageMgmtVirtualIP]: CREATE_IN_PROGRESS state changed
2018-09-21 12:03:43Z [overcloud.RedisVirtualIP]: CREATE_IN_PROGRESS state changed
2018-09-21 12:03:43Z [overcloud.PublicVirtualIP]: CREATE_IN_PROGRESS state changed
2018-09-21 12:03:43Z [overcloud.InternalApiVirtualIP]: CREATE_IN_PROGRESS state changed
2018-09-21 12:03:44Z [overcloud.StorageVirtualIP]: CREATE_IN_PROGRESS state changed
2018-09-21 12:03:44Z [overcloud.NetCidrMapValue]: CREATE_IN_PROGRESS state changed
2018-09-21 12:03:44Z [overcloud.NetCidrMapValue]: CREATE_COMPLETE state changed
2018-09-21 12:03:46Z [overcloud.RedisVirtualIP]: CREATE_IN_PROGRESS Stack CREATE started
2018-09-21 12:03:47Z [overcloud.RedisVirtualIP.VipPort]: CREATE_IN_PROGRESS state changed
2018-09-21 12:03:47Z [overcloud.PublicVirtualIP]: CREATE_IN_PROGRESS Stack CREATE started
2018-09-21 12:03:47Z [overcloud.InternalApiVirtualIP]: CREATE_IN_PROGRESS Stack CREATE started
2018-09-21 12:03:47Z [overcloud.InternalApiVirtualIP.InternalApiPort]: CREATE_IN_PROGRESS state changed
2018-09-21 12:03:48Z [overcloud.StorageMgmtVirtualIP]: CREATE_IN_PROGRESS Stack CREATE started
2018-09-21 12:03:48Z [overcloud.PublicVirtualIP.ExternalPort]: CREATE_IN_PROGRESS state changed
2018-09-21 12:03:48Z [overcloud.StorageMgmtVirtualIP.StorageMgmtPort]: CREATE_IN_PROGRESS state changed
2018-09-21 12:03:49Z [overcloud.StorageVirtualIP]: CREATE_IN_PROGRESS Stack CREATE started
2018-09-21 12:03:49Z [overcloud.StorageVirtualIP.StoragePort]: CREATE_IN_PROGRESS state changed
2018-09-21 12:03:51Z [overcloud.RedisVirtualIP.VipPort]: CREATE_COMPLETE state changed
2018-09-21 12:03:51Z [overcloud.RedisVirtualIP]: CREATE_COMPLETE Stack CREATE completed successfully
2018-09-21 12:03:52Z [overcloud.RedisVirtualIP]: CREATE_COMPLETE state changed
2018-09-21 12:03:53Z [overcloud.StorageMgmtVirtualIP.StorageMgmtPort]: CREATE_COMPLETE state changed
2018-09-21 12:03:53Z [overcloud.StorageMgmtVirtualIP]: CREATE_COMPLETE Stack CREATE completed successfully
2018-09-21 12:03:53Z [overcloud.InternalApiVirtualIP.InternalApiPort]: CREATE_COMPLETE state changed
2018-09-21 12:03:53Z [overcloud.InternalApiVirtualIP]: CREATE_COMPLETE Stack CREATE completed successfully
2018-09-21 12:03:53Z [overcloud.StorageMgmtVirtualIP]: CREATE_COMPLETE state changed
2018-09-21 12:03:53Z [overcloud.InternalApiVirtualIP]: CREATE_COMPLETE state changed
2018-09-21 12:03:53Z [overcloud.PublicVirtualIP.ExternalPort]: CREATE_COMPLETE state changed
2018-09-21 12:03:54Z [overcloud.PublicVirtualIP]: CREATE_COMPLETE Stack CREATE completed successfully
2018-09-21 12:03:54Z [overcloud.PublicVirtualIP]: CREATE_COMPLETE state changed
2018-09-21 12:03:54Z [overcloud.StorageVirtualIP.StoragePort]: CREATE_COMPLETE state changed
2018-09-21 12:03:54Z [overcloud.StorageVirtualIP]: CREATE_COMPLETE Stack CREATE completed successfully
2018-09-21 12:03:54Z [overcloud.StorageVirtualIP]: CREATE_COMPLETE state changed
2018-09-21 12:03:57Z [overcloud.VipMap]: CREATE_IN_PROGRESS state changed
2018-09-21 12:03:57Z [overcloud.VipMap]: CREATE_IN_PROGRESS Stack CREATE started
2018-09-21 12:03:57Z [overcloud.VipMap.NetIpMapValue]: CREATE_IN_PROGRESS state changed
2018-09-21 12:03:57Z [overcloud.VipMap.NetIpMapValue]: CREATE_COMPLETE state changed
2018-09-21 12:03:57Z [overcloud.VipMap]: CREATE_COMPLETE Stack CREATE completed successfully
2018-09-21 12:03:58Z [overcloud.VipMap]: CREATE_COMPLETE state changed
2018-09-21 12:03:59Z [overcloud.VipHosts]: CREATE_IN_PROGRESS state changed
2018-09-21 12:03:59Z [overcloud.VipHosts]: CREATE_COMPLETE state changed
2018-09-21 12:03:59Z [overcloud.EndpointMap]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:01Z [overcloud.EndpointMap]: CREATE_COMPLETE state changed
2018-09-21 12:04:01Z [overcloud.EndpointMapData]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:01Z [overcloud.EndpointMapData]: CREATE_COMPLETE state changed
2018-09-21 12:04:01Z [overcloud.ControllerServiceChain]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:01Z [overcloud.BlockStorageServiceChain]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:01Z [overcloud.ObjectStorageServiceChain]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:01Z [overcloud.CephStorageServiceChain]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:02Z [overcloud.ComputeServiceChain]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:02Z [overcloud.ObjectStorageServiceChain]: CREATE_IN_PROGRESS Stack CREATE started
2018-09-21 12:04:03Z [overcloud.ControllerServiceChain]: CREATE_IN_PROGRESS Stack CREATE started
2018-09-21 12:04:03Z [overcloud.CephStorageServiceChain]: CREATE_IN_PROGRESS Stack CREATE started
2018-09-21 12:04:03Z [overcloud.BlockStorageServiceChain]: CREATE_IN_PROGRESS Stack CREATE started
2018-09-21 12:04:04Z [overcloud.ObjectStorageServiceChain.LoggingConfiguration]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:04Z [overcloud.ComputeServiceChain]: CREATE_IN_PROGRESS Stack CREATE started
2018-09-21 12:04:05Z [overcloud.ControllerServiceChain.LoggingConfiguration]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:05Z [overcloud.BlockStorageServiceChain.LoggingConfiguration]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:05Z [overcloud.CephStorageServiceChain.ServiceChain]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:05Z [overcloud.ObjectStorageServiceChain.ServiceChain]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:06Z [overcloud.CephStorageServiceChain.LoggingConfiguration]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:06Z [overcloud.ControllerServiceChain.ServiceChain]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:06Z [overcloud.BlockStorageServiceChain.ServiceChain]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:07Z [overcloud.ObjectStorageServiceChain.LoggingConfiguration]: CREATE_COMPLETE state changed
2018-09-21 12:04:07Z [overcloud.ControllerServiceChain.LoggingConfiguration]: CREATE_COMPLETE state changed
2018-09-21 12:04:07Z [overcloud.CephStorageServiceChain.ServiceChain]: CREATE_IN_PROGRESS Stack CREATE started
2018-09-21 12:04:08Z [overcloud.ObjectStorageServiceChain.ServiceChain]: CREATE_IN_PROGRESS Stack CREATE started
2018-09-21 12:04:08Z [overcloud.CephStorageServiceChain.LoggingConfiguration]: CREATE_COMPLETE state changed
2018-09-21 12:04:09Z [overcloud.CephStorageServiceChain.ServiceChain.15]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:09Z [overcloud.CephStorageServiceChain.ServiceChain.15]: CREATE_COMPLETE state changed
2018-09-21 12:04:10Z [overcloud.ObjectStorageServiceChain.ServiceChain.23]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:11Z [overcloud.ObjectStorageServiceChain.ServiceChain.22]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:11Z [overcloud.CephStorageServiceChain.ServiceChain.12]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:12Z [overcloud.ObjectStorageServiceChain.ServiceChain.15]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:12Z [overcloud.ObjectStorageServiceChain.ServiceChain.23]: CREATE_COMPLETE state changed
2018-09-21 12:04:12Z [overcloud.ObjectStorageServiceChain.ServiceChain.15]: CREATE_COMPLETE state changed
2018-09-21 12:04:13Z [overcloud.ObjectStorageServiceChain.ServiceChain.25]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:13Z [overcloud.ObjectStorageServiceChain.ServiceChain.22]: CREATE_COMPLETE state changed
2018-09-21 12:04:14Z [overcloud.ObjectStorageServiceChain.ServiceChain.21]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:14Z [overcloud.CephStorageServiceChain.ServiceChain.10]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:14Z [overcloud.CephStorageServiceChain.ServiceChain.10]: CREATE_COMPLETE state changed
2018-09-21 12:04:15Z [overcloud.CephStorageServiceChain.ServiceChain.22]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:15Z [overcloud.ObjectStorageServiceChain.ServiceChain.25]: CREATE_COMPLETE state changed
2018-09-21 12:04:16Z [overcloud.ObjectStorageServiceChain.ServiceChain.24]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:17Z [overcloud.ObjectStorageServiceChain.ServiceChain.10]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:17Z [overcloud.ObjectStorageServiceChain.ServiceChain.10]: CREATE_COMPLETE state changed
2018-09-21 12:04:17Z [overcloud.CephStorageServiceChain.ServiceChain.8]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:17Z [overcloud.CephStorageServiceChain.ServiceChain.8]: CREATE_COMPLETE state changed
2018-09-21 12:04:18Z [overcloud.ObjectStorageServiceChain.ServiceChain.11]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:18Z [overcloud.CephStorageServiceChain.ServiceChain.6]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:18Z [overcloud.ObjectStorageServiceChain.ServiceChain.24]: CREATE_COMPLETE state changed
2018-09-21 12:04:19Z [overcloud.ObjectStorageServiceChain.ServiceChain.3]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:19Z [overcloud.BlockStorageServiceChain.LoggingConfiguration]: CREATE_COMPLETE state changed
2018-09-21 12:04:20Z [overcloud.CephStorageServiceChain.ServiceChain.6]: CREATE_COMPLETE state changed
2018-09-21 12:04:20Z [overcloud.ObjectStorageServiceChain.ServiceChain.20]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:20Z [overcloud.CephStorageServiceChain.ServiceChain.25]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:20Z [overcloud.CephStorageServiceChain.ServiceChain.25]: CREATE_COMPLETE state changed
2018-09-21 12:04:20Z [overcloud.ObjectStorageServiceChain.ServiceChain.14]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:20Z [overcloud.CephStorageServiceChain.ServiceChain.23]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:20Z [overcloud.CephStorageServiceChain.ServiceChain.24]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:20Z [overcloud.CephStorageServiceChain.ServiceChain.5]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:20Z [overcloud.CephStorageServiceChain.ServiceChain.4]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:21Z [overcloud.ObjectStorageServiceChain.ServiceChain.3]: CREATE_COMPLETE state changed
2018-09-21 12:04:22Z [overcloud.CephStorageServiceChain.ServiceChain.7]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:22Z [overcloud.CephStorageServiceChain.ServiceChain.7]: CREATE_COMPLETE state changed
2018-09-21 12:04:23Z [overcloud.ObjectStorageServiceChain.ServiceChain.7]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:23Z [overcloud.ObjectStorageServiceChain.ServiceChain.7]: CREATE_COMPLETE state changed
2018-09-21 12:04:23Z [overcloud.CephStorageServiceChain.ServiceChain.2]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:24Z [overcloud.CephStorageServiceChain.ServiceChain.1]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:24Z [overcloud.CephStorageServiceChain.ServiceChain.1]: CREATE_COMPLETE state changed
2018-09-21 12:04:25Z [overcloud.CephStorageServiceChain.ServiceChain.0]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:25Z [overcloud.CephStorageServiceChain.ServiceChain.0]: CREATE_COMPLETE state changed
2018-09-21 12:04:25Z [overcloud.ObjectStorageServiceChain.ServiceChain.26]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:25Z [overcloud.CephStorageServiceChain.ServiceChain.2]: CREATE_COMPLETE state changed
2018-09-21 12:04:25Z [overcloud.ObjectStorageServiceChain.ServiceChain.26]: CREATE_COMPLETE state changed
2018-09-21 12:04:26Z [overcloud.ObjectStorageServiceChain.ServiceChain.14]: CREATE_COMPLETE state changed
2018-09-21 12:04:26Z [overcloud.ComputeServiceChain.ServiceChain]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:26Z [overcloud.CephStorageServiceChain.ServiceChain.5]: CREATE_COMPLETE state changed
2018-09-21 12:04:26Z [overcloud.ObjectStorageServiceChain.ServiceChain.19]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:26Z [overcloud.CephStorageServiceChain.ServiceChain.23]: CREATE_COMPLETE state changed
2018-09-21 12:04:26Z [overcloud.CephStorageServiceChain.ServiceChain.24]: CREATE_COMPLETE state changed
2018-09-21 12:04:27Z [overcloud.ObjectStorageServiceChain.ServiceChain.17]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:27Z [overcloud.ComputeServiceChain.LoggingConfiguration]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:27Z [overcloud.ObjectStorageServiceChain.ServiceChain.9]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:27Z [overcloud.ObjectStorageServiceChain.ServiceChain.9]: CREATE_COMPLETE state changed
2018-09-21 12:04:27Z [overcloud.ObjectStorageServiceChain.ServiceChain.5]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:28Z [overcloud.ObjectStorageServiceChain.ServiceChain.18]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:28Z [overcloud.CephStorageServiceChain.ServiceChain.13]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:28Z [overcloud.CephStorageServiceChain.ServiceChain.16]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:28Z [overcloud.CephStorageServiceChain.ServiceChain.16]: CREATE_COMPLETE state changed
2018-09-21 12:04:29Z [overcloud.CephStorageServiceChain.ServiceChain.12]: CREATE_COMPLETE state changed
2018-09-21 12:04:29Z [overcloud.ComputeServiceChain.LoggingConfiguration]: CREATE_COMPLETE state changed
2018-09-21 12:04:29Z [overcloud.CephStorageServiceChain.ServiceChain.3]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:29Z [overcloud.ObjectStorageServiceChain.ServiceChain.17]: CREATE_COMPLETE state changed
2018-09-21 12:04:30Z [overcloud.CephStorageServiceChain.ServiceChain.22]: CREATE_COMPLETE state changed
2018-09-21 12:04:30Z [overcloud.ObjectStorageServiceChain.ServiceChain.11]: CREATE_COMPLETE state changed
2018-09-21 12:04:30Z [overcloud.ObjectStorageServiceChain.ServiceChain.16]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:30Z [overcloud.CephStorageServiceChain.ServiceChain.9]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:31Z [overcloud.ObjectStorageServiceChain.ServiceChain.16]: CREATE_COMPLETE state changed
2018-09-21 12:04:31Z [overcloud.ObjectStorageServiceChain.ServiceChain.19]: CREATE_COMPLETE state changed
2018-09-21 12:04:31Z [overcloud.ObjectStorageServiceChain.ServiceChain.21]: CREATE_COMPLETE state changed
2018-09-21 12:04:31Z [overcloud.CephStorageServiceChain.ServiceChain.17]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:31Z [overcloud.CephStorageServiceChain.ServiceChain.17]: CREATE_COMPLETE state changed
2018-09-21 12:04:32Z [overcloud.CephStorageServiceChain.ServiceChain.4]: CREATE_COMPLETE state changed
2018-09-21 12:04:32Z [overcloud.CephStorageServiceChain.ServiceChain.20]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:32Z [overcloud.ObjectStorageServiceChain.ServiceChain.1]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:32Z [24]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:32Z [overcloud.CephStorageServiceChain.ServiceChain.13]: CREATE_COMPLETE state changed
2018-09-21 12:04:32Z [overcloud.ObjectStorageServiceChain.ServiceChain.5]: CREATE_COMPLETE state changed
2018-09-21 12:04:32Z [overcloud.ObjectStorageServiceChain.ServiceChain.4]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:33Z [overcloud.ObjectStorageServiceChain.ServiceChain.1]: CREATE_COMPLETE state changed
2018-09-21 12:04:33Z [overcloud.CephStorageServiceChain.ServiceChain.9]: CREATE_COMPLETE state changed
2018-09-21 12:04:33Z [3]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:33Z [overcloud.ObjectStorageServiceChain.ServiceChain.20]: CREATE_COMPLETE state changed
2018-09-21 12:04:33Z [overcloud.CephStorageServiceChain.ServiceChain.11]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:33Z [overcloud.ObjectStorageServiceChain.ServiceChain.18]: CREATE_COMPLETE state changed
2018-09-21 12:04:33Z [overcloud.ObjectStorageServiceChain.ServiceChain.0]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:33Z [24]: CREATE_COMPLETE state changed
2018-09-21 12:04:33Z [overcloud.ObjectStorageServiceChain.ServiceChain.0]: CREATE_COMPLETE state changed
2018-09-21 12:04:33Z [overcloud.CephStorageServiceChain.ServiceChain.11]: CREATE_COMPLETE state changed
2018-09-21 12:04:34Z [overcloud.ObjectStorageServiceChain.ServiceChain.6]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:34Z [3]: CREATE_COMPLETE state changed
2018-09-21 12:04:34Z [overcloud.ObjectStorageServiceChain.ServiceChain.6]: CREATE_COMPLETE state changed
2018-09-21 12:04:34Z [overcloud.ObjectStorageServiceChain.ServiceChain.4]: CREATE_COMPLETE state changed
2018-09-21 12:04:34Z [overcloud.CephStorageServiceChain.ServiceChain.19]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:35Z [15]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:36Z [overcloud.ObjectStorageServiceChain.ServiceChain.8]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:37Z [overcloud.CephStorageServiceChain.ServiceChain.18]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:37Z [15]: CREATE_COMPLETE state changed
2018-09-21 12:04:37Z [overcloud.CephStorageServiceChain.ServiceChain.18]: CREATE_COMPLETE state changed
2018-09-21 12:04:37Z [overcloud.ObjectStorageServiceChain.ServiceChain.12]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:38Z [overcloud.CephStorageServiceChain.ServiceChain.19]: CREATE_COMPLETE state changed
2018-09-21 12:04:38Z [overcloud.CephStorageServiceChain.ServiceChain.14]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:38Z [13]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:38Z [13]: CREATE_COMPLETE state changed
2018-09-21 12:04:38Z [overcloud.CephStorageServiceChain.ServiceChain.21]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:39Z [overcloud.ObjectStorageServiceChain.ServiceChain.2]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:39Z [overcloud.ObjectStorageServiceChain.ServiceChain.13]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:40Z [overcloud.CephStorageServiceChain.ServiceChain.20]: CREATE_COMPLETE state changed
2018-09-21 12:04:40Z [19]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:41Z [overcloud.ObjectStorageServiceChain.ServiceChain.8]: CREATE_COMPLETE state changed
2018-09-21 12:04:41Z [overcloud.ObjectStorageServiceChain.ServiceChain.12]: CREATE_COMPLETE state changed
2018-09-21 12:04:42Z [19]: CREATE_COMPLETE state changed
2018-09-21 12:04:42Z [overcloud.CephStorageServiceChain.ServiceChain.3]: CREATE_COMPLETE state changed
2018-09-21 12:04:42Z [overcloud.CephStorageServiceChain.ServiceChain.21]: CREATE_COMPLETE state changed
2018-09-21 12:04:42Z [22]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:42Z [14]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:42Z [14]: CREATE_COMPLETE state changed
2018-09-21 12:04:43Z [overcloud.CephStorageServiceChain.ServiceChain.14]: CREATE_COMPLETE state changed
2018-09-21 12:04:44Z [overcloud.ObjectStorageServiceChain.ServiceChain.2]: CREATE_COMPLETE state changed
2018-09-21 12:04:44Z [overcloud.CephStorageServiceChain.ServiceChain]: CREATE_COMPLETE Stack CREATE completed successfully
2018-09-21 12:04:44Z [25]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:44Z [40]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:44Z [overcloud.CephStorageServiceChain.ServiceChain]: CREATE_COMPLETE state changed
2018-09-21 12:04:46Z [8]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:46Z [8]: CREATE_COMPLETE state changed
2018-09-21 12:04:46Z [6]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:46Z [25]: CREATE_COMPLETE state changed
2018-09-21 12:04:46Z [overcloud.ObjectStorageServiceChain.ServiceChain.13]: CREATE_COMPLETE state changed
2018-09-21 12:04:46Z [overcloud.ObjectStorageServiceChain.ServiceChain]: CREATE_COMPLETE Stack CREATE completed successfully
2018-09-21 12:04:46Z [6]: CREATE_COMPLETE state changed
2018-09-21 12:04:47Z [7]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:47Z [40]: CREATE_COMPLETE state changed
2018-09-21 12:04:47Z [overcloud.ObjectStorageServiceChain.ServiceChain]: CREATE_COMPLETE state changed
2018-09-21 12:04:47Z [22]: CREATE_COMPLETE state changed
2018-09-21 12:04:47Z [16]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:47Z [overcloud.CephStorageServiceChain.FastForwardUpgradeTasks]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:47Z [overcloud.CephStorageServiceChain.FastForwardUpgradeTasks]: CREATE_COMPLETE state changed
2018-09-21 12:04:48Z [overcloud.CephStorageServiceChain.DockerConfig]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:48Z [overcloud.CephStorageServiceChain.HostPrepTasks]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:48Z [overcloud.CephStorageServiceChain.DockerConfig]: CREATE_COMPLETE state changed
2018-09-21 12:04:48Z [overcloud.CephStorageServiceChain.HostPrepTasks]: CREATE_COMPLETE state changed
2018-09-21 12:04:48Z [overcloud.CephStorageServiceChain.PuppetConfig]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:48Z [overcloud.CephStorageServiceChain.ExternalUpgradeTasks]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:48Z [overcloud.CephStorageServiceChain.ExternalUpgradeTasks]: CREATE_COMPLETE state changed
2018-09-21 12:04:48Z [overcloud.CephStorageServiceChain.PreUpgradeRollingTasks]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:48Z [overcloud.CephStorageServiceChain.ServiceConfigSettings]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:48Z [23]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:48Z [overcloud.CephStorageServiceChain.ExternalUpdateTasks]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:49Z [overcloud.CephStorageServiceChain.ServiceConfigSettings]: CREATE_COMPLETE state changed
2018-09-21 12:04:49Z [overcloud.CephStorageServiceChain.PuppetConfig]: CREATE_COMPLETE state changed
2018-09-21 12:04:49Z [39]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:49Z [overcloud.CephStorageServiceChain.PreUpgradeRollingTasks]: CREATE_COMPLETE state changed
2018-09-21 12:04:49Z [overcloud.CephStorageServiceChain.ExternalUpdateTasks]: CREATE_COMPLETE state changed
2018-09-21 12:04:49Z [overcloud.CephStorageServiceChain.ExternalDeployTasks]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:49Z [overcloud.CephStorageServiceChain.PuppetStepConfig]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:49Z [overcloud.CephStorageServiceChain.PuppetStepConfig]: CREATE_COMPLETE state changed
2018-09-21 12:04:49Z [overcloud.CephStorageServiceChain.LoggingSourcesConfig]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:49Z [overcloud.CephStorageServiceChain.LoggingSourcesConfig]: CREATE_COMPLETE state changed
2018-09-21 12:04:50Z [11]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:50Z [overcloud.CephStorageServiceChain.ExternalDeployTasks]: CREATE_COMPLETE state changed
2018-09-21 12:04:50Z [7]: CREATE_COMPLETE state changed
2018-09-21 12:04:50Z [71]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:51Z [11]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:51Z [16]: CREATE_COMPLETE state changed
2018-09-21 12:04:51Z [16]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:51Z [28]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:51Z [overcloud.CephStorageServiceChain.CellV2Discovery]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:51Z [overcloud.CephStorageServiceChain.KollaConfig]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:51Z [overcloud.CephStorageServiceChain.KollaConfig]: CREATE_COMPLETE state changed
2018-09-21 12:04:51Z [14]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:51Z [overcloud.CephStorageServiceChain.LoggingGroupsConfig]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:51Z [overcloud.CephStorageServiceChain.LoggingGroupsConfig]: CREATE_COMPLETE state changed
2018-09-21 12:04:51Z [14]: CREATE_COMPLETE state changed
2018-09-21 12:04:52Z [108]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:52Z [23]: CREATE_COMPLETE state changed
2018-09-21 12:04:53Z [36]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:53Z [14]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:54Z [overcloud.CephStorageServiceChain.CellV2Discovery]: CREATE_COMPLETE state changed
2018-09-21 12:04:54Z [overcloud.CephStorageServiceChain.DockerConfigScripts]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:54Z [44]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:54Z [overcloud.CephStorageServiceChain.ServiceServerMetadataHook]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:55Z [overcloud.CephStorageServiceChain.ServiceServerMetadataHook]: CREATE_COMPLETE state changed
2018-09-21 12:04:56Z [13]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:56Z [overcloud.CephStorageServiceChain.ServiceNames]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:56Z [14]: CREATE_COMPLETE state changed
2018-09-21 12:04:56Z [13]: CREATE_COMPLETE state changed
2018-09-21 12:04:56Z [overcloud.CephStorageServiceChain.ServiceNames]: CREATE_COMPLETE state changed
2018-09-21 12:04:57Z [overcloud.CephStorageServiceChain.PostUpgradeTasks]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:57Z [25]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:57Z [overcloud.CephStorageServiceChain.PostUpgradeTasks]: CREATE_COMPLETE state changed
2018-09-21 12:04:57Z [overcloud.CephStorageServiceChain.DockerConfigScripts]: CREATE_COMPLETE state changed
2018-09-21 12:04:57Z [44]: CREATE_COMPLETE state changed
2018-09-21 12:04:57Z [39]: CREATE_COMPLETE state changed
2018-09-21 12:04:57Z [27]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:57Z [overcloud.CephStorageServiceChain.MonitoringSubscriptionsConfig]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:57Z [overcloud.CephStorageServiceChain.ExternalPostDeployTasks]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:58Z [overcloud.CephStorageServiceChain.ExternalPostDeployTasks]: CREATE_COMPLETE state changed
2018-09-21 12:04:58Z [4]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:58Z [overcloud.CephStorageServiceChain.UpgradeBatchTasks]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:58Z [overcloud.CephStorageServiceChain.MonitoringSubscriptionsConfig]: CREATE_COMPLETE state changed
2018-09-21 12:04:58Z [overcloud.CephStorageServiceChain.DockerPuppetTasks]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:58Z [27]: CREATE_COMPLETE state changed
2018-09-21 12:04:58Z [overcloud.CephStorageServiceChain.UpgradeBatchTasks]: CREATE_COMPLETE state changed
2018-09-21 12:04:58Z [overcloud.CephStorageServiceChain.UpgradeTasks]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:58Z [5]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:58Z [overcloud.ObjectStorageServiceChain.UpgradeBatchTasks]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:58Z [5]: CREATE_COMPLETE state changed
2018-09-21 12:04:58Z [overcloud.ObjectStorageServiceChain.UpgradeBatchTasks]: CREATE_COMPLETE state changed
2018-09-21 12:04:59Z [115]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:59Z [11]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:59Z [37]: CREATE_IN_PROGRESS state changed
2018-09-21 12:04:59Z [overcloud.CephStorageServiceChain.GlobalConfigSettings]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:00Z [61]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:00Z [overcloud.CephStorageServiceChain.DockerPuppetTasks]: CREATE_COMPLETE state changed
2018-09-21 12:05:00Z [overcloud.CephStorageServiceChain.UpgradeTasks]: CREATE_COMPLETE state changed
2018-09-21 12:05:00Z [overcloud.ObjectStorageServiceChain.LoggingSourcesConfig]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:00Z [overcloud.CephStorageServiceChain.UpdateTasks]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:01Z [overcloud.ObjectStorageServiceChain.LoggingSourcesConfig]: CREATE_COMPLETE state changed
2018-09-21 12:05:01Z [overcloud.CephStorageServiceChain.UpdateTasks]: CREATE_COMPLETE state changed
2018-09-21 12:05:01Z [overcloud.CephStorageServiceChain.DeployStepsTasks]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:01Z [6]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:01Z [overcloud.CephStorageServiceChain.FastForwardPostUpgradeTasks]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:01Z [overcloud.CephStorageServiceChain.FastForwardPostUpgradeTasks]: CREATE_COMPLETE state changed
2018-09-21 12:05:01Z [6]: CREATE_COMPLETE state changed
2018-09-21 12:05:02Z [overcloud.CephStorageServiceChain.GlobalConfigSettings]: CREATE_COMPLETE state changed
2018-09-21 12:05:03Z [21]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:03Z [11]: CREATE_COMPLETE state changed
2018-09-21 12:05:03Z [17]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:03Z [overcloud.CephStorageServiceChain.WorkflowTasks]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:03Z [overcloud.ObjectStorageServiceChain.DeployStepsTasks]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:03Z [overcloud.ObjectStorageServiceChain.DeployStepsTasks]: CREATE_COMPLETE state changed
2018-09-21 12:05:03Z [overcloud.ObjectStorageServiceChain.HostPrepTasks]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:03Z [21]: CREATE_COMPLETE state changed
2018-09-21 12:05:03Z [overcloud.CephStorageServiceChain.WorkflowTasks]: CREATE_COMPLETE state changed
2018-09-21 12:05:03Z [overcloud.ObjectStorageServiceChain.PuppetStepConfig]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:03Z [overcloud.ObjectStorageServiceChain.HostPrepTasks]: CREATE_COMPLETE state changed
2018-09-21 12:05:04Z [overcloud.CephStorageServiceChain.DeployStepsTasks]: CREATE_COMPLETE state changed
2018-09-21 12:05:04Z [overcloud.ObjectStorageServiceChain.UpdateTasks]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:04Z [11]: CREATE_COMPLETE state changed
2018-09-21 12:05:04Z [9]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:04Z [4]: CREATE_COMPLETE state changed
2018-09-21 12:05:04Z [overcloud.CephStorageServiceChain.PostUpdateTasks]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:04Z [overcloud.ObjectStorageServiceChain.PreUpgradeRollingTasks]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:04Z [15]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:04Z [28]: CREATE_COMPLETE state changed
2018-09-21 12:05:04Z [overcloud.ObjectStorageServiceChain.PreUpgradeRollingTasks]: CREATE_COMPLETE state changed
2018-09-21 12:05:04Z [9]: CREATE_COMPLETE state changed
2018-09-21 12:05:04Z [overcloud.ObjectStorageServiceChain.UpdateTasks]: CREATE_COMPLETE state changed
2018-09-21 12:05:04Z [overcloud.CephStorageServiceChain.PostUpdateTasks]: CREATE_COMPLETE state changed
2018-09-21 12:05:05Z [16]: CREATE_COMPLETE state changed
2018-09-21 12:05:06Z [0]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:06Z [0]: CREATE_COMPLETE state changed
2018-09-21 12:05:06Z [overcloud.ObjectStorageServiceChain.ServiceNames]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:06Z [overcloud.ObjectStorageServiceChain.ServiceNames]: CREATE_COMPLETE state changed
2018-09-21 12:05:07Z [37]: CREATE_COMPLETE state changed
2018-09-21 12:05:07Z [115]: CREATE_COMPLETE state changed
2018-09-21 12:05:07Z [overcloud.ObjectStorageServiceChain.PostUpdateTasks]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:07Z [151]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:07Z [overcloud.ObjectStorageServiceChain.PuppetStepConfig]: CREATE_COMPLETE state changed
2018-09-21 12:05:07Z [overcloud.ObjectStorageServiceChain.PostUpdateTasks]: CREATE_COMPLETE state changed
2018-09-21 12:05:07Z [151]: CREATE_COMPLETE state changed
2018-09-21 12:05:07Z [overcloud.ObjectStorageServiceChain.DockerPuppetTasks]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:07Z [20]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:07Z [8]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:07Z [38]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:07Z [144]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:07Z [20]: CREATE_COMPLETE state changed
2018-09-21 12:05:08Z [overcloud.CephStorageServiceChain]: CREATE_COMPLETE Stack CREATE completed successfully
2018-09-21 12:05:08Z [overcloud.ObjectStorageServiceChain.MonitoringSubscriptionsConfig]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:09Z [overcloud.ObjectStorageServiceChain.MonitoringSubscriptionsConfig]: CREATE_COMPLETE state changed
2018-09-21 12:05:09Z [overcloud.CephStorageServiceChain]: CREATE_COMPLETE state changed
2018-09-21 12:05:09Z [1]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:09Z [25]: CREATE_COMPLETE state changed
2018-09-21 12:05:09Z [36]: CREATE_COMPLETE state changed
2018-09-21 12:05:09Z [10]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:10Z [10]: CREATE_COMPLETE state changed
2018-09-21 12:05:10Z [overcloud.ObjectStorageServiceChain.DockerPuppetTasks]: CREATE_COMPLETE state changed
2018-09-21 12:05:10Z [100]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:10Z [100]: CREATE_COMPLETE state changed
2018-09-21 12:05:10Z [overcloud.ObjectStorageServiceChain.ServiceConfigSettings]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:11Z [overcloud.ObjectStorageServiceChain.DockerConfig]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:11Z [overcloud.ObjectStorageServiceChain.KollaConfig]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:11Z [150]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:11Z [overcloud.ObjectStorageServiceChain.KollaConfig]: CREATE_COMPLETE state changed
2018-09-21 12:05:11Z [overcloud.ObjectStorageServiceChain.DockerConfig]: CREATE_COMPLETE state changed
2018-09-21 12:05:11Z [overcloud.ObjectStorageServiceChain.PostUpgradeTasks]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:11Z [overcloud.ObjectStorageServiceChain.PostUpgradeTasks]: CREATE_COMPLETE state changed
2018-09-21 12:05:12Z [overcloud.ObjectStorageServiceChain.ServiceConfigSettings]: CREATE_COMPLETE state changed
2018-09-21 12:05:12Z [overcloud.ObjectStorageServiceChain.ServiceServerMetadataHook]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:12Z [overcloud.ObjectStorageServiceChain.WorkflowTasks]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:12Z [overcloud.ObjectStorageServiceChain.WorkflowTasks]: CREATE_COMPLETE state changed
2018-09-21 12:05:12Z [overcloud.ObjectStorageServiceChain.ServiceServerMetadataHook]: CREATE_COMPLETE state changed
2018-09-21 12:05:12Z [overcloud.ObjectStorageServiceChain.CellV2Discovery]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:12Z [overcloud.ObjectStorageServiceChain.ExternalPostDeployTasks]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:12Z [overcloud.ObjectStorageServiceChain.CellV2Discovery]: CREATE_COMPLETE state changed
2018-09-21 12:05:12Z [overcloud.ObjectStorageServiceChain.ExternalPostDeployTasks]: CREATE_COMPLETE state changed
2018-09-21 12:05:12Z [overcloud.ObjectStorageServiceChain.PuppetConfig]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:13Z [overcloud.ObjectStorageServiceChain.GlobalConfigSettings]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:13Z [1]: CREATE_COMPLETE state changed
2018-09-21 12:05:13Z [11]: CREATE_COMPLETE state changed
2018-09-21 12:05:13Z [overcloud.ObjectStorageServiceChain.PuppetConfig]: CREATE_COMPLETE state changed
2018-09-21 12:05:13Z [overcloud.ObjectStorageServiceChain.GlobalConfigSettings]: CREATE_COMPLETE state changed
2018-09-21 12:05:13Z [1]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:13Z [38]: CREATE_COMPLETE state changed
2018-09-21 12:05:13Z [overcloud.ObjectStorageServiceChain.FastForwardUpgradeTasks]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:13Z [overcloud.ObjectStorageServiceChain.LoggingGroupsConfig]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:13Z [overcloud.ObjectStorageServiceChain.ExternalUpgradeTasks]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:13Z [overcloud.ObjectStorageServiceChain.DockerConfigScripts]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:13Z [overcloud.ObjectStorageServiceChain.FastForwardPostUpgradeTasks]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:13Z [1]: CREATE_COMPLETE state changed
2018-09-21 12:05:13Z [overcloud.ObjectStorageServiceChain.FastForwardUpgradeTasks]: CREATE_COMPLETE state changed
2018-09-21 12:05:13Z [overcloud.CephStorageServiceChainRoleData]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:13Z [26]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:13Z [overcloud.CephStorageServiceChainRoleData]: CREATE_COMPLETE state changed
2018-09-21 12:05:13Z [overcloud.ObjectStorageServiceChain.LoggingGroupsConfig]: CREATE_COMPLETE state changed
2018-09-21 12:05:13Z [overcloud.ObjectStorageServiceChain.ExternalUpgradeTasks]: CREATE_COMPLETE state changed
2018-09-21 12:05:14Z [150]: CREATE_COMPLETE state changed
2018-09-21 12:05:14Z [overcloud.ObjectStorageServiceChain.ExternalDeployTasks]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:14Z [overcloud.ObjectStorageServiceChain.UpgradeTasks]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:14Z [overcloud.ObjectStorageServiceChain.DockerConfigScripts]: CREATE_COMPLETE state changed
2018-09-21 12:05:14Z [overcloud.ObjectStorageServiceChain.ExternalUpdateTasks]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:14Z [112]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:14Z [21]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:14Z [overcloud.ObjectStorageServiceChain.ExternalDeployTasks]: CREATE_COMPLETE state changed
2018-09-21 12:05:14Z [overcloud.ObjectStorageServiceChain.UpgradeTasks]: CREATE_COMPLETE state changed
2018-09-21 12:05:15Z [overcloud.ObjectStorageServiceChain.ExternalUpdateTasks]: CREATE_COMPLETE state changed
2018-09-21 12:05:15Z [42]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:15Z [overcloud.ObjectStorageServiceChain.FastForwardPostUpgradeTasks]: CREATE_COMPLETE state changed
2018-09-21 12:05:15Z [141]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:15Z [17]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:16Z [17]: CREATE_COMPLETE state changed
2018-09-21 12:05:16Z [41]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:16Z [147]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:16Z [41]: CREATE_COMPLETE state changed
2018-09-21 12:05:18Z [61]: CREATE_COMPLETE state changed
2018-09-21 12:05:18Z [42]: CREATE_COMPLETE state changed
2018-09-21 12:05:18Z [overcloud.ObjectStorageServiceChain]: CREATE_COMPLETE Stack CREATE completed successfully
2018-09-21 12:05:18Z [12]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:18Z [9]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:18Z [18]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:18Z [26]: CREATE_COMPLETE state changed
2018-09-21 12:05:18Z [2]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:19Z [18]: CREATE_COMPLETE state changed
2018-09-21 12:05:19Z [overcloud.CephStorageServiceNames]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:19Z [overcloud.CephStorageServiceNames]: CREATE_COMPLETE state changed
2018-09-21 12:05:19Z [21]: CREATE_COMPLETE state changed
2018-09-21 12:05:20Z [71]: CREATE_COMPLETE state changed
2018-09-21 12:05:20Z [15]: CREATE_COMPLETE state changed
2018-09-21 12:05:20Z [9]: CREATE_COMPLETE state changed
2018-09-21 12:05:20Z [31]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:20Z [31]: CREATE_COMPLETE state changed
2018-09-21 12:05:20Z [12]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:20Z [12]: CREATE_COMPLETE state changed
2018-09-21 12:05:21Z [overcloud.ObjectStorageServiceChain]: CREATE_COMPLETE state changed
2018-09-21 12:05:21Z [45]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:21Z [51]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:21Z [117]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:21Z [10]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:21Z [147]: CREATE_COMPLETE state changed
2018-09-21 12:05:21Z [43]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:21Z [43]: CREATE_COMPLETE state changed
2018-09-21 12:05:22Z [117]: CREATE_COMPLETE state changed
2018-09-21 12:05:22Z [45]: CREATE_COMPLETE state changed
2018-09-21 12:05:22Z [140]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:22Z [17]: CREATE_COMPLETE state changed
2018-09-21 12:05:24Z [108]: CREATE_COMPLETE state changed
2018-09-21 12:05:24Z [81]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:24Z [81]: CREATE_COMPLETE state changed
2018-09-21 12:05:24Z [29]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:24Z [29]: CREATE_COMPLETE state changed
2018-09-21 12:05:25Z [145]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:25Z [8]: CREATE_COMPLETE state changed
2018-09-21 12:05:25Z [overcloud.ObjectStorageServiceChainRoleData]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:25Z [19]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:25Z [overcloud.ObjectStorageServiceChainRoleData]: CREATE_COMPLETE state changed
2018-09-21 12:05:25Z [33]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:25Z [145]: CREATE_COMPLETE state changed
2018-09-21 12:05:26Z [33]: CREATE_COMPLETE state changed
2018-09-21 12:05:26Z [overcloud.ObjectStorageServiceNames]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:26Z [12]: CREATE_COMPLETE state changed
2018-09-21 12:05:26Z [102]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:27Z [18]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:27Z [140]: CREATE_COMPLETE state changed
2018-09-21 12:05:27Z [18]: CREATE_COMPLETE state changed
2018-09-21 12:05:28Z [46]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:28Z [overcloud.ObjectStorageServiceNames]: CREATE_COMPLETE state changed
2018-09-21 12:05:28Z [4]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:28Z [19]: CREATE_COMPLETE state changed
2018-09-21 12:05:29Z [122]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:29Z [122]: CREATE_COMPLETE state changed
2018-09-21 12:05:32Z [4]: CREATE_COMPLETE state changed
2018-09-21 12:05:32Z [77]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:32Z [144]: CREATE_COMPLETE state changed
2018-09-21 12:05:33Z [30]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:33Z [2]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:33Z [30]: CREATE_COMPLETE state changed
2018-09-21 12:05:33Z [18]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:34Z [77]: CREATE_COMPLETE state changed
2018-09-21 12:05:34Z [46]: CREATE_COMPLETE state changed
2018-09-21 12:05:35Z [36]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:38Z [27]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:38Z [18]: CREATE_COMPLETE state changed
2018-09-21 12:05:38Z [3]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:38Z [52]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:38Z [26]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:39Z [83]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:39Z [83]: CREATE_COMPLETE state changed
2018-09-21 12:05:39Z [2]: CREATE_COMPLETE state changed
2018-09-21 12:05:41Z [51]: CREATE_COMPLETE state changed
2018-09-21 12:05:41Z [24]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:41Z [10]: CREATE_COMPLETE state changed
2018-09-21 12:05:41Z [52]: CREATE_COMPLETE state changed
2018-09-21 12:05:42Z [112]: CREATE_COMPLETE state changed
2018-09-21 12:05:42Z [5]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:43Z [34]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:43Z [34]: CREATE_COMPLETE state changed
2018-09-21 12:05:44Z [35]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:44Z [5]: CREATE_COMPLETE state changed
2018-09-21 12:05:45Z [21]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:45Z [86]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:46Z [7]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:46Z [129]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:46Z [102]: CREATE_COMPLETE state changed
2018-09-21 12:05:47Z [132]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:47Z [35]: CREATE_COMPLETE state changed
2018-09-21 12:05:47Z [16]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:47Z [86]: CREATE_COMPLETE state changed
2018-09-21 12:05:47Z [23]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:48Z [65]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:48Z [27]: CREATE_COMPLETE state changed
2018-09-21 12:05:48Z [44]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:48Z [65]: CREATE_COMPLETE state changed
2018-09-21 12:05:49Z [32]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:49Z [0]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:49Z [0]: CREATE_COMPLETE state changed
2018-09-21 12:05:50Z [9]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:50Z [132]: CREATE_COMPLETE state changed
2018-09-21 12:05:50Z [3]: CREATE_COMPLETE state changed
2018-09-21 12:05:50Z [141]: CREATE_COMPLETE state changed
2018-09-21 12:05:51Z [17]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:52Z [20]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:52Z [9]: CREATE_COMPLETE state changed
2018-09-21 12:05:52Z [20]: CREATE_COMPLETE state changed
2018-09-21 12:05:52Z [2]: CREATE_COMPLETE state changed
2018-09-21 12:05:52Z [17]: CREATE_COMPLETE state changed
2018-09-21 12:05:53Z [44]: CREATE_COMPLETE state changed
2018-09-21 12:05:53Z [22]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:53Z [overcloud.BlockStorageServiceChain.ServiceChain]: CREATE_COMPLETE Stack CREATE completed successfully
2018-09-21 12:05:53Z [22]: CREATE_COMPLETE state changed
2018-09-21 12:05:53Z [153]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:53Z [10]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:53Z [10]: CREATE_COMPLETE state changed
2018-09-21 12:05:53Z [64]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:54Z [32]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:54Z [64]: CREATE_COMPLETE state changed
2018-09-21 12:05:54Z [32]: CREATE_COMPLETE state changed
2018-09-21 12:05:54Z [overcloud.BlockStorageServiceChain.ServiceChain]: CREATE_COMPLETE state changed
2018-09-21 12:05:55Z [93]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:56Z [72]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:56Z [72]: CREATE_COMPLETE state changed
2018-09-21 12:05:57Z [93]: CREATE_COMPLETE state changed
2018-09-21 12:05:58Z [36]: CREATE_COMPLETE state changed
2018-09-21 12:05:58Z [104]: CREATE_IN_PROGRESS state changed
2018-09-21 12:05:58Z [153]: CREATE_COMPLETE state changed
2018-09-21 12:06:00Z [49]: CREATE_IN_PROGRESS state changed
2018-09-21 12:06:00Z [136]: CREATE_IN_PROGRESS state changed
2018-09-21 12:06:01Z [136]: CREATE_COMPLETE state changed
2018-09-21 12:06:02Z [104]: CREATE_COMPLETE state changed
2018-09-21 12:06:02Z [76]: CREATE_IN_PROGRESS state changed
2018-09-21 12:06:02Z [76]: CREATE_COMPLETE state changed
2018-09-21 12:06:02Z [26]: CREATE_COMPLETE state changed
2018-09-21 12:06:04Z [49]: CREATE_COMPLETE state changed
2018-09-21 12:06:04Z [24]: CREATE_IN_PROGRESS state changed
2018-09-21 12:06:04Z [60]: CREATE_IN_PROGRESS state changed
2018-09-21 12:06:04Z [24]: CREATE_COMPLETE state changed
2018-09-21 12:06:05Z [107]: CREATE_IN_PROGRESS state changed
2018-09-21 12:06:05Z [overcloud.BlockStorageServiceChain.PreUpgradeRollingTasks]: CREATE_IN_PROGRESS state changed
2018-09-21 12:06:06Z [overcloud.BlockStorageServiceChain.PreUpgradeRollingTasks]: CREATE_COMPLETE state changed
>2018-09-21 12:06:06Z [overcloud.BlockStorageServiceChain.PuppetStepConfig]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:06Z [overcloud.BlockStorageServiceChain.PuppetStepConfig]: CREATE_COMPLETE state changed
>2018-09-21 12:06:06Z [40]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:06Z [overcloud.BlockStorageServiceChain.PostUpdateTasks]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:06Z [overcloud.BlockStorageServiceChain.PostUpdateTasks]: CREATE_COMPLETE state changed
>2018-09-21 12:06:06Z [40]: CREATE_COMPLETE state changed
>2018-09-21 12:06:07Z [129]: CREATE_COMPLETE state changed
>2018-09-21 12:06:07Z [109]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:07Z [overcloud.BlockStorageServiceChain.ServiceServerMetadataHook]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:07Z [overcloud.BlockStorageServiceChain.ServiceServerMetadataHook]: CREATE_COMPLETE state changed
>2018-09-21 12:06:07Z [overcloud.BlockStorageServiceChain.DockerConfig]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:08Z [overcloud.BlockStorageServiceChain.UpgradeBatchTasks]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:08Z [overcloud.BlockStorageServiceChain.LoggingSourcesConfig]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:08Z [overcloud.BlockStorageServiceChain.DeployStepsTasks]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:08Z [overcloud.BlockStorageServiceChain.LoggingSourcesConfig]: CREATE_COMPLETE state changed
>2018-09-21 12:06:08Z [overcloud.BlockStorageServiceChain.UpgradeBatchTasks]: CREATE_COMPLETE state changed
>2018-09-21 12:06:08Z [overcloud.BlockStorageServiceChain.DeployStepsTasks]: CREATE_COMPLETE state changed
>2018-09-21 12:06:08Z [16]: CREATE_COMPLETE state changed
>2018-09-21 12:06:08Z [overcloud.BlockStorageServiceChain.GlobalConfigSettings]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:09Z [125]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:09Z [overcloud.BlockStorageServiceChain.CellV2Discovery]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:09Z [overcloud.BlockStorageServiceChain.LoggingGroupsConfig]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:09Z [overcloud.BlockStorageServiceChain.GlobalConfigSettings]: CREATE_COMPLETE state changed
>2018-09-21 12:06:09Z [overcloud.BlockStorageServiceChain.DockerConfig]: CREATE_COMPLETE state changed
>2018-09-21 12:06:09Z [overcloud.BlockStorageServiceChain.ServiceConfigSettings]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:10Z [43]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:10Z [overcloud.BlockStorageServiceChain.FastForwardUpgradeTasks]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:10Z [overcloud.BlockStorageServiceChain.ExternalUpdateTasks]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:10Z [43]: CREATE_COMPLETE state changed
>2018-09-21 12:06:10Z [overcloud.BlockStorageServiceChain.ExternalUpdateTasks]: CREATE_COMPLETE state changed
>2018-09-21 12:06:10Z [overcloud.BlockStorageServiceChain.FastForwardUpgradeTasks]: CREATE_COMPLETE state changed
>2018-09-21 12:06:11Z [overcloud.BlockStorageServiceChain.ServiceConfigSettings]: CREATE_COMPLETE state changed
>2018-09-21 12:06:11Z [110]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:11Z [125]: CREATE_COMPLETE state changed
>2018-09-21 12:06:11Z [7]: CREATE_COMPLETE state changed
>2018-09-21 12:06:11Z [overcloud.BlockStorageServiceChain.LoggingGroupsConfig]: CREATE_COMPLETE state changed
>2018-09-21 12:06:11Z [overcloud.BlockStorageServiceChain.CellV2Discovery]: CREATE_COMPLETE state changed
>2018-09-21 12:06:11Z [110]: CREATE_COMPLETE state changed
>2018-09-21 12:06:11Z [overcloud.BlockStorageServiceChain.PuppetConfig]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:11Z [overcloud.BlockStorageServiceChain.PuppetConfig]: CREATE_COMPLETE state changed
>2018-09-21 12:06:11Z [overcloud.BlockStorageServiceChain.MonitoringSubscriptionsConfig]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:11Z [overcloud.BlockStorageServiceChain.DockerConfigScripts]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:12Z [overcloud.BlockStorageServiceChain.FastForwardPostUpgradeTasks]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:12Z [overcloud.BlockStorageServiceChain.KollaConfig]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:12Z [overcloud.BlockStorageServiceChain.DockerConfigScripts]: CREATE_COMPLETE state changed
>2018-09-21 12:06:12Z [overcloud.BlockStorageServiceChain.MonitoringSubscriptionsConfig]: CREATE_COMPLETE state changed
>2018-09-21 12:06:12Z [24]: CREATE_COMPLETE state changed
>2018-09-21 12:06:12Z [overcloud.BlockStorageServiceChain.UpgradeTasks]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:13Z [overcloud.BlockStorageServiceChain.UpgradeTasks]: CREATE_COMPLETE state changed
>2018-09-21 12:06:13Z [overcloud.BlockStorageServiceChain.DockerPuppetTasks]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:13Z [overcloud.BlockStorageServiceChain.DockerPuppetTasks]: CREATE_COMPLETE state changed
>2018-09-21 12:06:13Z [overcloud.BlockStorageServiceChain.HostPrepTasks]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:13Z [overcloud.BlockStorageServiceChain.FastForwardPostUpgradeTasks]: CREATE_COMPLETE state changed
>2018-09-21 12:06:13Z [overcloud.BlockStorageServiceChain.KollaConfig]: CREATE_COMPLETE state changed
>2018-09-21 12:06:13Z [overcloud.BlockStorageServiceChain.HostPrepTasks]: CREATE_COMPLETE state changed
>2018-09-21 12:06:13Z [overcloud.BlockStorageServiceChain.ExternalPostDeployTasks]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:13Z [overcloud.BlockStorageServiceChain.ExternalPostDeployTasks]: CREATE_COMPLETE state changed
>2018-09-21 12:06:13Z [overcloud.BlockStorageServiceChain.PostUpgradeTasks]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:13Z [overcloud.BlockStorageServiceChain.PostUpgradeTasks]: CREATE_COMPLETE state changed
>2018-09-21 12:06:14Z [overcloud.BlockStorageServiceChain.ServiceNames]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:14Z [30]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:14Z [overcloud.BlockStorageServiceChain.UpdateTasks]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:14Z [overcloud.BlockStorageServiceChain.WorkflowTasks]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:14Z [overcloud.BlockStorageServiceChain.ServiceNames]: CREATE_COMPLETE state changed
>2018-09-21 12:06:14Z [overcloud.BlockStorageServiceChain.ExternalUpgradeTasks]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:14Z [overcloud.BlockStorageServiceChain.ExternalDeployTasks]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:14Z [overcloud.BlockStorageServiceChain.ExternalDeployTasks]: CREATE_COMPLETE state changed
>2018-09-21 12:06:15Z [152]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:16Z [152]: CREATE_COMPLETE state changed
>2018-09-21 12:06:16Z [23]: CREATE_COMPLETE state changed
>2018-09-21 12:06:16Z [overcloud.ComputeServiceChain.ServiceChain]: CREATE_COMPLETE Stack CREATE completed successfully
>2018-09-21 12:06:16Z [overcloud.BlockStorageServiceChain.UpdateTasks]: CREATE_COMPLETE state changed
>2018-09-21 12:06:16Z [30]: CREATE_COMPLETE state changed
>2018-09-21 12:06:16Z [overcloud.BlockStorageServiceChain.WorkflowTasks]: CREATE_COMPLETE state changed
>2018-09-21 12:06:16Z [32]: CREATE_COMPLETE state changed
>2018-09-21 12:06:16Z [overcloud.BlockStorageServiceChain.ExternalUpgradeTasks]: CREATE_COMPLETE state changed
>2018-09-21 12:06:17Z [21]: CREATE_COMPLETE state changed
>2018-09-21 12:06:17Z [overcloud.BlockStorageServiceChain]: CREATE_COMPLETE Stack CREATE completed successfully
>2018-09-21 12:06:17Z [92]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:17Z [overcloud.ComputeServiceChain.ServiceChain]: CREATE_COMPLETE state changed
>2018-09-21 12:06:17Z [92]: CREATE_COMPLETE state changed
>2018-09-21 12:06:18Z [overcloud.BlockStorageServiceChain]: CREATE_COMPLETE state changed
>2018-09-21 12:06:19Z [25]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:19Z [25]: CREATE_COMPLETE state changed
>2018-09-21 12:06:19Z [overcloud.BlockStorageServiceChainRoleData]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:20Z [overcloud.BlockStorageServiceChainRoleData]: CREATE_COMPLETE state changed
>2018-09-21 12:06:20Z [120]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:20Z [overcloud.BlockStorageServiceNames]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:20Z [overcloud.BlockStorageServiceNames]: CREATE_COMPLETE state changed
>2018-09-21 12:06:20Z [120]: CREATE_COMPLETE state changed
>2018-09-21 12:06:20Z [60]: CREATE_COMPLETE state changed
>2018-09-21 12:06:21Z [128]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:22Z [135]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:22Z [107]: CREATE_COMPLETE state changed
>2018-09-21 12:06:22Z [135]: CREATE_COMPLETE state changed
>2018-09-21 12:06:23Z [15]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:23Z [15]: CREATE_COMPLETE state changed
>2018-09-21 12:06:24Z [89]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:25Z [109]: CREATE_COMPLETE state changed
>2018-09-21 12:06:26Z [126]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:27Z [29]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:27Z [29]: CREATE_COMPLETE state changed
>2018-09-21 12:06:27Z [128]: CREATE_COMPLETE state changed
>2018-09-21 12:06:28Z [126]: CREATE_COMPLETE state changed
>2018-09-21 12:06:28Z [41]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:28Z [41]: CREATE_COMPLETE state changed
>2018-09-21 12:06:29Z [119]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:29Z [119]: CREATE_COMPLETE state changed
>2018-09-21 12:06:30Z [105]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:31Z [39]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:32Z [146]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:32Z [89]: CREATE_COMPLETE state changed
>2018-09-21 12:06:33Z [27]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:33Z [27]: CREATE_COMPLETE state changed
>2018-09-21 12:06:34Z [39]: CREATE_COMPLETE state changed
>2018-09-21 12:06:34Z [103]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:34Z [103]: CREATE_COMPLETE state changed
>2018-09-21 12:06:35Z [121]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:35Z [146]: CREATE_COMPLETE state changed
>2018-09-21 12:06:35Z [121]: CREATE_COMPLETE state changed
>2018-09-21 12:06:36Z [53]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:37Z [78]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:37Z [78]: CREATE_COMPLETE state changed
>2018-09-21 12:06:38Z [142]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:39Z [142]: CREATE_COMPLETE state changed
>2018-09-21 12:06:40Z [88]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:41Z [57]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:42Z [88]: CREATE_COMPLETE state changed
>2018-09-21 12:06:43Z [75]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:43Z [75]: CREATE_COMPLETE state changed
>2018-09-21 12:06:44Z [114]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:45Z [133]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:47Z [87]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:47Z [105]: CREATE_COMPLETE state changed
>2018-09-21 12:06:47Z [63]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:47Z [63]: CREATE_COMPLETE state changed
>2018-09-21 12:06:47Z [87]: CREATE_COMPLETE state changed
>2018-09-21 12:06:49Z [73]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:49Z [53]: CREATE_COMPLETE state changed
>2018-09-21 12:06:49Z [73]: CREATE_COMPLETE state changed
>2018-09-21 12:06:50Z [overcloud.ComputeServiceChain.PuppetConfig]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:50Z [overcloud.ComputeServiceChain.HostPrepTasks]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:50Z [overcloud.ComputeServiceChain.HostPrepTasks]: CREATE_COMPLETE state changed
>2018-09-21 12:06:50Z [overcloud.ComputeServiceChain.PuppetConfig]: CREATE_COMPLETE state changed
>2018-09-21 12:06:50Z [overcloud.ComputeServiceChain.UpgradeBatchTasks]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:50Z [overcloud.ComputeServiceChain.UpgradeBatchTasks]: CREATE_COMPLETE state changed
>2018-09-21 12:06:50Z [overcloud.ComputeServiceChain.UpdateTasks]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:51Z [139]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:51Z [overcloud.ComputeServiceChain.ExternalDeployTasks]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:51Z [overcloud.ComputeServiceChain.ExternalPostDeployTasks]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:51Z [overcloud.ComputeServiceChain.UpdateTasks]: CREATE_COMPLETE state changed
>2018-09-21 12:06:51Z [overcloud.ComputeServiceChain.FastForwardUpgradeTasks]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:51Z [overcloud.ComputeServiceChain.CellV2Discovery]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:51Z [overcloud.ComputeServiceChain.ExternalPostDeployTasks]: CREATE_COMPLETE state changed
>2018-09-21 12:06:51Z [overcloud.ComputeServiceChain.DockerPuppetTasks]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:51Z [overcloud.ComputeServiceChain.ExternalDeployTasks]: CREATE_COMPLETE state changed
>2018-09-21 12:06:51Z [overcloud.ComputeServiceChain.CellV2Discovery]: CREATE_COMPLETE state changed
>2018-09-21 12:06:51Z [overcloud.ComputeServiceChain.GlobalConfigSettings]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:51Z [overcloud.ComputeServiceChain.DockerPuppetTasks]: CREATE_COMPLETE state changed
>2018-09-21 12:06:52Z [overcloud.ComputeServiceChain.DockerConfigScripts]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:52Z [overcloud.ComputeServiceChain.DockerConfigScripts]: CREATE_COMPLETE state changed
>2018-09-21 12:06:52Z [overcloud.ComputeServiceChain.FastForwardUpgradeTasks]: CREATE_COMPLETE state changed
>2018-09-21 12:06:52Z [overcloud.ComputeServiceChain.ServiceConfigSettings]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:52Z [111]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:52Z [overcloud.ComputeServiceChain.DeployStepsTasks]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:52Z [overcloud.ComputeServiceChain.FastForwardPostUpgradeTasks]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:52Z [overcloud.ComputeServiceChain.LoggingGroupsConfig]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:52Z [overcloud.ComputeServiceChain.ServiceNames]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:53Z [overcloud.ComputeServiceChain.FastForwardPostUpgradeTasks]: CREATE_COMPLETE state changed
>2018-09-21 12:06:53Z [overcloud.ComputeServiceChain.ServiceNames]: CREATE_COMPLETE state changed
>2018-09-21 12:06:53Z [overcloud.ComputeServiceChain.DeployStepsTasks]: CREATE_COMPLETE state changed
>2018-09-21 12:06:53Z [overcloud.ComputeServiceChain.ServiceConfigSettings]: CREATE_COMPLETE state changed
>2018-09-21 12:06:53Z [overcloud.ComputeServiceChain.LoggingGroupsConfig]: CREATE_COMPLETE state changed
>2018-09-21 12:06:53Z [overcloud.ComputeServiceChain.GlobalConfigSettings]: CREATE_COMPLETE state changed
>2018-09-21 12:06:53Z [66]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:53Z [overcloud.ComputeServiceChain.MonitoringSubscriptionsConfig]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:54Z [overcloud.ComputeServiceChain.MonitoringSubscriptionsConfig]: CREATE_COMPLETE state changed
>2018-09-21 12:06:54Z [overcloud.ComputeServiceChain.PostUpdateTasks]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:54Z [overcloud.ComputeServiceChain.LoggingSourcesConfig]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:54Z [66]: CREATE_COMPLETE state changed
>2018-09-21 12:06:55Z [131]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:55Z [overcloud.ComputeServiceChain.DockerConfig]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:55Z [131]: CREATE_COMPLETE state changed
>2018-09-21 12:06:55Z [overcloud.ComputeServiceChain.PostUpdateTasks]: CREATE_COMPLETE state changed
>2018-09-21 12:06:55Z [overcloud.ComputeServiceChain.DockerConfig]: CREATE_COMPLETE state changed
>2018-09-21 12:06:55Z [overcloud.ComputeServiceChain.LoggingSourcesConfig]: CREATE_COMPLETE state changed
>2018-09-21 12:06:55Z [overcloud.ComputeServiceChain.KollaConfig]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:55Z [overcloud.ComputeServiceChain.ExternalUpgradeTasks]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:55Z [overcloud.ComputeServiceChain.KollaConfig]: CREATE_COMPLETE state changed
>2018-09-21 12:06:55Z [overcloud.ComputeServiceChain.PuppetStepConfig]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:56Z [overcloud.ComputeServiceChain.PuppetStepConfig]: CREATE_COMPLETE state changed
>2018-09-21 12:06:56Z [overcloud.ComputeServiceChain.ExternalUpdateTasks]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:56Z [overcloud.ComputeServiceChain.ExternalUpdateTasks]: CREATE_COMPLETE state changed
>2018-09-21 12:06:56Z [overcloud.ComputeServiceChain.UpgradeTasks]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:56Z [overcloud.ComputeServiceChain.ServiceServerMetadataHook]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:56Z [113]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:56Z [overcloud.ComputeServiceChain.WorkflowTasks]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:56Z [overcloud.ComputeServiceChain.PreUpgradeRollingTasks]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:56Z [overcloud.ComputeServiceChain.UpgradeTasks]: CREATE_COMPLETE state changed
>2018-09-21 12:06:56Z [57]: CREATE_COMPLETE state changed
>2018-09-21 12:06:56Z [overcloud.ComputeServiceChain.PreUpgradeRollingTasks]: CREATE_COMPLETE state changed
>2018-09-21 12:06:56Z [overcloud.ComputeServiceChain.WorkflowTasks]: CREATE_COMPLETE state changed
>2018-09-21 12:06:56Z [overcloud.ComputeServiceChain.ServiceServerMetadataHook]: CREATE_COMPLETE state changed
>2018-09-21 12:06:56Z [139]: CREATE_COMPLETE state changed
>2018-09-21 12:06:56Z [overcloud.ComputeServiceChain.ExternalUpgradeTasks]: CREATE_COMPLETE state changed
>2018-09-21 12:06:56Z [overcloud.ComputeServiceChain.PostUpgradeTasks]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:57Z [overcloud.ComputeServiceChain.PostUpgradeTasks]: CREATE_COMPLETE state changed
>2018-09-21 12:06:58Z [91]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:06:58Z [133]: CREATE_COMPLETE state changed
>2018-09-21 12:06:59Z [114]: CREATE_COMPLETE state changed
>2018-09-21 12:06:59Z [55]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:07:00Z [overcloud.ComputeServiceChain]: CREATE_COMPLETE Stack CREATE completed successfully
>2018-09-21 12:07:00Z [116]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:07:01Z [96]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:07:01Z [96]: CREATE_COMPLETE state changed
>2018-09-21 12:07:01Z [overcloud.ComputeServiceChain]: CREATE_COMPLETE state changed
>2018-09-21 12:07:02Z [143]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:07:03Z [overcloud.ComputeServiceChainRoleData]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:07:03Z [overcloud.ComputeServiceChainRoleData]: CREATE_COMPLETE state changed
>2018-09-21 12:07:03Z [37]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:07:04Z [111]: CREATE_COMPLETE state changed
>2018-09-21 12:07:05Z [overcloud.ComputeServiceNames]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:07:05Z [94]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:07:05Z [overcloud.ComputeServiceNames]: CREATE_COMPLETE state changed
>2018-09-21 12:07:05Z [37]: CREATE_COMPLETE state changed
>2018-09-21 12:07:05Z [116]: CREATE_COMPLETE state changed
>2018-09-21 12:07:06Z [47]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:07:06Z [47]: CREATE_COMPLETE state changed
>2018-09-21 12:07:08Z [42]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:07:08Z [42]: CREATE_COMPLETE state changed
>2018-09-21 12:07:10Z [8]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:07:10Z [8]: CREATE_COMPLETE state changed
>2018-09-21 12:07:11Z [80]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:07:11Z [80]: CREATE_COMPLETE state changed
>2018-09-21 12:07:13Z [143]: CREATE_COMPLETE state changed
>2018-09-21 12:07:13Z [59]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:07:14Z [54]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:07:15Z [7]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:07:15Z [94]: CREATE_COMPLETE state changed
>2018-09-21 12:07:15Z [55]: CREATE_COMPLETE state changed
>2018-09-21 12:07:15Z [7]: CREATE_COMPLETE state changed
>2018-09-21 12:07:17Z [20]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:07:17Z [124]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:07:17Z [124]: CREATE_COMPLETE state changed
>2018-09-21 12:07:18Z [91]: CREATE_COMPLETE state changed
>2018-09-21 12:07:19Z [2]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:07:20Z [113]: CREATE_COMPLETE state changed
>2018-09-21 12:07:20Z [97]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:07:21Z [97]: CREATE_COMPLETE state changed
>2018-09-21 12:07:21Z [56]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:07:21Z [20]: CREATE_COMPLETE state changed
>2018-09-21 12:07:22Z [58]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:07:24Z [90]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:07:25Z [70]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:07:26Z [85]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:07:26Z [85]: CREATE_COMPLETE state changed
>2018-09-21 12:07:27Z [58]: CREATE_COMPLETE state changed
>2018-09-21 12:07:27Z [90]: CREATE_COMPLETE state changed
>2018-09-21 12:07:27Z [98]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:07:27Z [54]: CREATE_COMPLETE state changed
>2018-09-21 12:07:28Z [79]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:07:28Z [59]: CREATE_COMPLETE state changed
>2018-09-21 12:07:28Z [70]: CREATE_COMPLETE state changed
>2018-09-21 12:07:29Z [79]: CREATE_COMPLETE state changed
>2018-09-21 12:07:29Z [50]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:07:31Z [69]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:07:32Z [84]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:07:32Z [50]: CREATE_COMPLETE state changed
>2018-09-21 12:07:32Z [69]: CREATE_COMPLETE state changed
>2018-09-21 12:07:33Z [2]: CREATE_COMPLETE state changed
>2018-09-21 12:07:33Z [130]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:07:34Z [84]: CREATE_COMPLETE state changed
>2018-09-21 12:07:35Z [74]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:07:35Z [74]: CREATE_COMPLETE state changed
>2018-09-21 12:07:36Z [67]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:07:36Z [67]: CREATE_COMPLETE state changed
>2018-09-21 12:07:36Z [56]: CREATE_COMPLETE state changed
>2018-09-21 12:07:37Z [106]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:07:37Z [106]: CREATE_COMPLETE state changed
>2018-09-21 12:07:38Z [35]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:07:39Z [31]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:07:39Z [98]: CREATE_COMPLETE state changed
>2018-09-21 12:07:39Z [31]: CREATE_COMPLETE state changed
>2018-09-21 12:07:41Z [137]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:07:41Z [137]: CREATE_COMPLETE state changed
>2018-09-21 12:07:41Z [130]: CREATE_COMPLETE state changed
>2018-09-21 12:07:42Z [134]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:07:44Z [1]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:07:44Z [5]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:07:44Z [5]: CREATE_COMPLETE state changed
>2018-09-21 12:07:45Z [138]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:07:45Z [138]: CREATE_COMPLETE state changed
>2018-09-21 12:07:46Z [127]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:07:47Z [12]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:07:49Z [26]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:07:51Z [38]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:07:51Z [101]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:07:52Z [26]: CREATE_COMPLETE state changed
>2018-09-21 12:07:52Z [38]: CREATE_COMPLETE state changed
>2018-09-21 12:07:53Z [149]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:07:53Z [101]: CREATE_COMPLETE state changed
>2018-09-21 12:07:54Z [134]: CREATE_COMPLETE state changed
>2018-09-21 12:07:54Z [149]: CREATE_COMPLETE state changed
>2018-09-21 12:07:55Z [19]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:07:55Z [19]: CREATE_COMPLETE state changed
>2018-09-21 12:07:56Z [123]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:07:56Z [123]: CREATE_COMPLETE state changed
>2018-09-21 12:07:57Z [28]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:07:57Z [35]: CREATE_COMPLETE state changed
>2018-09-21 12:07:57Z [28]: CREATE_COMPLETE state changed
>2018-09-21 12:07:58Z [118]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:07:58Z [118]: CREATE_COMPLETE state changed
>2018-09-21 12:07:58Z [12]: CREATE_COMPLETE state changed
>2018-09-21 12:07:59Z [99]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:07:59Z [99]: CREATE_COMPLETE state changed
>2018-09-21 12:08:00Z [1]: CREATE_COMPLETE state changed
>2018-09-21 12:08:01Z [48]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:08:01Z [48]: CREATE_COMPLETE state changed
>2018-09-21 12:08:01Z [62]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:08:01Z [62]: CREATE_COMPLETE state changed
>2018-09-21 12:08:03Z [95]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:08:03Z [22]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:08:04Z [22]: CREATE_COMPLETE state changed
>2018-09-21 12:08:05Z [34]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:08:06Z [0]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:08:06Z [127]: CREATE_COMPLETE state changed
>2018-09-21 12:08:06Z [0]: CREATE_COMPLETE state changed
>2018-09-21 12:08:07Z [3]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:08:08Z [148]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:08:09Z [68]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:08:11Z [4]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:08:12Z [148]: CREATE_COMPLETE state changed
>2018-09-21 12:08:12Z [95]: CREATE_COMPLETE state changed
>2018-09-21 12:08:13Z [6]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:08:13Z [6]: CREATE_COMPLETE state changed
>2018-09-21 12:08:14Z [13]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:08:14Z [34]: CREATE_COMPLETE state changed
>2018-09-21 12:08:15Z [82]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:08:15Z [68]: CREATE_COMPLETE state changed
>2018-09-21 12:08:16Z [23]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:08:17Z [23]: CREATE_COMPLETE state changed
>2018-09-21 12:08:17Z [33]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:08:17Z [33]: CREATE_COMPLETE state changed
>2018-09-21 12:08:19Z [3]: CREATE_COMPLETE state changed
>2018-09-21 12:08:19Z [82]: CREATE_COMPLETE state changed
>2018-09-21 12:08:22Z [4]: CREATE_COMPLETE state changed
>2018-09-21 12:08:22Z [13]: CREATE_COMPLETE state changed
>2018-09-21 12:08:22Z [overcloud.ControllerServiceChain.ServiceChain]: CREATE_COMPLETE Stack CREATE completed successfully
>2018-09-21 12:08:23Z [overcloud.ControllerServiceChain.ServiceChain]: CREATE_COMPLETE state changed
>2018-09-21 12:08:50Z [overcloud.ControllerServiceChain.ExternalUpgradeTasks]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:08:50Z [overcloud.ControllerServiceChain.ExternalUpgradeTasks]: CREATE_COMPLETE state changed
>2018-09-21 12:08:51Z [overcloud.ControllerServiceChain.DeployStepsTasks]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:08:51Z [overcloud.ControllerServiceChain.DeployStepsTasks]: CREATE_COMPLETE state changed
>2018-09-21 12:08:51Z [overcloud.ControllerServiceChain.DockerPuppetTasks]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:08:51Z [overcloud.ControllerServiceChain.DockerPuppetTasks]: CREATE_COMPLETE state changed
>2018-09-21 12:08:51Z [overcloud.ControllerServiceChain.PuppetConfig]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:08:51Z [overcloud.ControllerServiceChain.FastForwardUpgradeTasks]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:08:51Z [overcloud.ControllerServiceChain.PuppetConfig]: CREATE_COMPLETE state changed
>2018-09-21 12:08:52Z [overcloud.ControllerServiceChain.LoggingSourcesConfig]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:08:52Z [overcloud.ControllerServiceChain.LoggingSourcesConfig]: CREATE_COMPLETE state changed
>2018-09-21 12:08:52Z [overcloud.ControllerServiceChain.FastForwardUpgradeTasks]: CREATE_COMPLETE state changed
>2018-09-21 12:08:52Z [overcloud.ControllerServiceChain.UpgradeTasks]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:08:52Z [overcloud.ControllerServiceChain.ExternalDeployTasks]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:08:52Z [overcloud.ControllerServiceChain.UpgradeTasks]: CREATE_COMPLETE state changed
>2018-09-21 12:08:52Z [overcloud.ControllerServiceChain.ServiceConfigSettings]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:08:53Z [overcloud.ControllerServiceChain.ExternalDeployTasks]: CREATE_COMPLETE state changed
>2018-09-21 12:08:53Z [overcloud.ControllerServiceChain.ServiceConfigSettings]: CREATE_COMPLETE state changed
>2018-09-21 12:08:53Z [overcloud.ControllerServiceChain.LoggingGroupsConfig]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:08:53Z [overcloud.ControllerServiceChain.LoggingGroupsConfig]: CREATE_COMPLETE state changed
>2018-09-21 12:08:53Z [overcloud.ControllerServiceChain.MonitoringSubscriptionsConfig]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:08:53Z [overcloud.ControllerServiceChain.MonitoringSubscriptionsConfig]: CREATE_COMPLETE state changed
>2018-09-21 12:08:53Z [overcloud.ControllerServiceChain.WorkflowTasks]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:08:54Z [overcloud.ControllerServiceChain.PostUpgradeTasks]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:08:54Z [overcloud.ControllerServiceChain.WorkflowTasks]: CREATE_COMPLETE state changed
>2018-09-21 12:08:54Z [overcloud.ControllerServiceChain.PostUpgradeTasks]: CREATE_COMPLETE state changed
>2018-09-21 12:08:54Z [overcloud.ControllerServiceChain.ExternalPostDeployTasks]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:08:54Z [overcloud.ControllerServiceChain.ExternalPostDeployTasks]: CREATE_COMPLETE state changed
>2018-09-21 12:08:54Z [overcloud.ControllerServiceChain.DockerConfig]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:08:54Z [overcloud.ControllerServiceChain.GlobalConfigSettings]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:08:54Z [overcloud.ControllerServiceChain.DockerConfig]: CREATE_COMPLETE state changed
>2018-09-21 12:08:55Z [overcloud.ControllerServiceChain.FastForwardPostUpgradeTasks]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:08:55Z [overcloud.ControllerServiceChain.GlobalConfigSettings]: CREATE_COMPLETE state changed
>2018-09-21 12:08:55Z [overcloud.ControllerServiceChain.FastForwardPostUpgradeTasks]: CREATE_COMPLETE state changed
>2018-09-21 12:08:55Z [overcloud.ControllerServiceChain.ServiceNames]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:08:55Z [overcloud.ControllerServiceChain.ServiceNames]: CREATE_COMPLETE state changed
>2018-09-21 12:08:55Z [overcloud.ControllerServiceChain.PostUpdateTasks]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:08:55Z [overcloud.ControllerServiceChain.PostUpdateTasks]: CREATE_COMPLETE state changed
>2018-09-21 12:08:56Z [overcloud.ControllerServiceChain.PuppetStepConfig]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:08:56Z [overcloud.ControllerServiceChain.CellV2Discovery]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:08:56Z [overcloud.ControllerServiceChain.CellV2Discovery]: CREATE_COMPLETE state changed
>2018-09-21 12:08:56Z [overcloud.ControllerServiceChain.PuppetStepConfig]: CREATE_COMPLETE state changed
>2018-09-21 12:08:56Z [overcloud.ControllerServiceChain.UpdateTasks]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:08:56Z [overcloud.ControllerServiceChain.ExternalUpdateTasks]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:08:56Z [overcloud.ControllerServiceChain.UpdateTasks]: CREATE_COMPLETE state changed
>2018-09-21 12:08:56Z [overcloud.ControllerServiceChain.ExternalUpdateTasks]: CREATE_COMPLETE state changed
>2018-09-21 12:08:57Z [overcloud.ControllerServiceChain.PreUpgradeRollingTasks]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:08:57Z [overcloud.ControllerServiceChain.DockerConfigScripts]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:08:57Z [overcloud.ControllerServiceChain.PreUpgradeRollingTasks]: CREATE_COMPLETE state changed
>2018-09-21 12:08:57Z [overcloud.ControllerServiceChain.ServiceServerMetadataHook]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:08:57Z [overcloud.ControllerServiceChain.ServiceServerMetadataHook]: CREATE_COMPLETE state changed
>2018-09-21 12:08:57Z [overcloud.ControllerServiceChain.DockerConfigScripts]: CREATE_COMPLETE state changed
>2018-09-21 12:08:57Z [overcloud.ControllerServiceChain.UpgradeBatchTasks]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:08:57Z [overcloud.ControllerServiceChain.UpgradeBatchTasks]: CREATE_COMPLETE state changed
>2018-09-21 12:08:58Z [overcloud.ControllerServiceChain.KollaConfig]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:08:58Z [overcloud.ControllerServiceChain.KollaConfig]: CREATE_COMPLETE state changed
>2018-09-21 12:08:58Z [overcloud.ControllerServiceChain.HostPrepTasks]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:08:58Z [overcloud.ControllerServiceChain.HostPrepTasks]: CREATE_COMPLETE state changed
>2018-09-21 12:08:58Z [overcloud.ControllerServiceChain]: CREATE_COMPLETE Stack CREATE completed successfully
>2018-09-21 12:08:59Z [overcloud.ControllerServiceChain]: CREATE_COMPLETE state changed
>2018-09-21 12:09:00Z [overcloud.ControllerServiceChainRoleData]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:09:01Z [overcloud.ControllerServiceChainRoleData]: CREATE_COMPLETE state changed
>2018-09-21 12:09:01Z [overcloud.ComputeServiceConfigSettings]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:09:01Z [overcloud.ComputeServiceConfigSettings]: CREATE_COMPLETE state changed
>2018-09-21 12:09:01Z [overcloud.ControllerServiceNames]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:09:01Z [overcloud.ControllerServiceNames]: CREATE_COMPLETE state changed
>2018-09-21 12:09:02Z [overcloud.ComputeMergedConfigSettings]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:09:02Z [overcloud.ComputeMergedConfigSettings]: CREATE_COMPLETE state changed
>2018-09-21 12:09:02Z [overcloud.Compute]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:09:02Z [overcloud.CephStorageServiceConfigSettings]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:09:02Z [overcloud.BlockStorageServiceConfigSettings]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:09:02Z [overcloud.CephStorageServiceConfigSettings]: CREATE_COMPLETE state changed
>2018-09-21 12:09:02Z [overcloud.BlockStorageServiceConfigSettings]: CREATE_COMPLETE state changed
>2018-09-21 12:09:03Z [overcloud.BlockStorageMergedConfigSettings]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:09:03Z [overcloud.BlockStorageMergedConfigSettings]: CREATE_COMPLETE state changed
>2018-09-21 12:09:03Z [overcloud.BlockStorage]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:09:03Z [overcloud.ControllerServiceConfigSettings]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:09:03Z [overcloud.CephStorageMergedConfigSettings]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:09:03Z [overcloud.ObjectStorageServiceConfigSettings]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:09:03Z [overcloud.ControllerServiceConfigSettings]: CREATE_COMPLETE state changed
>2018-09-21 12:09:03Z [overcloud.CephStorageMergedConfigSettings]: CREATE_COMPLETE state changed
>2018-09-21 12:09:03Z [overcloud.ObjectStorageServiceConfigSettings]: CREATE_COMPLETE state changed
>2018-09-21 12:09:03Z [overcloud.CephStorage]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:09:03Z [overcloud.ObjectStorageMergedConfigSettings]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:09:03Z [overcloud.Controller]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:09:03Z [overcloud.ObjectStorageMergedConfigSettings]: CREATE_COMPLETE state changed
>2018-09-21 12:09:03Z [overcloud.ControllerMergedConfigSettings]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:09:03Z [overcloud.ControllerMergedConfigSettings]: CREATE_COMPLETE state changed
>2018-09-21 12:09:04Z [overcloud.ObjectStorage]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:09:04Z [overcloud.CephStorage]: CREATE_IN_PROGRESS Stack CREATE started
>2018-09-21 12:09:04Z [overcloud.CephStorage]: CREATE_COMPLETE Stack CREATE completed successfully
>2018-09-21 12:09:05Z [overcloud.Controller]: CREATE_IN_PROGRESS Stack CREATE started
>2018-09-21 12:09:05Z [overcloud.Controller]: CREATE_COMPLETE Stack CREATE completed successfully
>2018-09-21 12:09:05Z [overcloud.BlockStorage]: CREATE_COMPLETE state changed
>2018-09-21 12:09:05Z [overcloud.ObjectStorage]: CREATE_COMPLETE state changed
>2018-09-21 12:09:06Z [overcloud.Compute]: UPDATE_IN_PROGRESS Stack UPDATE started
>2018-09-21 12:09:06Z [overcloud.CephStorage]: UPDATE_IN_PROGRESS Stack UPDATE started
>2018-09-21 12:09:06Z [overcloud.ObjectStorageNetworkHostnameMap]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:09:07Z [overcloud.ObjectStorageNetworkHostnameMap]: CREATE_COMPLETE state changed
>2018-09-21 12:09:07Z [overcloud.BlockStorageServers]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:09:07Z [overcloud.BlockStorageServers]: CREATE_COMPLETE state changed
>2018-09-21 12:09:07Z [overcloud.ObjectStorageIpListMap]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:09:07Z [overcloud.CephStorage.0]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:09:07Z [overcloud.ObjectStorageServers]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:09:07Z [overcloud.Compute.0]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:09:07Z [overcloud.BlockStorageNetworkHostnameMap]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:09:07Z [overcloud.Controller]: UPDATE_IN_PROGRESS Stack UPDATE started
>2018-09-21 12:09:07Z [overcloud.BlockStorageNetworkHostnameMap]: CREATE_COMPLETE state changed
>2018-09-21 12:09:07Z [overcloud.ObjectStorageServers]: CREATE_COMPLETE state changed
>2018-09-21 12:09:07Z [overcloud.Controller.0]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:09:07Z [overcloud.BlockStorageIpListMap]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:09:08Z [overcloud.ObjectStorageIpListMap]: CREATE_IN_PROGRESS Stack CREATE started
>2018-09-21 12:09:08Z [overcloud.ObjectStorageIpListMap.NetIpMapValue]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:09:08Z [overcloud.ObjectStorageIpListMap.NetIpMapValue]: CREATE_COMPLETE state changed
>2018-09-21 12:09:09Z [overcloud.BlockStorageIpListMap]: CREATE_IN_PROGRESS Stack CREATE started
>2018-09-21 12:09:09Z [overcloud.BlockStorageIpListMap.NetIpMapValue]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:09:09Z [overcloud.BlockStorageIpListMap.NetIpMapValue]: CREATE_COMPLETE state changed
>2018-09-21 12:09:09Z [overcloud.ObjectStorageIpListMap.EnabledServicesValue]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:09:09Z [overcloud.ObjectStorageIpListMap.EnabledServicesValue]: CREATE_COMPLETE state changed
>2018-09-21 12:09:09Z [overcloud.ObjectStorageIpListMap]: CREATE_COMPLETE Stack CREATE completed successfully
>2018-09-21 12:09:10Z [overcloud.BlockStorageIpListMap.EnabledServicesValue]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:09:10Z [overcloud.BlockStorageIpListMap.EnabledServicesValue]: CREATE_COMPLETE state changed
>2018-09-21 12:09:10Z [overcloud.BlockStorageIpListMap]: CREATE_COMPLETE Stack CREATE completed successfully
>2018-09-21 12:09:10Z [overcloud.ObjectStorageIpListMap]: CREATE_COMPLETE state changed
>2018-09-21 12:09:11Z [overcloud.BlockStorageIpListMap]: CREATE_COMPLETE state changed
>2018-09-21 12:09:13Z [overcloud.Compute.0]: CREATE_IN_PROGRESS Stack CREATE started
>2018-09-21 12:09:14Z [overcloud.CephStorage.0]: CREATE_IN_PROGRESS Stack CREATE started
>2018-09-21 12:09:14Z [overcloud.Compute.0.NodeUserData]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:09:15Z [overcloud.Controller.0]: CREATE_IN_PROGRESS Stack CREATE started
>2018-09-21 12:09:15Z [overcloud.CephStorage.0.RoleUserData]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:09:15Z [overcloud.Compute.0.RoleUserData]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:09:16Z [overcloud.CephStorage.0.DeploymentActions]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:09:16Z [overcloud.CephStorage.0.DeploymentActions]: CREATE_COMPLETE state changed
>2018-09-21 12:09:17Z [overcloud.Compute.0.DeploymentActions]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:09:17Z [overcloud.Compute.0.DeploymentActions]: CREATE_COMPLETE state changed
>2018-09-21 12:09:17Z [overcloud.Controller.0.RoleUserData]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:09:17Z [overcloud.CephStorage.0.RoleUserData]: CREATE_COMPLETE state changed
>2018-09-21 12:09:17Z [overcloud.CephStorage.0.CephStorageUpgradeInitConfig]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:09:17Z [overcloud.Compute.0.NodeUserData]: CREATE_COMPLETE state changed
>2018-09-21 12:09:17Z [overcloud.CephStorage.0.CephStorageUpgradeInitConfig]: CREATE_COMPLETE state changed
>2018-09-21 12:09:17Z [overcloud.Controller.0.NodeAdminUserData]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:09:18Z [overcloud.Compute.0.NovaComputeUpgradeInitConfig]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:09:18Z [overcloud.Compute.0.NovaComputeUpgradeInitConfig]: CREATE_COMPLETE state changed
>2018-09-21 12:09:18Z [overcloud.CephStorage.0.NodeUserData]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:09:19Z [overcloud.Compute.0.NodeAdminUserData]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:09:19Z [overcloud.Controller.0.RoleUserData]: CREATE_COMPLETE state changed
>2018-09-21 12:09:19Z [overcloud.Controller.0.NodeUserData]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:09:19Z [overcloud.CephStorage.0.NodeAdminUserData]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:09:19Z [overcloud.Compute.0.RoleUserData]: CREATE_COMPLETE state changed
>2018-09-21 12:09:20Z [overcloud.Controller.0.ControllerUpgradeInitConfig]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:09:20Z [overcloud.Controller.0.ControllerUpgradeInitConfig]: CREATE_COMPLETE state changed
>2018-09-21 12:09:20Z [overcloud.Controller.0.NodeAdminUserData]: CREATE_COMPLETE state changed
>2018-09-21 12:09:20Z [overcloud.CephStorage.0.NodeUserData]: CREATE_COMPLETE state changed
>2018-09-21 12:09:20Z [overcloud.Controller.0.DeploymentActions]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:09:20Z [overcloud.Controller.0.DeploymentActions]: CREATE_COMPLETE state changed
>2018-09-21 12:09:21Z [overcloud.Controller.0.NodeUserData]: CREATE_COMPLETE state changed
>2018-09-21 12:09:21Z [overcloud.Controller.0.UserData]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:09:21Z [overcloud.Controller.0.UserData]: CREATE_COMPLETE state changed
>2018-09-21 12:09:21Z [overcloud.Controller.0.Controller]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:09:22Z [overcloud.CephStorage.0.NodeAdminUserData]: CREATE_COMPLETE state changed
>2018-09-21 12:09:22Z [overcloud.CephStorage.0.UserData]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:09:22Z [overcloud.Compute.0.NodeAdminUserData]: CREATE_COMPLETE state changed
>2018-09-21 12:09:22Z [overcloud.CephStorage.0.UserData]: CREATE_COMPLETE state changed
>2018-09-21 12:09:22Z [overcloud.Compute.0.UserData]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:09:22Z [overcloud.CephStorage.0.CephStorage]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:09:22Z [overcloud.Compute.0.UserData]: CREATE_COMPLETE state changed
>2018-09-21 12:09:22Z [overcloud.Compute.0.NovaCompute]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:12:42Z [overcloud.Controller.0.Controller]: CREATE_COMPLETE state changed
>2018-09-21 12:12:43Z [overcloud.Controller.0.NetHostMap]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:12:43Z [overcloud.Controller.0.NetHostMap]: CREATE_COMPLETE state changed
>2018-09-21 12:12:43Z [overcloud.Controller.0.StoragePort]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:12:43Z [overcloud.Controller.0.ExternalPort]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:12:43Z [overcloud.Controller.0.PreNetworkConfig]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:12:44Z [overcloud.Controller.0.PreNetworkConfig]: CREATE_COMPLETE state changed
>2018-09-21 12:12:44Z [overcloud.Controller.0.TenantPort]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:12:44Z [overcloud.Controller.0.StorageMgmtPort]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:12:44Z [overcloud.Controller.0.InternalApiPort]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:12:44Z [overcloud.Controller.0.ManagementPort]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:12:48Z [overcloud.CephStorage.0.CephStorage]: CREATE_COMPLETE state changed
>2018-09-21 12:12:49Z [overcloud.Controller.0.ManagementPort]: CREATE_COMPLETE state changed
>2018-09-21 12:12:51Z [overcloud.CephStorage.0.StoragePort]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:12:52Z [overcloud.CephStorage.0.PreNetworkConfig]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:12:52Z [overcloud.CephStorage.0.NetHostMap]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:12:52Z [overcloud.CephStorage.0.PreNetworkConfig]: CREATE_COMPLETE state changed
>2018-09-21 12:12:52Z [overcloud.CephStorage.0.NetHostMap]: CREATE_COMPLETE state changed
>2018-09-21 12:12:52Z [overcloud.CephStorage.0.ManagementPort]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:12:52Z [overcloud.CephStorage.0.TenantPort]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:12:52Z [overcloud.CephStorage.0.StorageMgmtPort]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:12:54Z [overcloud.CephStorage.0.ExternalPort]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:12:54Z [overcloud.CephStorage.0.InternalApiPort]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:12:56Z [overcloud.Compute.0.NovaCompute]: CREATE_COMPLETE state changed
>2018-09-21 12:12:56Z [overcloud.Controller.0.ExternalPort]: CREATE_COMPLETE state changed
>2018-09-21 12:12:57Z [overcloud.CephStorage.0.TenantPort]: CREATE_COMPLETE state changed
>2018-09-21 12:12:58Z [overcloud.Compute.0.ManagementPort]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:12:58Z [overcloud.Compute.0.PreNetworkConfig]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:12:59Z [overcloud.Compute.0.PreNetworkConfig]: CREATE_COMPLETE state changed
>2018-09-21 12:12:59Z [overcloud.Compute.0.StoragePort]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:12:59Z [overcloud.CephStorage.0.InternalApiPort]: CREATE_COMPLETE state changed
>2018-09-21 12:12:59Z [overcloud.Compute.0.ExternalPort]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:00Z [overcloud.Compute.0.TenantPort]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:00Z [overcloud.Compute.0.InternalApiPort]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:01Z [overcloud.Compute.0.NetHostMap]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:01Z [overcloud.Compute.0.StorageMgmtPort]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:01Z [overcloud.Compute.0.NetHostMap]: CREATE_COMPLETE state changed
>2018-09-21 12:13:01Z [overcloud.CephStorage.0.ManagementPort]: CREATE_COMPLETE state changed
>2018-09-21 12:13:04Z [overcloud.Controller.0.TenantPort]: CREATE_COMPLETE state changed
>2018-09-21 12:13:04Z [overcloud.CephStorage.0.ExternalPort]: CREATE_COMPLETE state changed
>2018-09-21 12:13:05Z [overcloud.Compute.0.ManagementPort]: CREATE_COMPLETE state changed
>2018-09-21 12:13:06Z [overcloud.Controller.0.StorageMgmtPort]: CREATE_COMPLETE state changed
>2018-09-21 12:13:06Z [overcloud.Controller.0.StoragePort]: CREATE_COMPLETE state changed
>2018-09-21 12:13:06Z [overcloud.Compute.0.ExternalPort]: CREATE_COMPLETE state changed
>2018-09-21 12:13:07Z [overcloud.Compute.0.StorageMgmtPort]: CREATE_COMPLETE state changed
>2018-09-21 12:13:09Z [overcloud.Controller.0.InternalApiPort]: CREATE_COMPLETE state changed
>2018-09-21 12:13:10Z [overcloud.CephStorage.0.StoragePort]: CREATE_COMPLETE state changed
>2018-09-21 12:13:14Z [overcloud.Controller.0.NetIpMap]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:14Z [overcloud.CephStorage.0.StorageMgmtPort]: CREATE_COMPLETE state changed
>2018-09-21 12:13:14Z [overcloud.Controller.0.NetworkConfig]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:15Z [overcloud.Compute.0.InternalApiPort]: CREATE_COMPLETE state changed
>2018-09-21 12:13:16Z [overcloud.Compute.0.TenantPort]: CREATE_COMPLETE state changed
>2018-09-21 12:13:16Z [overcloud.Controller.0.NetIpMap]: CREATE_COMPLETE state changed
>2018-09-21 12:13:16Z [overcloud.CephStorage.0.NetworkConfig]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:16Z [overcloud.CephStorage.0.NetIpMap]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:16Z [overcloud.Controller.0.SshKnownHostsHostnames]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:16Z [overcloud.Controller.0.ControllerConfig]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:17Z [overcloud.Compute.0.StoragePort]: CREATE_COMPLETE state changed
>2018-09-21 12:13:17Z [overcloud.Controller.0.SshKnownHostsHostnames]: CREATE_COMPLETE state changed
>2018-09-21 12:13:17Z [overcloud.Controller.0.NetworkConfig]: CREATE_COMPLETE state changed
>2018-09-21 12:13:17Z [overcloud.Controller.0.NetworkDeployment]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:18Z [overcloud.Controller.0.ControllerConfig]: CREATE_COMPLETE state changed
>2018-09-21 12:13:18Z [overcloud.CephStorage.0.NetIpMap]: CREATE_COMPLETE state changed
>2018-09-21 12:13:19Z [overcloud.CephStorage.0.CephStorageConfig]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:19Z [overcloud.CephStorage.0.CephStorageConfig]: CREATE_COMPLETE state changed
>2018-09-21 12:13:19Z [overcloud.CephStorage.0.NetworkConfig]: CREATE_COMPLETE state changed
>2018-09-21 12:13:19Z [overcloud.CephStorage.0.SshKnownHostsHostnames]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:19Z [overcloud.CephStorage.0.SshKnownHostsHostnames]: CREATE_COMPLETE state changed
>2018-09-21 12:13:19Z [overcloud.CephStorage.0.NetworkDeployment]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:20Z [overcloud.Compute.0.NetworkConfig]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:20Z [overcloud.Compute.0.NetIpMap]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:22Z [overcloud.Compute.0.NetworkConfig]: CREATE_COMPLETE state changed
>2018-09-21 12:13:22Z [overcloud.Compute.0.NetworkDeployment]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:23Z [overcloud.Compute.0.NetIpMap]: CREATE_COMPLETE state changed
>2018-09-21 12:13:23Z [overcloud.Compute.0.NovaComputeConfig]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:23Z [overcloud.Compute.0.SshKnownHostsHostnames]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:23Z [overcloud.Compute.0.NovaComputeConfig]: CREATE_COMPLETE state changed
>2018-09-21 12:13:23Z [overcloud.Compute.0.SshKnownHostsHostnames]: CREATE_COMPLETE state changed
>2018-09-21 12:13:28Z [overcloud.Controller.0.NetworkDeployment]: CREATE_COMPLETE state changed
>2018-09-21 12:13:28Z [overcloud.Controller.0.ControllerUpgradeInitDeployment]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:29Z [overcloud.Controller.0.NodeTLSCAData]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:29Z [overcloud.CephStorage.0.NetworkDeployment]: CREATE_COMPLETE state changed
>2018-09-21 12:13:30Z [overcloud.CephStorage.0.CephStorageUpgradeInitDeployment]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:30Z [overcloud.CephStorage.0.NodeTLSCAData]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:33Z [overcloud.Compute.0.NetworkDeployment]: CREATE_COMPLETE state changed
>2018-09-21 12:13:34Z [overcloud.Compute.0.NovaComputeUpgradeInitDeployment]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:34Z [overcloud.Compute.0.NodeTLSCAData]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:37Z [overcloud.CephStorage.0.CephStorageUpgradeInitDeployment]: CREATE_COMPLETE state changed
>2018-09-21 12:13:38Z [overcloud.CephStorage.0.CephStorageDeployment]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:39Z [overcloud.Controller.0.ControllerUpgradeInitDeployment]: CREATE_COMPLETE state changed
>2018-09-21 12:13:39Z [overcloud.CephStorage.0.NodeTLSCAData]: CREATE_COMPLETE state changed
>2018-09-21 12:13:40Z [overcloud.Controller.0.ControllerDeployment]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:43Z [overcloud.Compute.0.NovaComputeUpgradeInitDeployment]: CREATE_COMPLETE state changed
>2018-09-21 12:13:43Z [overcloud.Compute.0.NovaComputeDeployment]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:44Z [overcloud.Controller.0.NodeTLSCAData]: CREATE_COMPLETE state changed
>2018-09-21 12:13:45Z [overcloud.CephStorage.0.CephStorageDeployment]: CREATE_COMPLETE state changed
>2018-09-21 12:13:45Z [overcloud.CephStorage.0.CephStorageExtraConfigPre]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:45Z [overcloud.CephStorage.0.SshHostPubKey]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:45Z [overcloud.Compute.0.NodeTLSCAData]: CREATE_COMPLETE state changed
>2018-09-21 12:13:45Z [overcloud.CephStorage.0.SshHostPubKey]: CREATE_COMPLETE state changed
>2018-09-21 12:13:47Z [overcloud.CephStorage.0.CephStorageExtraConfigPre]: CREATE_COMPLETE state changed
>2018-09-21 12:13:47Z [overcloud.Controller.0.ControllerDeployment]: CREATE_COMPLETE state changed
>2018-09-21 12:13:47Z [overcloud.CephStorage.0.NodeExtraConfig]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:47Z [overcloud.Controller.0.ControllerExtraConfigPre]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:47Z [overcloud.Controller.0.SshHostPubKey]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:47Z [overcloud.Controller.0.SshHostPubKey]: CREATE_COMPLETE state changed
>2018-09-21 12:13:49Z [overcloud.CephStorage.0.NodeExtraConfig]: CREATE_COMPLETE state changed
>2018-09-21 12:13:49Z [overcloud.CephStorage.0]: CREATE_COMPLETE Stack CREATE completed successfully
>2018-09-21 12:13:49Z [overcloud.Controller.0.ControllerExtraConfigPre]: CREATE_COMPLETE state changed
>2018-09-21 12:13:49Z [overcloud.Controller.0.NodeExtraConfig]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:49Z [overcloud.Compute.0.NovaComputeDeployment]: CREATE_COMPLETE state changed
>2018-09-21 12:13:49Z [overcloud.Compute.0.ComputeExtraConfigPre]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:49Z [overcloud.Compute.0.SshHostPubKey]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:50Z [overcloud.Compute.0.SshHostPubKey]: CREATE_COMPLETE state changed
>2018-09-21 12:13:50Z [overcloud.CephStorage.0]: CREATE_COMPLETE state changed
>2018-09-21 12:13:50Z [overcloud.CephStorage]: UPDATE_COMPLETE Stack UPDATE completed successfully
>2018-09-21 12:13:51Z [overcloud.CephStorage]: CREATE_COMPLETE state changed
>2018-09-21 12:13:51Z [overcloud.Controller.0.NodeExtraConfig]: CREATE_COMPLETE state changed
>2018-09-21 12:13:51Z [overcloud.Controller.0]: CREATE_COMPLETE Stack CREATE completed successfully
>2018-09-21 12:13:51Z [overcloud.Compute.0.ComputeExtraConfigPre]: CREATE_COMPLETE state changed
>2018-09-21 12:13:51Z [overcloud.Compute.0.NodeExtraConfig]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:51Z [overcloud.CephStorageNetworkHostnameMap]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:51Z [overcloud.CephStorageNetworkHostnameMap]: CREATE_COMPLETE state changed
>2018-09-21 12:13:52Z [overcloud.CephStorageIpListMap]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:52Z [overcloud.Controller.0]: CREATE_COMPLETE state changed
>2018-09-21 12:13:52Z [overcloud.CephStorageServers]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:52Z [overcloud.Controller]: UPDATE_COMPLETE Stack UPDATE completed successfully
>2018-09-21 12:13:52Z [overcloud.CephStorageServers]: CREATE_COMPLETE state changed
>2018-09-21 12:13:52Z [overcloud.CephStorageIpListMap]: CREATE_IN_PROGRESS Stack CREATE started
>2018-09-21 12:13:52Z [overcloud.CephStorageIpListMap.EnabledServicesValue]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:52Z [overcloud.CephStorageIpListMap.EnabledServicesValue]: CREATE_COMPLETE state changed
>2018-09-21 12:13:53Z [overcloud.Controller]: CREATE_COMPLETE state changed
>2018-09-21 12:13:53Z [overcloud.Compute.0.NodeExtraConfig]: CREATE_COMPLETE state changed
>2018-09-21 12:13:53Z [overcloud.Compute.0]: CREATE_COMPLETE Stack CREATE completed successfully
>2018-09-21 12:13:53Z [overcloud.ControllerNetworkHostnameMap]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:53Z [overcloud.ControllerNetworkHostnameMap]: CREATE_COMPLETE state changed
>2018-09-21 12:13:53Z [overcloud.CephStorageIpListMap.NetIpMapValue]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:54Z [overcloud.CephStorageIpListMap.NetIpMapValue]: CREATE_COMPLETE state changed
>2018-09-21 12:13:54Z [overcloud.AllNodesValidationConfig]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:54Z [overcloud.CephStorageIpListMap]: CREATE_COMPLETE Stack CREATE completed successfully
>2018-09-21 12:13:54Z [overcloud.ControllerIpListMap]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:54Z [overcloud.ControllerServers]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:54Z [overcloud.ControllerServers]: CREATE_COMPLETE state changed
>2018-09-21 12:13:54Z [overcloud.Compute.0]: CREATE_COMPLETE state changed
>2018-09-21 12:13:54Z [overcloud.Compute]: UPDATE_COMPLETE Stack UPDATE completed successfully
>2018-09-21 12:13:54Z [overcloud.CephStorageIpListMap]: CREATE_COMPLETE state changed
>2018-09-21 12:13:54Z [overcloud.AllNodesValidationConfig]: CREATE_IN_PROGRESS Stack CREATE started
>2018-09-21 12:13:55Z [overcloud.ControllerIpListMap]: CREATE_IN_PROGRESS Stack CREATE started
>2018-09-21 12:13:55Z [overcloud.AllNodesValidationConfig.AllNodesValidationsImpl]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:55Z [overcloud.AllNodesValidationConfig.AllNodesValidationsImpl]: CREATE_COMPLETE state changed
>2018-09-21 12:13:55Z [overcloud.AllNodesValidationConfig]: CREATE_COMPLETE Stack CREATE completed successfully
>2018-09-21 12:13:55Z [overcloud.ControllerIpListMap.EnabledServicesValue]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:55Z [overcloud.Compute]: CREATE_COMPLETE state changed
>2018-09-21 12:13:55Z [overcloud.ControllerIpListMap.EnabledServicesValue]: CREATE_COMPLETE state changed
>2018-09-21 12:13:56Z [overcloud.SshKnownHostsHostnames]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:56Z [overcloud.hostsConfig]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:56Z [overcloud.SshKnownHostsHostnames]: CREATE_COMPLETE state changed
>2018-09-21 12:13:56Z [overcloud.ComputeServers]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:56Z [overcloud.ComputeServers]: CREATE_COMPLETE state changed
>2018-09-21 12:13:56Z [overcloud.SshKnownHostsConfig]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:56Z [overcloud.ControllerIpListMap.NetIpMapValue]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:56Z [overcloud.ControllerIpListMap.NetIpMapValue]: CREATE_COMPLETE state changed
>2018-09-21 12:13:56Z [overcloud.ServerOsCollectConfigData]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:56Z [overcloud.ServerOsCollectConfigData]: CREATE_COMPLETE state changed
>2018-09-21 12:13:56Z [overcloud.ControllerIpListMap]: CREATE_COMPLETE Stack CREATE completed successfully
>2018-09-21 12:13:56Z [overcloud.BlacklistedIpAddresses]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:56Z [overcloud.ServerIdMap]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:56Z [overcloud.DeployedServerEnvironment]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:56Z [overcloud.ServerIdMap]: CREATE_COMPLETE state changed
>2018-09-21 12:13:56Z [overcloud.DeployedServerEnvironment]: CREATE_COMPLETE state changed
>2018-09-21 12:13:57Z [overcloud.AllNodesValidationConfig]: CREATE_COMPLETE state changed
>2018-09-21 12:13:57Z [overcloud.BlacklistedHostnames]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:57Z [overcloud.ComputeNetworkHostnameMap]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:57Z [overcloud.ControllerIpListMap]: CREATE_COMPLETE state changed
>2018-09-21 12:13:57Z [overcloud.ComputeNetworkHostnameMap]: CREATE_COMPLETE state changed
>2018-09-21 12:13:57Z [overcloud.BlacklistedIpAddresses]: CREATE_COMPLETE state changed
>2018-09-21 12:13:57Z [overcloud.SshKnownHostsConfig]: CREATE_IN_PROGRESS Stack CREATE started
>2018-09-21 12:13:57Z [hostsConfigImpl]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:57Z [overcloud.ComputeIpListMap]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:57Z [hostsConfigImpl]: CREATE_COMPLETE state changed
>2018-09-21 12:13:57Z [overcloud.SshKnownHostsConfig.SSHKnownHostsConfig]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:57Z [overcloud.hostsConfig]: CREATE_COMPLETE Stack CREATE completed successfully
>2018-09-21 12:13:57Z [overcloud.SshKnownHostsConfig.SSHKnownHostsConfig]: CREATE_COMPLETE state changed
>2018-09-21 12:13:57Z [overcloud.SshKnownHostsConfig]: CREATE_COMPLETE Stack CREATE completed successfully
>2018-09-21 12:13:58Z [overcloud.ComputeIpListMap]: CREATE_IN_PROGRESS Stack CREATE started
>2018-09-21 12:13:58Z [overcloud.hostsConfig]: CREATE_COMPLETE state changed
>2018-09-21 12:13:58Z [overcloud.SshKnownHostsConfig]: CREATE_COMPLETE state changed
>2018-09-21 12:13:58Z [overcloud.ComputeIpListMap.NetIpMapValue]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:58Z [overcloud.ComputeIpListMap.NetIpMapValue]: CREATE_COMPLETE state changed
>2018-09-21 12:13:58Z [overcloud.ComputeHostsDeployment]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:58Z [overcloud.CephStorageHostsDeployment]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:58Z [overcloud.ObjectStorageHostsDeployment]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:58Z [overcloud.BlockStorageHostsDeployment]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:59Z [overcloud.ControllerHostsDeployment]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:59Z [overcloud.ObjectStorageSshKnownHostsDeployment]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:59Z [overcloud.BlockStorageSshKnownHostsDeployment]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:59Z [overcloud.CephStorageSshKnownHostsDeployment]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:59Z [overcloud.ControllerSshKnownHostsDeployment]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:13:59Z [overcloud.ComputeHostsDeployment]: CREATE_IN_PROGRESS Stack CREATE started
>2018-09-21 12:13:59Z [overcloud.ComputeSshKnownHostsDeployment]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:00Z [overcloud.ControllerSshKnownHostsDeployment]: CREATE_COMPLETE state changed
>2018-09-21 12:14:00Z [overcloud.ComputeSshKnownHostsDeployment]: CREATE_COMPLETE state changed
>2018-09-21 12:14:00Z [overcloud.ObjectStorageSshKnownHostsDeployment]: CREATE_COMPLETE state changed
>2018-09-21 12:14:00Z [overcloud.BlockStorageSshKnownHostsDeployment]: CREATE_COMPLETE state changed
>2018-09-21 12:14:00Z [overcloud.ComputeIpListMap.EnabledServicesValue]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:00Z [overcloud.ComputeIpListMap.EnabledServicesValue]: CREATE_COMPLETE state changed
>2018-09-21 12:14:00Z [overcloud.ComputeIpListMap]: CREATE_COMPLETE Stack CREATE completed successfully
>2018-09-21 12:14:01Z [overcloud.CephStorageSshKnownHostsDeployment]: CREATE_COMPLETE state changed
>2018-09-21 12:14:01Z [overcloud.CephStorageHostsDeployment]: CREATE_IN_PROGRESS Stack CREATE started
>2018-09-21 12:14:01Z [overcloud.ComputeIpListMap]: CREATE_COMPLETE state changed
>2018-09-21 12:14:01Z [overcloud.CephStorageHostsDeployment.0]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:01Z [overcloud.BlockStorageHostsDeployment]: CREATE_COMPLETE state changed
>2018-09-21 12:14:01Z [overcloud.ControllerHostsDeployment]: CREATE_IN_PROGRESS Stack CREATE started
>2018-09-21 12:14:01Z [overcloud.ControllerHostsDeployment.0]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:01Z [overcloud.ObjectStorageHostsDeployment]: CREATE_COMPLETE state changed
>2018-09-21 12:14:01Z [overcloud.ComputeHostsDeployment.0]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:02Z [overcloud.allNodesConfig]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:03Z [overcloud.CephStorageHostsDeployment.0]: CREATE_IN_PROGRESS Stack CREATE started
>2018-09-21 12:14:03Z [overcloud.CephStorageHostsDeployment.0.TripleOServer]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:03Z [overcloud.ControllerHostsDeployment.0]: CREATE_IN_PROGRESS Stack CREATE started
>2018-09-21 12:14:03Z [overcloud.ControllerHostsDeployment.0.TripleOServer]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:04Z [overcloud.ComputeHostsDeployment.0]: CREATE_IN_PROGRESS Stack CREATE started
>2018-09-21 12:14:04Z [overcloud.ComputeHostsDeployment.0.TripleOServer]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:05Z [overcloud.allNodesConfig]: CREATE_IN_PROGRESS Stack CREATE started
>2018-09-21 12:14:05Z [overcloud.allNodesConfig.allNodesConfigValue]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:05Z [overcloud.allNodesConfig.allNodesConfigValue]: CREATE_COMPLETE state changed
>2018-09-21 12:14:06Z [overcloud.allNodesConfig.allNodesConfigImpl]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:06Z [overcloud.allNodesConfig.allNodesConfigImpl]: CREATE_COMPLETE state changed
>2018-09-21 12:14:06Z [overcloud.allNodesConfig]: CREATE_COMPLETE Stack CREATE completed successfully
>2018-09-21 12:14:06Z [overcloud.ControllerHostsDeployment.0.TripleOServer]: CREATE_COMPLETE state changed
>2018-09-21 12:14:06Z [overcloud.ControllerHostsDeployment.0.TripleOSoftwareDeployment]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:07Z [overcloud.ControllerHostsDeployment.0.TripleOSoftwareDeployment]: CREATE_COMPLETE state changed
>2018-09-21 12:14:07Z [overcloud.ControllerHostsDeployment.0.TripleODeployment]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:07Z [overcloud.ControllerHostsDeployment.0.TripleODeployment]: CREATE_COMPLETE state changed
>2018-09-21 12:14:07Z [overcloud.ControllerHostsDeployment.0]: CREATE_COMPLETE Stack CREATE completed successfully
>2018-09-21 12:14:07Z [overcloud.ControllerHostsDeployment.0]: CREATE_COMPLETE state changed
>2018-09-21 12:14:07Z [overcloud.ComputeHostsDeployment.0.TripleOServer]: CREATE_COMPLETE state changed
>2018-09-21 12:14:07Z [overcloud.allNodesConfig]: CREATE_COMPLETE state changed
>2018-09-21 12:14:07Z [overcloud.ControllerHostsDeployment]: CREATE_COMPLETE Stack CREATE completed successfully
>2018-09-21 12:14:07Z [overcloud.CephStorageHostsDeployment.0.TripleOServer]: CREATE_COMPLETE state changed
>2018-09-21 12:14:07Z [overcloud.ComputeHostsDeployment.0.TripleOSoftwareDeployment]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:07Z [overcloud.CephStorageHostsDeployment.0.TripleOSoftwareDeployment]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:07Z [overcloud.ComputeHostsDeployment.0.TripleOSoftwareDeployment]: CREATE_COMPLETE state changed
>2018-09-21 12:14:07Z [overcloud.CephStorageHostsDeployment.0.TripleOSoftwareDeployment]: CREATE_COMPLETE state changed
>2018-09-21 12:14:07Z [overcloud.ComputeHostsDeployment.0.TripleODeployment]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:07Z [overcloud.ComputeHostsDeployment.0.TripleODeployment]: CREATE_COMPLETE state changed
>2018-09-21 12:14:07Z [overcloud.CephStorageHostsDeployment.0.TripleODeployment]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:07Z [overcloud.ComputeHostsDeployment.0]: CREATE_COMPLETE Stack CREATE completed successfully
>2018-09-21 12:14:07Z [overcloud.CephStorageHostsDeployment.0.TripleODeployment]: CREATE_COMPLETE state changed
>2018-09-21 12:14:07Z [overcloud.CephStorageHostsDeployment.0]: CREATE_COMPLETE Stack CREATE completed successfully
>2018-09-21 12:14:07Z [overcloud.ControllerHostsDeployment]: CREATE_COMPLETE state changed
>2018-09-21 12:14:08Z [overcloud.CephStorageHostsDeployment.0]: CREATE_COMPLETE state changed
>2018-09-21 12:14:08Z [overcloud.CephStorageHostsDeployment]: CREATE_COMPLETE Stack CREATE completed successfully
>2018-09-21 12:14:08Z [overcloud.ComputeHostsDeployment.0]: CREATE_COMPLETE state changed
>2018-09-21 12:14:08Z [overcloud.ComputeHostsDeployment]: CREATE_COMPLETE Stack CREATE completed successfully
>2018-09-21 12:14:08Z [overcloud.CephStorageHostsDeployment]: CREATE_COMPLETE state changed
>2018-09-21 12:14:09Z [overcloud.ComputeHostsDeployment]: CREATE_COMPLETE state changed
>2018-09-21 12:14:09Z [overcloud.ComputeAllNodesDeployment]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:09Z [overcloud.ControllerAllNodesDeployment]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:09Z [overcloud.BlockStorageAllNodesDeployment]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:09Z [overcloud.CephStorageAllNodesDeployment]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:10Z [overcloud.ObjectStorageAllNodesDeployment]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:10Z [overcloud.ComputeAllNodesDeployment]: CREATE_IN_PROGRESS Stack CREATE started
>2018-09-21 12:14:10Z [overcloud.ControllerAllNodesDeployment]: CREATE_IN_PROGRESS Stack CREATE started
>2018-09-21 12:14:11Z [overcloud.CephStorageAllNodesDeployment]: CREATE_IN_PROGRESS Stack CREATE started
>2018-09-21 12:14:11Z [overcloud.ControllerAllNodesDeployment.0]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:12Z [overcloud.CephStorageAllNodesDeployment.0]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:12Z [overcloud.BlockStorageAllNodesDeployment]: CREATE_COMPLETE state changed
>2018-09-21 12:14:12Z [overcloud.ComputeAllNodesDeployment.0]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:12Z [overcloud.BlockStorageAllNodesValidationDeployment]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:14Z [overcloud.ControllerAllNodesDeployment.0]: CREATE_IN_PROGRESS Stack CREATE started
>2018-09-21 12:14:14Z [overcloud.CephStorageAllNodesDeployment.0]: CREATE_IN_PROGRESS Stack CREATE started
>2018-09-21 12:14:14Z [overcloud.BlockStorageAllNodesValidationDeployment]: CREATE_COMPLETE state changed
>2018-09-21 12:14:14Z [overcloud.ComputeAllNodesDeployment.0]: CREATE_IN_PROGRESS Stack CREATE started
>2018-09-21 12:14:14Z [overcloud.ObjectStorageAllNodesDeployment]: CREATE_COMPLETE state changed
>2018-09-21 12:14:14Z [overcloud.ControllerAllNodesDeployment.0.TripleOServer]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:15Z [overcloud.ObjectStorageAllNodesValidationDeployment]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:15Z [overcloud.ComputeAllNodesDeployment.0.TripleOServer]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:16Z [overcloud.CephStorageAllNodesDeployment.0.TripleOServer]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:17Z [overcloud.ObjectStorageAllNodesValidationDeployment]: CREATE_COMPLETE state changed
>2018-09-21 12:14:17Z [overcloud.ControllerAllNodesDeployment.0.TripleOServer]: CREATE_COMPLETE state changed
>2018-09-21 12:14:17Z [overcloud.ControllerAllNodesDeployment.0.TripleOSoftwareDeployment]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:17Z [overcloud.ComputeAllNodesDeployment.0.TripleOServer]: CREATE_COMPLETE state changed
>2018-09-21 12:14:17Z [overcloud.ControllerAllNodesDeployment.0.TripleOSoftwareDeployment]: CREATE_COMPLETE state changed
>2018-09-21 12:14:17Z [overcloud.ComputeAllNodesDeployment.0.TripleOSoftwareDeployment]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:17Z [overcloud.ControllerAllNodesDeployment.0.TripleODeployment]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:17Z [overcloud.ControllerAllNodesDeployment.0.TripleODeployment]: CREATE_COMPLETE state changed
>2018-09-21 12:14:17Z [overcloud.ControllerAllNodesDeployment.0]: CREATE_COMPLETE Stack CREATE completed successfully
>2018-09-21 12:14:17Z [overcloud.ControllerAllNodesDeployment.0]: CREATE_COMPLETE state changed
>2018-09-21 12:14:17Z [overcloud.ControllerAllNodesDeployment]: CREATE_COMPLETE Stack CREATE completed successfully
>2018-09-21 12:14:17Z [overcloud.ComputeAllNodesDeployment.0.TripleOSoftwareDeployment]: CREATE_COMPLETE state changed
>2018-09-21 12:14:17Z [overcloud.ComputeAllNodesDeployment.0.TripleODeployment]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:18Z [overcloud.ComputeAllNodesDeployment.0.TripleODeployment]: CREATE_COMPLETE state changed
>2018-09-21 12:14:18Z [overcloud.ComputeAllNodesDeployment.0]: CREATE_COMPLETE Stack CREATE completed successfully
>2018-09-21 12:14:18Z [overcloud.CephStorageAllNodesDeployment.0.TripleOServer]: CREATE_COMPLETE state changed
>2018-09-21 12:14:18Z [overcloud.CephStorageAllNodesDeployment.0.TripleOSoftwareDeployment]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:18Z [overcloud.CephStorageAllNodesDeployment.0.TripleOSoftwareDeployment]: CREATE_COMPLETE state changed
>2018-09-21 12:14:18Z [overcloud.CephStorageAllNodesDeployment.0.TripleODeployment]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:18Z [overcloud.CephStorageAllNodesDeployment.0.TripleODeployment]: CREATE_COMPLETE state changed
>2018-09-21 12:14:18Z [overcloud.CephStorageAllNodesDeployment.0]: CREATE_COMPLETE Stack CREATE completed successfully
>2018-09-21 12:14:18Z [overcloud.ControllerAllNodesDeployment]: CREATE_COMPLETE state changed
>2018-09-21 12:14:18Z [overcloud.CephStorageAllNodesDeployment.0]: CREATE_COMPLETE state changed
>2018-09-21 12:14:18Z [overcloud.CephStorageAllNodesDeployment]: CREATE_COMPLETE Stack CREATE completed successfully
>2018-09-21 12:14:18Z [overcloud.ComputeAllNodesDeployment.0]: CREATE_COMPLETE state changed
>2018-09-21 12:14:18Z [overcloud.ComputeAllNodesDeployment]: CREATE_COMPLETE Stack CREATE completed successfully
>2018-09-21 12:14:19Z [overcloud.CephStorageAllNodesDeployment]: CREATE_COMPLETE state changed
>2018-09-21 12:14:19Z [overcloud.ControllerAllNodesValidationDeployment]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:19Z [overcloud.CephStorageAllNodesValidationDeployment]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:19Z [overcloud.ComputeAllNodesDeployment]: CREATE_COMPLETE state changed
>2018-09-21 12:14:19Z [overcloud.ComputeAllNodesValidationDeployment]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:20Z [overcloud.ControllerAllNodesValidationDeployment]: CREATE_IN_PROGRESS Stack CREATE started
>2018-09-21 12:14:20Z [overcloud.ControllerAllNodesValidationDeployment.0]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:20Z [overcloud.CephStorageAllNodesValidationDeployment]: CREATE_IN_PROGRESS Stack CREATE started
>2018-09-21 12:14:20Z [overcloud.CephStorageAllNodesValidationDeployment.0]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:20Z [overcloud.ComputeAllNodesValidationDeployment]: CREATE_IN_PROGRESS Stack CREATE started
>2018-09-21 12:14:20Z [overcloud.ComputeAllNodesValidationDeployment.0]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:22Z [overcloud.ControllerAllNodesValidationDeployment.0]: CREATE_IN_PROGRESS Stack CREATE started
>2018-09-21 12:14:22Z [overcloud.ControllerAllNodesValidationDeployment.0.TripleOServer]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:22Z [overcloud.CephStorageAllNodesValidationDeployment.0]: CREATE_IN_PROGRESS Stack CREATE started
>2018-09-21 12:14:22Z [overcloud.CephStorageAllNodesValidationDeployment.0.TripleOServer]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:22Z [overcloud.ComputeAllNodesValidationDeployment.0]: CREATE_IN_PROGRESS Stack CREATE started
>2018-09-21 12:14:23Z [overcloud.ComputeAllNodesValidationDeployment.0.TripleOServer]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:25Z [overcloud.ControllerAllNodesValidationDeployment.0.TripleOServer]: CREATE_COMPLETE state changed
>2018-09-21 12:14:25Z [overcloud.ControllerAllNodesValidationDeployment.0.TripleOSoftwareDeployment]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:25Z [overcloud.ControllerAllNodesValidationDeployment.0.TripleOSoftwareDeployment]: CREATE_COMPLETE state changed
>2018-09-21 12:14:25Z [overcloud.ControllerAllNodesValidationDeployment.0.TripleODeployment]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:25Z [overcloud.ControllerAllNodesValidationDeployment.0.TripleODeployment]: CREATE_COMPLETE state changed
>2018-09-21 12:14:25Z [overcloud.ControllerAllNodesValidationDeployment.0]: CREATE_COMPLETE Stack CREATE completed successfully
>2018-09-21 12:14:25Z [overcloud.CephStorageAllNodesValidationDeployment.0.TripleOServer]: CREATE_COMPLETE state changed
>2018-09-21 12:14:25Z [overcloud.ComputeAllNodesValidationDeployment.0.TripleOServer]: CREATE_COMPLETE state changed
>2018-09-21 12:14:25Z [overcloud.ComputeAllNodesValidationDeployment.0.TripleOSoftwareDeployment]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:25Z [overcloud.CephStorageAllNodesValidationDeployment.0.TripleOSoftwareDeployment]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:25Z [overcloud.ComputeAllNodesValidationDeployment.0.TripleOSoftwareDeployment]: CREATE_COMPLETE state changed
>2018-09-21 12:14:25Z [overcloud.CephStorageAllNodesValidationDeployment.0.TripleOSoftwareDeployment]: CREATE_COMPLETE state changed
>2018-09-21 12:14:25Z [overcloud.ComputeAllNodesValidationDeployment.0.TripleODeployment]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:25Z [overcloud.ComputeAllNodesValidationDeployment.0.TripleODeployment]: CREATE_COMPLETE state changed
>2018-09-21 12:14:25Z [overcloud.CephStorageAllNodesValidationDeployment.0.TripleODeployment]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:25Z [overcloud.ComputeAllNodesValidationDeployment.0]: CREATE_COMPLETE Stack CREATE completed successfully
>2018-09-21 12:14:26Z [overcloud.CephStorageAllNodesValidationDeployment.0.TripleODeployment]: CREATE_COMPLETE state changed
>2018-09-21 12:14:26Z [overcloud.CephStorageAllNodesValidationDeployment.0]: CREATE_COMPLETE Stack CREATE completed successfully
>2018-09-21 12:14:26Z [overcloud.ControllerAllNodesValidationDeployment.0]: CREATE_COMPLETE state changed
>2018-09-21 12:14:26Z [overcloud.ControllerAllNodesValidationDeployment]: CREATE_COMPLETE Stack CREATE completed successfully
>2018-09-21 12:14:26Z [overcloud.CephStorageAllNodesValidationDeployment.0]: CREATE_COMPLETE state changed
>2018-09-21 12:14:26Z [overcloud.CephStorageAllNodesValidationDeployment]: CREATE_COMPLETE Stack CREATE completed successfully
>2018-09-21 12:14:26Z [overcloud.ControllerAllNodesValidationDeployment]: CREATE_COMPLETE state changed
>2018-09-21 12:14:27Z [overcloud.ComputeAllNodesValidationDeployment.0]: CREATE_COMPLETE state changed
>2018-09-21 12:14:27Z [overcloud.ComputeAllNodesValidationDeployment]: CREATE_COMPLETE Stack CREATE completed successfully
>2018-09-21 12:14:27Z [overcloud.CephStorageAllNodesValidationDeployment]: CREATE_COMPLETE state changed
>2018-09-21 12:14:27Z [overcloud.ComputeAllNodesValidationDeployment]: CREATE_COMPLETE state changed
>2018-09-21 12:14:28Z [overcloud.AllNodesExtraConfig]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:28Z [overcloud.AllNodesExtraConfig]: CREATE_COMPLETE state changed
>2018-09-21 12:14:28Z [overcloud.AllNodesDeploySteps]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:30Z [overcloud.AllNodesDeploySteps]: CREATE_IN_PROGRESS Stack CREATE started
>2018-09-21 12:14:32Z [overcloud.AllNodesDeploySteps.ExternalPostDeployTasks]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:32Z [overcloud.AllNodesDeploySteps.ExternalPostDeployTasks]: CREATE_COMPLETE state changed
>2018-09-21 12:14:33Z [overcloud.AllNodesDeploySteps.ObjectStorageArtifactsConfig]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:34Z [overcloud.AllNodesDeploySteps.ObjectStorageArtifactsConfig]: CREATE_IN_PROGRESS Stack CREATE started
>2018-09-21 12:14:34Z [overcloud.AllNodesDeploySteps.ObjectStorageArtifactsConfig.DeployArtifacts]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:34Z [overcloud.AllNodesDeploySteps.ExternalDeployTasks]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:34Z [overcloud.AllNodesDeploySteps.ObjectStorageArtifactsConfig.DeployArtifacts]: CREATE_COMPLETE state changed
>2018-09-21 12:14:34Z [overcloud.AllNodesDeploySteps.ExternalDeployTasks]: CREATE_COMPLETE state changed
>2018-09-21 12:14:34Z [overcloud.AllNodesDeploySteps.ObjectStorageArtifactsConfig]: CREATE_COMPLETE Stack CREATE completed successfully
>2018-09-21 12:14:35Z [overcloud.AllNodesDeploySteps.ObjectStorageArtifactsConfig]: CREATE_COMPLETE state changed
>2018-09-21 12:14:35Z [overcloud.AllNodesDeploySteps.ControllerArtifactsConfig]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:35Z [overcloud.AllNodesDeploySteps.ObjectStorageArtifactsDeploy]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:36Z [overcloud.AllNodesDeploySteps.ControllerArtifactsConfig]: CREATE_IN_PROGRESS Stack CREATE started
>2018-09-21 12:14:36Z [overcloud.AllNodesDeploySteps.BootstrapServerId]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:36Z [overcloud.AllNodesDeploySteps.ControllerArtifactsConfig.DeployArtifacts]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:36Z [overcloud.AllNodesDeploySteps.BootstrapServerId]: CREATE_COMPLETE state changed
>2018-09-21 12:14:36Z [overcloud.AllNodesDeploySteps.ControllerArtifactsConfig.DeployArtifacts]: CREATE_COMPLETE state changed
>2018-09-21 12:14:36Z [overcloud.AllNodesDeploySteps.ControllerArtifactsConfig]: CREATE_COMPLETE Stack CREATE completed successfully
>2018-09-21 12:14:37Z [overcloud.AllNodesDeploySteps.ComputeHostPrepConfig]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:37Z [overcloud.AllNodesDeploySteps.ComputeHostPrepConfig]: CREATE_COMPLETE state changed
>2018-09-21 12:14:37Z [overcloud.AllNodesDeploySteps.ObjectStorageHostPrepConfig]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:37Z [overcloud.AllNodesDeploySteps.ObjectStorageHostPrepConfig]: CREATE_COMPLETE state changed
>2018-09-21 12:14:37Z [overcloud.AllNodesDeploySteps.CephStorageHostPrepConfig]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:37Z [overcloud.AllNodesDeploySteps.ComputeHostPrepDeployment]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:37Z [overcloud.AllNodesDeploySteps.ObjectStorageHostPrepDeployment]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:37Z [overcloud.AllNodesDeploySteps.CephStorageHostPrepConfig]: CREATE_COMPLETE state changed
>2018-09-21 12:14:37Z [overcloud.AllNodesDeploySteps.ComputeArtifactsConfig]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:37Z [overcloud.AllNodesDeploySteps.BlockStorageHostPrepConfig]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:37Z [overcloud.AllNodesDeploySteps.BlockStorageHostPrepConfig]: CREATE_COMPLETE state changed
>2018-09-21 12:14:38Z [overcloud.AllNodesDeploySteps.ObjectStorageArtifactsDeploy]: CREATE_COMPLETE state changed
>2018-09-21 12:14:38Z [overcloud.AllNodesDeploySteps.BlockStorageHostPrepDeployment]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:38Z [overcloud.AllNodesDeploySteps.ComputeHostPrepDeployment]: CREATE_IN_PROGRESS Stack CREATE started
>2018-09-21 12:14:38Z [overcloud.AllNodesDeploySteps.CephStorageHostPrepDeployment]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:38Z [overcloud.AllNodesDeploySteps.ControllerHostPrepConfig]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:38Z [overcloud.AllNodesDeploySteps.ControllerArtifactsConfig]: CREATE_COMPLETE state changed
>2018-09-21 12:14:38Z [overcloud.AllNodesDeploySteps.CephStorageArtifactsConfig]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:38Z [overcloud.AllNodesDeploySteps.ControllerHostPrepConfig]: CREATE_COMPLETE state changed
>2018-09-21 12:14:39Z [overcloud.AllNodesDeploySteps.ComputeArtifactsConfig]: CREATE_IN_PROGRESS Stack CREATE started
>2018-09-21 12:14:39Z [overcloud.AllNodesDeploySteps.ControllerArtifactsDeploy]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:39Z [overcloud.AllNodesDeploySteps.ComputeHostPrepDeployment.0]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:39Z [overcloud.AllNodesDeploySteps.ExternalUpdateTasks]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:40Z [overcloud.AllNodesDeploySteps.CephStorageHostPrepDeployment]: CREATE_IN_PROGRESS Stack CREATE started
>2018-09-21 12:14:40Z [overcloud.AllNodesDeploySteps.ExternalUpdateTasks]: CREATE_COMPLETE state changed
>2018-09-21 12:14:40Z [overcloud.AllNodesDeploySteps.ObjectStorageHostPrepDeployment]: CREATE_COMPLETE state changed
>2018-09-21 12:14:40Z [overcloud.AllNodesDeploySteps.ComputeArtifactsConfig.DeployArtifacts]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:40Z [overcloud.AllNodesDeploySteps.ObjectStoragePreConfig]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:40Z [overcloud.AllNodesDeploySteps.ComputeArtifactsConfig.DeployArtifacts]: CREATE_COMPLETE state changed
>2018-09-21 12:14:40Z [overcloud.AllNodesDeploySteps.CephStorageArtifactsConfig]: CREATE_IN_PROGRESS Stack CREATE started
>2018-09-21 12:14:40Z [overcloud.AllNodesDeploySteps.ControllerHostPrepDeployment]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:40Z [overcloud.AllNodesDeploySteps.RoleConfig]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:40Z [overcloud.AllNodesDeploySteps.CephStorageArtifactsConfig.DeployArtifacts]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:40Z [overcloud.AllNodesDeploySteps.CephStorageArtifactsConfig.DeployArtifacts]: CREATE_COMPLETE state changed
>2018-09-21 12:14:40Z [overcloud.AllNodesDeploySteps.RoleConfig]: CREATE_COMPLETE state changed
>2018-09-21 12:14:40Z [overcloud.AllNodesDeploySteps.CephStorageArtifactsConfig]: CREATE_COMPLETE Stack CREATE completed successfully
>2018-09-21 12:14:41Z [overcloud.AllNodesDeploySteps.BlockStorageHostPrepDeployment]: CREATE_COMPLETE state changed
>2018-09-21 12:14:41Z [overcloud.AllNodesDeploySteps.ControllerArtifactsDeploy]: CREATE_IN_PROGRESS Stack CREATE started
>2018-09-21 12:14:41Z [overcloud.AllNodesDeploySteps.ObjectStoragePreConfig]: CREATE_COMPLETE state changed
>2018-09-21 12:14:41Z [overcloud.AllNodesDeploySteps.ComputeArtifactsConfig]: CREATE_COMPLETE Stack CREATE completed successfully
>2018-09-21 12:14:41Z [overcloud.AllNodesDeploySteps.CephStorageHostPrepDeployment.0]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:41Z [overcloud.AllNodesDeploySteps.CephStorageArtifactsConfig]: CREATE_COMPLETE state changed
>2018-09-21 12:14:41Z [overcloud.AllNodesDeploySteps.ComputeArtifactsConfig]: CREATE_COMPLETE state changed
>2018-09-21 12:14:42Z [overcloud.AllNodesDeploySteps.BlockStoragePreConfig]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:42Z [overcloud.AllNodesDeploySteps.CephStorageArtifactsDeploy]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:42Z [overcloud.AllNodesDeploySteps.BlockStoragePreConfig]: CREATE_COMPLETE state changed
>2018-09-21 12:14:42Z [overcloud.AllNodesDeploySteps.ComputeArtifactsDeploy]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:42Z [overcloud.AllNodesDeploySteps.ControllerHostPrepDeployment]: CREATE_IN_PROGRESS Stack CREATE started
>2018-09-21 12:14:42Z [overcloud.AllNodesDeploySteps.BlockStorageArtifactsConfig]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:43Z [overcloud.AllNodesDeploySteps.ExternalUpgradeTasks]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:43Z [overcloud.AllNodesDeploySteps.ExternalUpgradeTasks]: CREATE_COMPLETE state changed
>2018-09-21 12:14:43Z [overcloud.AllNodesDeploySteps.CephStorageArtifactsDeploy]: CREATE_IN_PROGRESS Stack CREATE started
>2018-09-21 12:14:43Z [overcloud.AllNodesDeploySteps.ControllerArtifactsDeploy.0]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:44Z [overcloud.AllNodesDeploySteps.ComputeArtifactsDeploy]: CREATE_IN_PROGRESS Stack CREATE started
>2018-09-21 12:14:44Z [overcloud.AllNodesDeploySteps.ControllerHostPrepDeployment.0]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:45Z [overcloud.AllNodesDeploySteps.CephStorageArtifactsDeploy.0]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:46Z [overcloud.AllNodesDeploySteps.ComputeArtifactsDeploy.0]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:47Z [DeployArtifacts]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:47Z [DeployArtifacts]: CREATE_COMPLETE state changed
>2018-09-21 12:14:47Z [overcloud.AllNodesDeploySteps.BlockStorageArtifactsConfig]: CREATE_COMPLETE Stack CREATE completed successfully
>2018-09-21 12:14:48Z [overcloud.AllNodesDeploySteps.BlockStorageArtifactsConfig]: CREATE_COMPLETE state changed
>2018-09-21 12:14:49Z [overcloud.AllNodesDeploySteps.ComputeHostPrepDeployment.0]: CREATE_COMPLETE state changed
>2018-09-21 12:14:49Z [overcloud.AllNodesDeploySteps.ComputeHostPrepDeployment]: CREATE_COMPLETE Stack CREATE completed successfully
>2018-09-21 12:14:49Z [overcloud.AllNodesDeploySteps.BlockStorageArtifactsDeploy]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:49Z [overcloud.AllNodesDeploySteps.ComputeHostPrepDeployment]: CREATE_COMPLETE state changed
>2018-09-21 12:14:51Z [overcloud.AllNodesDeploySteps.ComputePreConfig]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:51Z [overcloud.AllNodesDeploySteps.ComputePreConfig]: CREATE_COMPLETE state changed
>2018-09-21 12:14:52Z [overcloud.AllNodesDeploySteps.BlockStorageArtifactsDeploy]: CREATE_COMPLETE state changed
>2018-09-21 12:14:53Z [overcloud.AllNodesDeploySteps.ControllerArtifactsDeploy.0]: CREATE_COMPLETE state changed
>2018-09-21 12:14:53Z [overcloud.AllNodesDeploySteps.CephStorageHostPrepDeployment.0]: CREATE_COMPLETE state changed
>2018-09-21 12:14:53Z [overcloud.AllNodesDeploySteps.ControllerArtifactsDeploy]: CREATE_COMPLETE Stack CREATE completed successfully
>2018-09-21 12:14:53Z [overcloud.AllNodesDeploySteps.CephStorageHostPrepDeployment]: CREATE_COMPLETE Stack CREATE completed successfully
>2018-09-21 12:14:54Z [overcloud.AllNodesDeploySteps.ControllerArtifactsDeploy]: CREATE_COMPLETE state changed
>2018-09-21 12:14:54Z [overcloud.AllNodesDeploySteps.ControllerHostPrepDeployment.0]: CREATE_COMPLETE state changed
>2018-09-21 12:14:54Z [overcloud.AllNodesDeploySteps.CephStorageArtifactsDeploy.0]: CREATE_COMPLETE state changed
>2018-09-21 12:14:54Z [overcloud.AllNodesDeploySteps.CephStorageArtifactsDeploy]: CREATE_COMPLETE Stack CREATE completed successfully
>2018-09-21 12:14:54Z [overcloud.AllNodesDeploySteps.CephStorageArtifactsDeploy]: CREATE_COMPLETE state changed
>2018-09-21 12:14:54Z [overcloud.AllNodesDeploySteps.CephStorageHostPrepDeployment]: CREATE_COMPLETE state changed
>2018-09-21 12:14:54Z [overcloud.AllNodesDeploySteps.ControllerHostPrepDeployment]: CREATE_COMPLETE Stack CREATE completed successfully
>2018-09-21 12:14:54Z [overcloud.AllNodesDeploySteps.CephStoragePreConfig]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:54Z [overcloud.AllNodesDeploySteps.ComputeArtifactsDeploy.0]: CREATE_COMPLETE state changed
>2018-09-21 12:14:54Z [overcloud.AllNodesDeploySteps.CephStoragePreConfig]: CREATE_COMPLETE state changed
>2018-09-21 12:14:54Z [overcloud.AllNodesDeploySteps.ComputeArtifactsDeploy]: CREATE_COMPLETE Stack CREATE completed successfully
>2018-09-21 12:14:55Z [overcloud.AllNodesDeploySteps.ComputeArtifactsDeploy]: CREATE_COMPLETE state changed
>2018-09-21 12:14:55Z [overcloud.AllNodesDeploySteps.ControllerHostPrepDeployment]: CREATE_COMPLETE state changed
>2018-09-21 12:14:55Z [overcloud.AllNodesDeploySteps.ControllerPreConfig]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:55Z [overcloud.AllNodesDeploySteps.ControllerPreConfig]: CREATE_COMPLETE state changed
>2018-09-21 12:14:55Z [overcloud.AllNodesDeploySteps.ObjectStorageDeployment_Step1]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:55Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step1]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:55Z [overcloud.AllNodesDeploySteps.ObjectStorageDeployment_Step1]: CREATE_COMPLETE state changed
>2018-09-21 12:14:55Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step1]: CREATE_COMPLETE state changed
>2018-09-21 12:14:55Z [overcloud.AllNodesDeploySteps.BlockStorageDeployment_Step1]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:55Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step1]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:55Z [overcloud.AllNodesDeploySteps.BlockStorageDeployment_Step1]: CREATE_COMPLETE state changed
>2018-09-21 12:14:55Z [overcloud.AllNodesDeploySteps.ComputeDeployment_Step1]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:55Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step1]: CREATE_COMPLETE state changed
>2018-09-21 12:14:56Z [overcloud.AllNodesDeploySteps.ComputeDeployment_Step1]: CREATE_COMPLETE state changed
>2018-09-21 12:14:56Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step2]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:56Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step2]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:56Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step2]: CREATE_COMPLETE state changed
>2018-09-21 12:14:56Z [overcloud.AllNodesDeploySteps.BlockStorageDeployment_Step2]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:56Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step2]: CREATE_COMPLETE state changed
>2018-09-21 12:14:56Z [overcloud.AllNodesDeploySteps.ObjectStorageDeployment_Step2]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:56Z [overcloud.AllNodesDeploySteps.BlockStorageDeployment_Step2]: CREATE_COMPLETE state changed
>2018-09-21 12:14:56Z [overcloud.AllNodesDeploySteps.ObjectStorageDeployment_Step2]: CREATE_COMPLETE state changed
>2018-09-21 12:14:56Z [overcloud.AllNodesDeploySteps.ComputeDeployment_Step2]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:56Z [overcloud.AllNodesDeploySteps.ComputeDeployment_Step2]: CREATE_COMPLETE state changed
>2018-09-21 12:14:56Z [overcloud.AllNodesDeploySteps.ObjectStorageDeployment_Step3]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:56Z [overcloud.AllNodesDeploySteps.ComputeDeployment_Step3]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:56Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step3]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:56Z [overcloud.AllNodesDeploySteps.ObjectStorageDeployment_Step3]: CREATE_COMPLETE state changed
>2018-09-21 12:14:56Z [overcloud.AllNodesDeploySteps.ComputeDeployment_Step3]: CREATE_COMPLETE state changed
>2018-09-21 12:14:57Z [overcloud.AllNodesDeploySteps.BlockStorageDeployment_Step3]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:57Z [overcloud.AllNodesDeploySteps.BlockStorageDeployment_Step3]: CREATE_COMPLETE state changed
>2018-09-21 12:14:57Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step3]: CREATE_COMPLETE state changed
>2018-09-21 12:14:57Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step3]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:57Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step3]: CREATE_COMPLETE state changed
>2018-09-21 12:14:57Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step4]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:57Z [overcloud.AllNodesDeploySteps.ObjectStorageDeployment_Step4]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:57Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step4]: CREATE_COMPLETE state changed
>2018-09-21 12:14:57Z [overcloud.AllNodesDeploySteps.ObjectStorageDeployment_Step4]: CREATE_COMPLETE state changed
>2018-09-21 12:14:57Z [overcloud.AllNodesDeploySteps.BlockStorageDeployment_Step4]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:57Z [overcloud.AllNodesDeploySteps.ComputeDeployment_Step4]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:57Z [overcloud.AllNodesDeploySteps.BlockStorageDeployment_Step4]: CREATE_COMPLETE state changed
>2018-09-21 12:14:57Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step4]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:57Z [overcloud.AllNodesDeploySteps.ComputeDeployment_Step4]: CREATE_COMPLETE state changed
>2018-09-21 12:14:57Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step4]: CREATE_COMPLETE state changed
>2018-09-21 12:14:57Z [overcloud.AllNodesDeploySteps.BlockStorageDeployment_Step5]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:58Z [overcloud.AllNodesDeploySteps.ObjectStorageDeployment_Step5]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:58Z [overcloud.AllNodesDeploySteps.ComputeDeployment_Step5]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:58Z [overcloud.AllNodesDeploySteps.BlockStorageDeployment_Step5]: CREATE_COMPLETE state changed
>2018-09-21 12:14:58Z [overcloud.AllNodesDeploySteps.ComputeDeployment_Step5]: CREATE_COMPLETE state changed
>2018-09-21 12:14:58Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step5]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:58Z [overcloud.AllNodesDeploySteps.ObjectStorageDeployment_Step5]: CREATE_COMPLETE state changed
>2018-09-21 12:14:58Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step5]: CREATE_COMPLETE state changed
>2018-09-21 12:14:58Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step5]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:58Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step5]: CREATE_COMPLETE state changed
>2018-09-21 12:14:58Z [overcloud.AllNodesDeploySteps.CephStorageExtraConfigPost]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:58Z [overcloud.AllNodesDeploySteps.ControllerExtraConfigPost]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:58Z [overcloud.AllNodesDeploySteps.BlockStorageExtraConfigPost]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:58Z [overcloud.AllNodesDeploySteps.ComputeExtraConfigPost]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:14:58Z [overcloud.AllNodesDeploySteps.ObjectStorageExtraConfigPost]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:15:00Z [overcloud.AllNodesDeploySteps.ControllerExtraConfigPost]: CREATE_COMPLETE state changed
>2018-09-21 12:15:00Z [overcloud.AllNodesDeploySteps.BlockStorageExtraConfigPost]: CREATE_COMPLETE state changed
>2018-09-21 12:15:01Z [overcloud.AllNodesDeploySteps.ComputeExtraConfigPost]: CREATE_COMPLETE state changed
>2018-09-21 12:15:01Z [overcloud.AllNodesDeploySteps.CephStorageExtraConfigPost]: CREATE_COMPLETE state changed
>2018-09-21 12:15:02Z [overcloud.AllNodesDeploySteps.ObjectStorageExtraConfigPost]: CREATE_COMPLETE state changed
>2018-09-21 12:15:02Z [overcloud.AllNodesDeploySteps.CephStoragePostConfig]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:15:02Z [overcloud.AllNodesDeploySteps.CephStoragePostConfig]: CREATE_COMPLETE state changed
>2018-09-21 12:15:02Z [overcloud.AllNodesDeploySteps.ComputePostConfig]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:15:02Z [overcloud.AllNodesDeploySteps.ObjectStoragePostConfig]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:15:02Z [overcloud.AllNodesDeploySteps.BlockStoragePostConfig]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:15:02Z [overcloud.AllNodesDeploySteps.ControllerPostConfig]: CREATE_IN_PROGRESS state changed
>2018-09-21 12:15:02Z [overcloud.AllNodesDeploySteps.ComputePostConfig]: CREATE_COMPLETE state changed
>2018-09-21 12:15:02Z [overcloud.AllNodesDeploySteps.BlockStoragePostConfig]: CREATE_COMPLETE state changed
>2018-09-21 12:15:02Z [overcloud.AllNodesDeploySteps.ObjectStoragePostConfig]: CREATE_COMPLETE state changed
>Generating public/private rsa key pair.
>Your identification has been saved in /tmp/tmpMBjlUT/id_rsa.
>Your public key has been saved in /tmp/tmpMBjlUT/id_rsa.pub.
>The key fingerprint is:
>SHA256:GgBMkKEuVjoMNKDstiOwxWbcY7ivvhUfEWpPNpvbhTk TripleO split stack short term key
>The key's randomart image is:
>+---[RSA 4096]----+
>|=Oo . |
>|* o. . . |
>|+. .+ = |
>|=oo+ = = o |
>|oOB = * E . |
>|==oo + * o |
>|oo. . + . |
>|. .o |
>| .+o. |
>+----[SHA256]-----+
>Warning: Permanently added '192.168.24.18' (ECDSA) to the list of known hosts.
>Warning: Permanently added '192.168.24.8' (ECDSA) to the list of known hosts.
>Warning: Permanently added '192.168.24.6' (ECDSA) to the list of known hosts.
>Warning: Permanently added '192.168.24.18' (ECDSA) to the list of known hosts.
>Warning: Permanently added '192.168.24.8' (ECDSA) to the list of known hosts.
>Warning: Permanently added '192.168.24.6' (ECDSA) to the list of known hosts.
>2018-09-21 12:15:02Z [overcloud.AllNodesDeploySteps.ControllerPostConfig]: CREATE_COMPLETE state changed
>2018-09-21 12:15:02Z [overcloud.AllNodesDeploySteps]: CREATE_COMPLETE Stack CREATE completed successfully
>2018-09-21 12:15:03Z [overcloud.AllNodesDeploySteps]: CREATE_COMPLETE state changed
>2018-09-21 12:15:03Z [overcloud]: CREATE_COMPLETE Stack CREATE completed successfully
>
> Stack overcloud/5335416b-3e8a-4b65-93f3-c84c558d46eb CREATE_COMPLETE
>
>Deploying overcloud configuration
>Enabling ssh admin (tripleo-admin) for hosts:
>192.168.24.18 192.168.24.8 192.168.24.6
>Using ssh user heat-admin for initial connection.
>Using ssh key at /home/stack/.ssh/id_rsa for initial connection.
>Inserting TripleO short term key for 192.168.24.18
>Inserting TripleO short term key for 192.168.24.8
>Inserting TripleO short term key for 192.168.24.6
>Starting ssh admin enablement workflow
>ssh admin enablement workflow - RUNNING.
>ssh admin enablement workflow - RUNNING.
>ssh admin enablement workflow - COMPLETE.
>Removing TripleO short term key from 192.168.24.18
>Removing TripleO short term key from 192.168.24.8
>Removing TripleO short term key from 192.168.24.6
>Removing short term keys locally
>Enabling ssh admin - COMPLETE.
>Config downloaded at /var/lib/mistral/overcloud
>Inventory generated at /var/lib/mistral/overcloud/tripleo-ansible-inventory.yaml
>Running ansible playbook at /var/lib/mistral/overcloud/deploy_steps_playbook.yaml. See log file at /var/lib/mistral/overcloud/ansible.log for progress. ...
>
>Using /var/lib/mistral/overcloud/ansible.cfg as config file
>
>PLAY [Gather facts from undercloud] ********************************************
>
>TASK [Gathering Facts] *********************************************************
>Friday 21 September 2018 08:16:37 -0400 (0:00:00.111) 0:00:00.111 ******
>ok: [undercloud]
>
>PLAY [Gather facts from overcloud] *********************************************
>
>TASK [Gathering Facts] *********************************************************
>Friday 21 September 2018 08:16:51 -0400 (0:00:13.867) 0:00:13.978 ******
>ok: [compute-0]
>ok: [controller-0]
>ok: [ceph-0]
>
>PLAY [Load global variables] ***************************************************
>
>TASK [include_vars] ************************************************************
>Friday 21 September 2018 08:16:55 -0400 (0:00:04.244) 0:00:18.223 ******
>ok: [undercloud] => {"ansible_facts": {"deploy_steps_max": 6, "ssh_known_hosts": {"ceph-0": "[172.17.3.11]*,[ceph-0.localdomain]*,[ceph-0]*,[172.17.3.11]*,[ceph-0.storage.localdomain]*,[ceph-0.storage]*,[172.17.4.15]*,[ceph-0.storagemgmt.localdomain]*,[ceph-0.storagemgmt]*,[192.168.24.6]*,[ceph-0.internalapi.localdomain]*,[ceph-0.internalapi]*,[192.168.24.6]*,[ceph-0.tenant.localdomain]*,[ceph-0.tenant]*,[192.168.24.6]*,[ceph-0.external.localdomain]*,[ceph-0.external]*,[192.168.24.6]*,[ceph-0.management.localdomain]*,[ceph-0.management]*,[192.168.24.6]*,[ceph-0.ctlplane.localdomain]*,[ceph-0.ctlplane]*", "compute-0": "[172.17.1.12]*,[compute-0.localdomain]*,[compute-0]*,[172.17.3.10]*,[compute-0.storage.localdomain]*,[compute-0.storage]*,[192.168.24.8]*,[compute-0.storagemgmt.localdomain]*,[compute-0.storagemgmt]*,[172.17.1.12]*,[compute-0.internalapi.localdomain]*,[compute-0.internalapi]*,[172.17.2.21]*,[compute-0.tenant.localdomain]*,[compute-0.tenant]*,[192.168.24.8]*,[compute-0.external.localdomain]*,[compute-0.external]*,[192.168.24.8]*,[compute-0.management.localdomain]*,[compute-0.management]*,[192.168.24.8]*,[compute-0.ctlplane.localdomain]*,[compute-0.ctlplane]*", "controller-0": "[172.17.1.17]*,[controller-0.localdomain]*,[controller-0]*,[172.17.3.16]*,[controller-0.storage.localdomain]*,[controller-0.storage]*,[172.17.4.14]*,[controller-0.storagemgmt.localdomain]*,[controller-0.storagemgmt]*,[172.17.1.17]*,[controller-0.internalapi.localdomain]*,[controller-0.internalapi]*,[172.17.2.22]*,[controller-0.tenant.localdomain]*,[controller-0.tenant]*,[10.0.0.117]*,[controller-0.external.localdomain]*,[controller-0.external]*,[192.168.24.18]*,[controller-0.management.localdomain]*,[controller-0.management]*,[192.168.24.18]*,[controller-0.ctlplane.localdomain]*,[controller-0.ctlplane]*"}}, "ansible_included_var_files": ["/var/lib/mistral/overcloud/global_vars.yaml"], "changed": false}
>ok: [controller-0] => {"ansible_facts": {"deploy_steps_max": 6, "ssh_known_hosts": {"ceph-0": "[172.17.3.11]*,[ceph-0.localdomain]*,[ceph-0]*,[172.17.3.11]*,[ceph-0.storage.localdomain]*,[ceph-0.storage]*,[172.17.4.15]*,[ceph-0.storagemgmt.localdomain]*,[ceph-0.storagemgmt]*,[192.168.24.6]*,[ceph-0.internalapi.localdomain]*,[ceph-0.internalapi]*,[192.168.24.6]*,[ceph-0.tenant.localdomain]*,[ceph-0.tenant]*,[192.168.24.6]*,[ceph-0.external.localdomain]*,[ceph-0.external]*,[192.168.24.6]*,[ceph-0.management.localdomain]*,[ceph-0.management]*,[192.168.24.6]*,[ceph-0.ctlplane.localdomain]*,[ceph-0.ctlplane]*", "compute-0": "[172.17.1.12]*,[compute-0.localdomain]*,[compute-0]*,[172.17.3.10]*,[compute-0.storage.localdomain]*,[compute-0.storage]*,[192.168.24.8]*,[compute-0.storagemgmt.localdomain]*,[compute-0.storagemgmt]*,[172.17.1.12]*,[compute-0.internalapi.localdomain]*,[compute-0.internalapi]*,[172.17.2.21]*,[compute-0.tenant.localdomain]*,[compute-0.tenant]*,[192.168.24.8]*,[compute-0.external.localdomain]*,[compute-0.external]*,[192.168.24.8]*,[compute-0.management.localdomain]*,[compute-0.management]*,[192.168.24.8]*,[compute-0.ctlplane.localdomain]*,[compute-0.ctlplane]*", "controller-0": "[172.17.1.17]*,[controller-0.localdomain]*,[controller-0]*,[172.17.3.16]*,[controller-0.storage.localdomain]*,[controller-0.storage]*,[172.17.4.14]*,[controller-0.storagemgmt.localdomain]*,[controller-0.storagemgmt]*,[172.17.1.17]*,[controller-0.internalapi.localdomain]*,[controller-0.internalapi]*,[172.17.2.22]*,[controller-0.tenant.localdomain]*,[controller-0.tenant]*,[10.0.0.117]*,[controller-0.external.localdomain]*,[controller-0.external]*,[192.168.24.18]*,[controller-0.management.localdomain]*,[controller-0.management]*,[192.168.24.18]*,[controller-0.ctlplane.localdomain]*,[controller-0.ctlplane]*"}}, "ansible_included_var_files": ["/var/lib/mistral/overcloud/global_vars.yaml"], "changed": false}
>ok: [compute-0] => {"ansible_facts": {"deploy_steps_max": 6, "ssh_known_hosts": {"ceph-0": "[172.17.3.11]*,[ceph-0.localdomain]*,[ceph-0]*,[172.17.3.11]*,[ceph-0.storage.localdomain]*,[ceph-0.storage]*,[172.17.4.15]*,[ceph-0.storagemgmt.localdomain]*,[ceph-0.storagemgmt]*,[192.168.24.6]*,[ceph-0.internalapi.localdomain]*,[ceph-0.internalapi]*,[192.168.24.6]*,[ceph-0.tenant.localdomain]*,[ceph-0.tenant]*,[192.168.24.6]*,[ceph-0.external.localdomain]*,[ceph-0.external]*,[192.168.24.6]*,[ceph-0.management.localdomain]*,[ceph-0.management]*,[192.168.24.6]*,[ceph-0.ctlplane.localdomain]*,[ceph-0.ctlplane]*", "compute-0": "[172.17.1.12]*,[compute-0.localdomain]*,[compute-0]*,[172.17.3.10]*,[compute-0.storage.localdomain]*,[compute-0.storage]*,[192.168.24.8]*,[compute-0.storagemgmt.localdomain]*,[compute-0.storagemgmt]*,[172.17.1.12]*,[compute-0.internalapi.localdomain]*,[compute-0.internalapi]*,[172.17.2.21]*,[compute-0.tenant.localdomain]*,[compute-0.tenant]*,[192.168.24.8]*,[compute-0.external.localdomain]*,[compute-0.external]*,[192.168.24.8]*,[compute-0.management.localdomain]*,[compute-0.management]*,[192.168.24.8]*,[compute-0.ctlplane.localdomain]*,[compute-0.ctlplane]*", "controller-0": "[172.17.1.17]*,[controller-0.localdomain]*,[controller-0]*,[172.17.3.16]*,[controller-0.storage.localdomain]*,[controller-0.storage]*,[172.17.4.14]*,[controller-0.storagemgmt.localdomain]*,[controller-0.storagemgmt]*,[172.17.1.17]*,[controller-0.internalapi.localdomain]*,[controller-0.internalapi]*,[172.17.2.22]*,[controller-0.tenant.localdomain]*,[controller-0.tenant]*,[10.0.0.117]*,[controller-0.external.localdomain]*,[controller-0.external]*,[192.168.24.18]*,[controller-0.management.localdomain]*,[controller-0.management]*,[192.168.24.18]*,[controller-0.ctlplane.localdomain]*,[controller-0.ctlplane]*"}}, "ansible_included_var_files": ["/var/lib/mistral/overcloud/global_vars.yaml"], "changed": false}
>ok: [ceph-0] => {"ansible_facts": {"deploy_steps_max": 6, "ssh_known_hosts": {"ceph-0": "[172.17.3.11]*,[ceph-0.localdomain]*,[ceph-0]*,[172.17.3.11]*,[ceph-0.storage.localdomain]*,[ceph-0.storage]*,[172.17.4.15]*,[ceph-0.storagemgmt.localdomain]*,[ceph-0.storagemgmt]*,[192.168.24.6]*,[ceph-0.internalapi.localdomain]*,[ceph-0.internalapi]*,[192.168.24.6]*,[ceph-0.tenant.localdomain]*,[ceph-0.tenant]*,[192.168.24.6]*,[ceph-0.external.localdomain]*,[ceph-0.external]*,[192.168.24.6]*,[ceph-0.management.localdomain]*,[ceph-0.management]*,[192.168.24.6]*,[ceph-0.ctlplane.localdomain]*,[ceph-0.ctlplane]*", "compute-0": "[172.17.1.12]*,[compute-0.localdomain]*,[compute-0]*,[172.17.3.10]*,[compute-0.storage.localdomain]*,[compute-0.storage]*,[192.168.24.8]*,[compute-0.storagemgmt.localdomain]*,[compute-0.storagemgmt]*,[172.17.1.12]*,[compute-0.internalapi.localdomain]*,[compute-0.internalapi]*,[172.17.2.21]*,[compute-0.tenant.localdomain]*,[compute-0.tenant]*,[192.168.24.8]*,[compute-0.external.localdomain]*,[compute-0.external]*,[192.168.24.8]*,[compute-0.management.localdomain]*,[compute-0.management]*,[192.168.24.8]*,[compute-0.ctlplane.localdomain]*,[compute-0.ctlplane]*", "controller-0": "[172.17.1.17]*,[controller-0.localdomain]*,[controller-0]*,[172.17.3.16]*,[controller-0.storage.localdomain]*,[controller-0.storage]*,[172.17.4.14]*,[controller-0.storagemgmt.localdomain]*,[controller-0.storagemgmt]*,[172.17.1.17]*,[controller-0.internalapi.localdomain]*,[controller-0.internalapi]*,[172.17.2.22]*,[controller-0.tenant.localdomain]*,[controller-0.tenant]*,[10.0.0.117]*,[controller-0.external.localdomain]*,[controller-0.external]*,[192.168.24.18]*,[controller-0.management.localdomain]*,[controller-0.management]*,[192.168.24.18]*,[controller-0.ctlplane.localdomain]*,[controller-0.ctlplane]*"}}, "ansible_included_var_files": ["/var/lib/mistral/overcloud/global_vars.yaml"], "changed": false}
>
>PLAY [Common roles for TripleO servers] ****************************************
>
>TASK [tripleo-bootstrap : Deploy required packages to bootstrap TripleO] *******
>Friday 21 September 2018 08:16:55 -0400 (0:00:00.206) 0:00:18.429 ******
>ok: [compute-0] => {"changed": false, "msg": "", "rc": 0, "results": ["openstack-heat-agents-1.7.1-0.20180907213355.476aae2.el7ost.noarch providing openstack-heat-agents is already installed", "jq-1.3-4.el7ost.x86_64 providing jq is already installed"]}
>ok: [ceph-0] => {"changed": false, "msg": "", "rc": 0, "results": ["openstack-heat-agents-1.7.1-0.20180907213355.476aae2.el7ost.noarch providing openstack-heat-agents is already installed", "jq-1.3-4.el7ost.x86_64 providing jq is already installed"]}
>ok: [controller-0] => {"changed": false, "msg": "", "rc": 0, "results": ["openstack-heat-agents-1.7.1-0.20180907213355.476aae2.el7ost.noarch providing openstack-heat-agents is already installed", "jq-1.3-4.el7ost.x86_64 providing jq is already installed"]}
>
>TASK [tripleo-bootstrap : Create /var/lib/heat-config/tripleo-config-download directory for deployment data] ***
>Friday 21 September 2018 08:16:56 -0400 (0:00:00.908) 0:00:19.337 ******
>changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/heat-config/tripleo-config-download", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0}
>changed: [ceph-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/heat-config/tripleo-config-download", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0}
>changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/heat-config/tripleo-config-download", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0}
>
>TASK [tripleo-ssh-known-hosts : Add hosts key in /etc/ssh/ssh_known_hosts for live/cold-migration] ***
>Friday 21 September 2018 08:16:57 -0400 (0:00:00.406) 0:00:19.743 ******
>changed: [ceph-0] => (item=compute-0) => {"backup": "", "changed": true, "item": "compute-0", "msg": "line added"}
>changed: [controller-0] => (item=compute-0) => {"backup": "", "changed": true, "item": "compute-0", "msg": "line added"}
>changed: [compute-0] => (item=compute-0) => {"backup": "", "changed": true, "item": "compute-0", "msg": "line added"}
>changed: [compute-0] => (item=controller-0) => {"backup": "", "changed": true, "item": "controller-0", "msg": "line added"}
>changed: [ceph-0] => (item=controller-0) => {"backup": "", "changed": true, "item": "controller-0", "msg": "line added"}
>changed: [controller-0] => (item=controller-0) => {"backup": "", "changed": true, "item": "controller-0", "msg": "line added"}
>changed: [compute-0] => (item=ceph-0) => {"backup": "", "changed": true, "item": "ceph-0", "msg": "line added"}
>changed: [controller-0] => (item=ceph-0) => {"backup": "", "changed": true, "item": "ceph-0", "msg": "line added"}
>changed: [ceph-0] => (item=ceph-0) => {"backup": "", "changed": true, "item": "ceph-0", "msg": "line added"}
>
>PLAY [Overcloud deploy step tasks for step 0] **********************************
>
>PLAY [Server deployments] ******************************************************
>
>TASK [include_tasks] ***********************************************************
>Friday 21 September 2018 08:16:58 -0400 (0:00:00.959) 0:00:20.703 ******
>included: /var/lib/mistral/overcloud/deployments.yaml for controller-0, compute-0, ceph-0
>included: /var/lib/mistral/overcloud/deployments.yaml for controller-0
>included: /var/lib/mistral/overcloud/deployments.yaml for controller-0, compute-0, ceph-0
>included: /var/lib/mistral/overcloud/deployments.yaml for controller-0
>included: /var/lib/mistral/overcloud/deployments.yaml for controller-0
>included: /var/lib/mistral/overcloud/deployments.yaml for controller-0
>included: /var/lib/mistral/overcloud/deployments.yaml for controller-0
>included: /var/lib/mistral/overcloud/deployments.yaml for controller-0
>included: /var/lib/mistral/overcloud/deployments.yaml for controller-0
>included: /var/lib/mistral/overcloud/deployments.yaml for compute-0
>included: /var/lib/mistral/overcloud/deployments.yaml for compute-0
>included: /var/lib/mistral/overcloud/deployments.yaml for compute-0
>included: /var/lib/mistral/overcloud/deployments.yaml for compute-0
>included: /var/lib/mistral/overcloud/deployments.yaml for compute-0
>included: /var/lib/mistral/overcloud/deployments.yaml for compute-0
>included: /var/lib/mistral/overcloud/deployments.yaml for compute-0
>included: /var/lib/mistral/overcloud/deployments.yaml for ceph-0
>included: /var/lib/mistral/overcloud/deployments.yaml for ceph-0
>included: /var/lib/mistral/overcloud/deployments.yaml for ceph-0
>included: /var/lib/mistral/overcloud/deployments.yaml for ceph-0
>included: /var/lib/mistral/overcloud/deployments.yaml for ceph-0
>included: /var/lib/mistral/overcloud/deployments.yaml for ceph-0
>included: /var/lib/mistral/overcloud/deployments.yaml for ceph-0
>
>TASK [Lookup deployment UUID] **************************************************
>Friday 21 September 2018 08:16:59 -0400 (0:00:00.889) 0:00:21.592 ******
>ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "95c736a3-01d9-4926-a01f-973b9789b07f"}, "changed": false}
>ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "ef1b7865-3d5e-4e63-885e-e798696a27d3"}, "changed": false}
>ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "e633e879-7a65-4081-ad7b-bf66c7035600"}, "changed": false}
>
>TASK [Render deployment file for NetworkDeployment] ****************************
>Friday 21 September 2018 08:16:59 -0400 (0:00:00.151) 0:00:21.744 ******
>changed: [compute-0] => {"changed": true, "checksum": "b0c8f0250ffcc26178fc47d8d923e001991ca5de", "dest": "/var/lib/heat-config/tripleo-config-download/NetworkDeployment-ef1b7865-3d5e-4e63-885e-e798696a27d3", "gid": 0, "group": "root", "md5sum": "a1dbe923ec9125fbd85d60e6cbf5cef6", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 9256, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532219.36-7952391070164/source", "state": "file", "uid": 0}
>changed: [ceph-0] => {"changed": true, "checksum": "4a3c495a0444023b0004cd49cab1a3078efb5bd5", "dest": "/var/lib/heat-config/tripleo-config-download/NetworkDeployment-e633e879-7a65-4081-ad7b-bf66c7035600", "gid": 0, "group": "root", "md5sum": "b1ebd98e61671bce2035d3673a87ae24", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 8774, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532219.38-13881183274114/source", "state": "file", "uid": 0}
>changed: [controller-0] => {"changed": true, "checksum": "79993c6c12a05c966536ef673f39051da1fa145d", "dest": "/var/lib/heat-config/tripleo-config-download/NetworkDeployment-95c736a3-01d9-4926-a01f-973b9789b07f", "gid": 0, "group": "root", "md5sum": "419117fd4db47bd71542e081a7219622", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 10198, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532219.33-12146189490025/source", "state": "file", "uid": 0}
>
>TASK [Check if deployed file exists for NetworkDeployment] *********************
>Friday 21 September 2018 08:17:00 -0400 (0:00:00.293) 0:00:22.590 ******
>ok: [controller-0] => {"changed": false, "stat": {"exists": false}}
>ok: [compute-0] => {"changed": false, "stat": {"exists": false}}
>ok: [ceph-0] => {"changed": false, "stat": {"exists": false}}
>
>TASK [Check previous deployment rc for NetworkDeployment] **********************
>Friday 21 September 2018 08:17:00 -0400 (0:00:00.099) 0:00:22.883 ******
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [Remove deployed file for NetworkDeployment when previous deployment failed] ***
>Friday 21 September 2018 08:17:00 -0400 (0:00:00.100) 0:00:22.982 ******
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [Force remove deployed file for NetworkDeployment] ************************
>Friday 21 September 2018 08:17:00 -0400 (0:00:00.100) 0:00:23.083 ******
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [Run deployment NetworkDeployment] ****************************************
>Friday 21 September 2018 08:17:00 -0400 (0:00:00.117) 0:00:23.201 ******
>changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/95c736a3-01d9-4926-a01f-973b9789b07f.notify.json)", "delta": "0:00:00.029327", "end": "2018-09-21 08:17:12.578255", "rc": 0, "start": "2018-09-21 08:17:12.548928", "stderr": "[2018-09-21 08:17:12,573] (heat-config) [WARNING] Skipping config 95c736a3-01d9-4926-a01f-973b9789b07f, already deployed\n[2018-09-21 08:17:12,573] (heat-config) [WARNING] To force-deploy, rm /var/lib/heat-config/deployed/95c736a3-01d9-4926-a01f-973b9789b07f.json\njq: /var/lib/heat-config/deployed/95c736a3-01d9-4926-a01f-973b9789b07f.notify.json: No such file or directory", "stderr_lines": ["[2018-09-21 08:17:12,573] (heat-config) [WARNING] Skipping config 95c736a3-01d9-4926-a01f-973b9789b07f, already deployed", "[2018-09-21 08:17:12,573] (heat-config) [WARNING] To force-deploy, rm /var/lib/heat-config/deployed/95c736a3-01d9-4926-a01f-973b9789b07f.json", "jq: /var/lib/heat-config/deployed/95c736a3-01d9-4926-a01f-973b9789b07f.notify.json: No such file or directory"], "stdout": "", "stdout_lines": []}
>
>changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/e633e879-7a65-4081-ad7b-bf66c7035600.notify.json)", "delta": "0:00:15.684724", "end": "2018-09-21 08:17:15.735902", "rc": 0, "start": "2018-09-21 08:17:00.051178", "stderr": "[2018-09-21 08:17:00,081] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/e633e879-7a65-4081-ad7b-bf66c7035600.json\n[2018-09-21 08:17:15,303] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.2...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.6/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.11/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.15/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.6/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, 
{\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.11/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.15/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/09/21 08:17:00 AM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/09/21 08:17:00 AM] [INFO] Ifcfg net config provider created.\\n[2018/09/21 08:17:00 AM] [INFO] Not using any mapping file.\\n[2018/09/21 08:17:00 AM] [INFO] Finding active nics\\n[2018/09/21 08:17:00 AM] [INFO] eth2 is an embedded active nic\\n[2018/09/21 08:17:00 AM] [INFO] eth1 is an embedded active nic\\n[2018/09/21 08:17:00 AM] [INFO] eth0 is an embedded active nic\\n[2018/09/21 08:17:00 AM] [INFO] lo is not an active nic\\n[2018/09/21 08:17:00 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/09/21 08:17:00 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/09/21 08:17:00 AM] [INFO] nic3 mapped to: eth2\\n[2018/09/21 08:17:00 AM] [INFO] nic2 mapped to: eth1\\n[2018/09/21 08:17:00 AM] [INFO] nic1 mapped to: eth0\\n[2018/09/21 08:17:00 AM] [INFO] adding interface: eth0\\n[2018/09/21 08:17:00 AM] [INFO] adding custom route for interface: eth0\\n[2018/09/21 08:17:00 AM] [INFO] adding bridge: br-isolated\\n[2018/09/21 08:17:00 AM] [INFO] adding interface: eth1\\n[2018/09/21 08:17:00 AM] [INFO] adding vlan: vlan30\\n[2018/09/21 08:17:00 AM] [INFO] adding vlan: vlan40\\n[2018/09/21 08:17:00 AM] [INFO] applying network configs...\\n[2018/09/21 08:17:00 AM] [INFO] running ifdown on interface: vlan30\\n[2018/09/21 08:17:00 AM] [INFO] running ifdown on interface: vlan40\\n[2018/09/21 08:17:00 AM] [INFO] running ifdown on interface: eth1\\n[2018/09/21 08:17:01 AM] [INFO] running ifdown on interface: eth0\\n[2018/09/21 08:17:01 AM] [INFO] running ifdown on interface: vlan30\\n[2018/09/21 08:17:01 AM] [INFO] running ifdown on interface: vlan40\\n[2018/09/21 08:17:01 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40\\n[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\\n[2018/09/21 08:17:01 
AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40\\n[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/09/21 08:17:01 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/09/21 08:17:01 AM] [INFO] running ifup on interface: eth1\\n[2018/09/21 08:17:01 AM] [INFO] running ifup on interface: eth0\\n[2018/09/21 08:17:06 AM] [INFO] running ifup on interface: vlan30\\n[2018/09/21 08:17:10 AM] [INFO] running ifup on interface: vlan40\\n[2018/09/21 08:17:14 AM] [INFO] running ifup on interface: vlan30\\n[2018/09/21 08:17:14 AM] [INFO] running ifup on interface: vlan40\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.2\\n++ '[' -n 192.168.24.2 ']'\\n++ break\\n++ echo 192.168.24.2\\n+ local METADATA_IP=192.168.24.2\\n+ '[' -n 192.168.24.2 ']'\\n+ is_local_ip 192.168.24.2\\n+ local IP_TO_CHECK=192.168.24.2\\n+ ip -o a\\n+ grep 'inet6\\\\? 
192.168.24.2/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.2...'\\n+ _ping=ping\\n+ [[ 192.168.24.2 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.2\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}\n[2018-09-21 08:17:15,303] (heat-config) [DEBUG] [2018-09-21 08:17:00,107] (heat-config) [INFO] interface_name=nic1\n[2018-09-21 08:17:00,108] (heat-config) [INFO] bridge_name=br-ex\n[2018-09-21 08:17:00,108] (heat-config) [INFO] deploy_server_id=f4c5361d-7430-47e6-b3a8-908850a79a79\n[2018-09-21 08:17:00,108] (heat-config) [INFO] deploy_action=CREATE\n[2018-09-21 08:17:00,108] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorage-bre5qklthc6f-0-gxpifqih23ze-NetworkDeployment-263ctk3zuvkd-TripleOSoftwareDeployment-6xum27qyfsh4/a21c346e-502b-4711-aaa0-a66c11d2513e\n[2018-09-21 08:17:00,108] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-09-21 08:17:00,108] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-09-21 08:17:00,108] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/e633e879-7a65-4081-ad7b-bf66c7035600\n[2018-09-21 08:17:15,299] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.2...SUCCESS\n\n[2018-09-21 08:17:15,299] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.6/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.11/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.15/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}' ']'\n+ '[' -z '' ']'\n+ trap configure_safe_defaults EXIT\n+ mkdir -p /etc/os-net-config\n+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.6/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.11/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.15/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}'\n++ type -t network_config_hook\n+ '[' '' = function ']'\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\n+ set +e\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\n[2018/09/21 08:17:00 AM] [INFO] Using config file at: /etc/os-net-config/config.json\n[2018/09/21 08:17:00 AM] [INFO] Ifcfg net config provider created.\n[2018/09/21 
08:17:00 AM] [INFO] Not using any mapping file.\n[2018/09/21 08:17:00 AM] [INFO] Finding active nics\n[2018/09/21 08:17:00 AM] [INFO] eth2 is an embedded active nic\n[2018/09/21 08:17:00 AM] [INFO] eth1 is an embedded active nic\n[2018/09/21 08:17:00 AM] [INFO] eth0 is an embedded active nic\n[2018/09/21 08:17:00 AM] [INFO] lo is not an active nic\n[2018/09/21 08:17:00 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\n[2018/09/21 08:17:00 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\n[2018/09/21 08:17:00 AM] [INFO] nic3 mapped to: eth2\n[2018/09/21 08:17:00 AM] [INFO] nic2 mapped to: eth1\n[2018/09/21 08:17:00 AM] [INFO] nic1 mapped to: eth0\n[2018/09/21 08:17:00 AM] [INFO] adding interface: eth0\n[2018/09/21 08:17:00 AM] [INFO] adding custom route for interface: eth0\n[2018/09/21 08:17:00 AM] [INFO] adding bridge: br-isolated\n[2018/09/21 08:17:00 AM] [INFO] adding interface: eth1\n[2018/09/21 08:17:00 AM] [INFO] adding vlan: vlan30\n[2018/09/21 08:17:00 AM] [INFO] adding vlan: vlan40\n[2018/09/21 08:17:00 AM] [INFO] applying network configs...\n[2018/09/21 08:17:00 AM] [INFO] running ifdown on interface: vlan30\n[2018/09/21 08:17:00 AM] [INFO] running ifdown on interface: vlan40\n[2018/09/21 08:17:00 AM] [INFO] running ifdown on interface: eth1\n[2018/09/21 08:17:01 AM] [INFO] running ifdown on interface: eth0\n[2018/09/21 08:17:01 AM] [INFO] running ifdown on interface: vlan30\n[2018/09/21 08:17:01 AM] [INFO] running ifdown on interface: vlan40\n[2018/09/21 08:17:01 AM] [INFO] running ifdown on bridge: br-isolated\n[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\n[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40\n[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\n[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\n[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\n[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\n[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\n[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\n[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\n[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\n[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\n[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40\n[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\n[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\n[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\n[2018/09/21 08:17:01 AM] [INFO] running ifup on bridge: br-isolated\n[2018/09/21 08:17:01 AM] [INFO] running ifup on interface: eth1\n[2018/09/21 08:17:01 AM] [INFO] running ifup on interface: eth0\n[2018/09/21 08:17:06 AM] [INFO] running ifup on interface: vlan30\n[2018/09/21 08:17:10 AM] [INFO] running ifup on interface: vlan40\n[2018/09/21 08:17:14 AM] [INFO] running ifup on interface: vlan30\n[2018/09/21 08:17:14 AM] [INFO] running ifup on interface: vlan40\n+ RETVAL=2\n+ set -e\n+ [[ 2 == 2 ]]\n+ ping_metadata_ip\n++ get_metadata_ip\n++ local METADATA_IP\n++ for 
URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=\n++ '[' -n '' ']'\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=\n++ '[' -n '' ']'\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=192.168.24.2\n++ '[' -n 192.168.24.2 ']'\n++ break\n++ echo 192.168.24.2\n+ local METADATA_IP=192.168.24.2\n+ '[' -n 192.168.24.2 ']'\n+ is_local_ip 192.168.24.2\n+ local IP_TO_CHECK=192.168.24.2\n+ ip -o a\n+ grep 'inet6\\? 192.168.24.2/'\n+ return 1\n+ echo -n 'Trying to ping metadata IP 192.168.24.2...'\n+ _ping=ping\n+ [[ 192.168.24.2 =~ : ]]\n+ local COUNT=0\n+ ping -c 1 192.168.24.2\n+ echo SUCCESS\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\n+ configure_safe_defaults\n+ [[ 0 == 0 ]]\n+ return 0\n\n[2018-09-21 08:17:15,300] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/e633e879-7a65-4081-ad7b-bf66c7035600\n\n[2018-09-21 08:17:15,304] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-09-21 08:17:15,304] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/e633e879-7a65-4081-ad7b-bf66c7035600.json < /var/lib/heat-config/deployed/e633e879-7a65-4081-ad7b-bf66c7035600.notify.json\n[2018-09-21 08:17:15,729] (heat-config) [INFO] \n[2018-09-21 08:17:15,729] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-09-21 08:17:00,081] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/e633e879-7a65-4081-ad7b-bf66c7035600.json", "[2018-09-21 08:17:15,303] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.2...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.6/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.11/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.15/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap 
configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.6/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.11/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.15/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/09/21 08:17:00 AM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/09/21 08:17:00 AM] [INFO] Ifcfg net config provider created.\\n[2018/09/21 08:17:00 AM] [INFO] Not using any mapping file.\\n[2018/09/21 08:17:00 AM] [INFO] Finding active nics\\n[2018/09/21 08:17:00 AM] [INFO] eth2 is an embedded active nic\\n[2018/09/21 08:17:00 AM] [INFO] eth1 is an embedded active nic\\n[2018/09/21 08:17:00 AM] [INFO] eth0 is an embedded active nic\\n[2018/09/21 08:17:00 AM] [INFO] lo is not an active nic\\n[2018/09/21 08:17:00 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/09/21 08:17:00 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/09/21 08:17:00 AM] [INFO] nic3 mapped to: eth2\\n[2018/09/21 08:17:00 AM] [INFO] nic2 mapped to: eth1\\n[2018/09/21 08:17:00 AM] [INFO] nic1 mapped to: eth0\\n[2018/09/21 08:17:00 AM] [INFO] adding interface: eth0\\n[2018/09/21 08:17:00 AM] [INFO] adding custom route for interface: eth0\\n[2018/09/21 08:17:00 AM] [INFO] adding bridge: br-isolated\\n[2018/09/21 08:17:00 AM] [INFO] adding interface: eth1\\n[2018/09/21 08:17:00 AM] [INFO] adding vlan: vlan30\\n[2018/09/21 08:17:00 AM] [INFO] adding vlan: vlan40\\n[2018/09/21 08:17:00 AM] [INFO] applying network configs...\\n[2018/09/21 08:17:00 AM] [INFO] running ifdown on interface: vlan30\\n[2018/09/21 08:17:00 AM] [INFO] running ifdown on interface: vlan40\\n[2018/09/21 08:17:00 AM] [INFO] running ifdown on interface: eth1\\n[2018/09/21 08:17:01 AM] [INFO] running ifdown on interface: eth0\\n[2018/09/21 08:17:01 AM] [INFO] running ifdown on interface: vlan30\\n[2018/09/21 08:17:01 AM] [INFO] running ifdown on interface: vlan40\\n[2018/09/21 08:17:01 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40\\n[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/09/21 08:17:01 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/route-eth1\\n[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\\n[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40\\n[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/09/21 08:17:01 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/09/21 08:17:01 AM] [INFO] running ifup on interface: eth1\\n[2018/09/21 08:17:01 AM] [INFO] running ifup on interface: eth0\\n[2018/09/21 08:17:06 AM] [INFO] running ifup on interface: vlan30\\n[2018/09/21 08:17:10 AM] [INFO] running ifup on interface: vlan40\\n[2018/09/21 08:17:14 AM] [INFO] running ifup on interface: vlan30\\n[2018/09/21 08:17:14 AM] [INFO] running ifup on interface: vlan40\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.2\\n++ '[' -n 192.168.24.2 ']'\\n++ break\\n++ echo 192.168.24.2\\n+ local METADATA_IP=192.168.24.2\\n+ '[' -n 192.168.24.2 ']'\\n+ is_local_ip 192.168.24.2\\n+ local IP_TO_CHECK=192.168.24.2\\n+ ip -o a\\n+ grep 'inet6\\\\? 
192.168.24.2/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.2...'\\n+ _ping=ping\\n+ [[ 192.168.24.2 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.2\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}", "[2018-09-21 08:17:15,303] (heat-config) [DEBUG] [2018-09-21 08:17:00,107] (heat-config) [INFO] interface_name=nic1", "[2018-09-21 08:17:00,108] (heat-config) [INFO] bridge_name=br-ex", "[2018-09-21 08:17:00,108] (heat-config) [INFO] deploy_server_id=f4c5361d-7430-47e6-b3a8-908850a79a79", "[2018-09-21 08:17:00,108] (heat-config) [INFO] deploy_action=CREATE", "[2018-09-21 08:17:00,108] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorage-bre5qklthc6f-0-gxpifqih23ze-NetworkDeployment-263ctk3zuvkd-TripleOSoftwareDeployment-6xum27qyfsh4/a21c346e-502b-4711-aaa0-a66c11d2513e", "[2018-09-21 08:17:00,108] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-09-21 08:17:00,108] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-09-21 08:17:00,108] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/e633e879-7a65-4081-ad7b-bf66c7035600", "[2018-09-21 08:17:15,299] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.2...SUCCESS", "", "[2018-09-21 08:17:15,299] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.6/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.11/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.15/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}' ']'", "+ '[' -z '' ']'", "+ trap configure_safe_defaults EXIT", "+ mkdir -p /etc/os-net-config", "+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.6/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.11/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.15/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}'", "++ type -t network_config_hook", "+ '[' '' = function ']'", "+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json", "+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json", "+ set +e", "+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes", "[2018/09/21 08:17:00 AM] [INFO] Using config file at: /etc/os-net-config/config.json", "[2018/09/21 08:17:00 AM] [INFO] Ifcfg net 
config provider created.", "[2018/09/21 08:17:00 AM] [INFO] Not using any mapping file.", "[2018/09/21 08:17:00 AM] [INFO] Finding active nics", "[2018/09/21 08:17:00 AM] [INFO] eth2 is an embedded active nic", "[2018/09/21 08:17:00 AM] [INFO] eth1 is an embedded active nic", "[2018/09/21 08:17:00 AM] [INFO] eth0 is an embedded active nic", "[2018/09/21 08:17:00 AM] [INFO] lo is not an active nic", "[2018/09/21 08:17:00 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)", "[2018/09/21 08:17:00 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']", "[2018/09/21 08:17:00 AM] [INFO] nic3 mapped to: eth2", "[2018/09/21 08:17:00 AM] [INFO] nic2 mapped to: eth1", "[2018/09/21 08:17:00 AM] [INFO] nic1 mapped to: eth0", "[2018/09/21 08:17:00 AM] [INFO] adding interface: eth0", "[2018/09/21 08:17:00 AM] [INFO] adding custom route for interface: eth0", "[2018/09/21 08:17:00 AM] [INFO] adding bridge: br-isolated", "[2018/09/21 08:17:00 AM] [INFO] adding interface: eth1", "[2018/09/21 08:17:00 AM] [INFO] adding vlan: vlan30", "[2018/09/21 08:17:00 AM] [INFO] adding vlan: vlan40", "[2018/09/21 08:17:00 AM] [INFO] applying network configs...", "[2018/09/21 08:17:00 AM] [INFO] running ifdown on interface: vlan30", "[2018/09/21 08:17:00 AM] [INFO] running ifdown on interface: vlan40", "[2018/09/21 08:17:00 AM] [INFO] running ifdown on interface: eth1", "[2018/09/21 08:17:01 AM] [INFO] running ifdown on interface: eth0", "[2018/09/21 08:17:01 AM] [INFO] running ifdown on interface: vlan30", "[2018/09/21 08:17:01 AM] [INFO] running ifdown on interface: vlan40", "[2018/09/21 08:17:01 AM] [INFO] running ifdown on bridge: br-isolated", "[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated", "[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40", "[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated", "[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30", "[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0", "[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1", "[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated", "[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30", "[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1", "[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0", "[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40", "[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40", "[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30", "[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0", "[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1", "[2018/09/21 08:17:01 AM] [INFO] running ifup on bridge: br-isolated", "[2018/09/21 08:17:01 AM] [INFO] running ifup on interface: eth1", "[2018/09/21 08:17:01 AM] [INFO] running ifup on interface: eth0", "[2018/09/21 08:17:06 AM] [INFO] running ifup on interface: vlan30", "[2018/09/21 08:17:10 AM] [INFO] running ifup on interface: vlan40", "[2018/09/21 08:17:14 AM] [INFO] running ifup on interface: vlan30", "[2018/09/21 08:17:14 AM] [INFO] running ifup 
on interface: vlan40", "+ RETVAL=2", "+ set -e", "+ [[ 2 == 2 ]]", "+ ping_metadata_ip", "++ get_metadata_ip", "++ local METADATA_IP", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=", "++ '[' -n '' ']'", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=", "++ '[' -n '' ']'", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=192.168.24.2", "++ '[' -n 192.168.24.2 ']'", "++ break", "++ echo 192.168.24.2", "+ local METADATA_IP=192.168.24.2", "+ '[' -n 192.168.24.2 ']'", "+ is_local_ip 192.168.24.2", "+ local IP_TO_CHECK=192.168.24.2", "+ ip -o a", "+ grep 'inet6\\? 192.168.24.2/'", "+ return 1", "+ echo -n 'Trying to ping metadata IP 192.168.24.2...'", "+ _ping=ping", "+ [[ 192.168.24.2 =~ : ]]", "+ local COUNT=0", "+ ping -c 1 192.168.24.2", "+ echo SUCCESS", "+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'", "+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules", "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'", "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'", "+ configure_safe_defaults", "+ [[ 0 == 0 ]]", "+ return 0", "", "[2018-09-21 08:17:15,300] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/e633e879-7a65-4081-ad7b-bf66c7035600", "", "[2018-09-21 08:17:15,304] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-09-21 08:17:15,304] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/e633e879-7a65-4081-ad7b-bf66c7035600.json < /var/lib/heat-config/deployed/e633e879-7a65-4081-ad7b-bf66c7035600.notify.json", "[2018-09-21 08:17:15,729] (heat-config) [INFO] ", "[2018-09-21 08:17:15,729] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []}
>changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/ef1b7865-3d5e-4e63-885e-e798696a27d3.notify.json)", "delta": "0:00:20.372846", "end": "2018-09-21 08:17:21.482595", "rc": 0, "start": "2018-09-21 08:17:01.109749", "stderr": "[2018-09-21 08:17:01,141] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/ef1b7865-3d5e-4e63-885e-e798696a27d3.json\n[2018-09-21 08:17:21,025] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.2...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.8/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": 
\\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.12/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.21/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"name\\\": \\\"nic3\\\", \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.8/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.12/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.21/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"name\\\": \\\"nic3\\\", \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/09/21 08:17:01 AM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/09/21 08:17:01 AM] [INFO] Ifcfg net config provider created.\\n[2018/09/21 08:17:01 AM] [INFO] Not using any mapping file.\\n[2018/09/21 08:17:01 AM] [INFO] Finding active nics\\n[2018/09/21 08:17:01 AM] [INFO] eth2 is an embedded active nic\\n[2018/09/21 08:17:01 AM] [INFO] eth0 is an embedded active nic\\n[2018/09/21 08:17:01 AM] [INFO] eth1 is an embedded active nic\\n[2018/09/21 08:17:01 AM] [INFO] lo is not an active nic\\n[2018/09/21 08:17:01 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/09/21 08:17:01 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/09/21 08:17:01 AM] [INFO] nic3 mapped to: eth2\\n[2018/09/21 08:17:01 AM] [INFO] nic2 mapped to: eth1\\n[2018/09/21 08:17:01 AM] [INFO] nic1 mapped to: eth0\\n[2018/09/21 08:17:01 AM] [INFO] adding interface: eth0\\n[2018/09/21 08:17:01 AM] [INFO] adding custom route for interface: eth0\\n[2018/09/21 08:17:01 AM] [INFO] adding bridge: br-isolated\\n[2018/09/21 08:17:01 AM] [INFO] adding interface: eth1\\n[2018/09/21 08:17:01 AM] [INFO] adding vlan: vlan20\\n[2018/09/21 08:17:01 AM] [INFO] adding vlan: vlan30\\n[2018/09/21 08:17:01 AM] [INFO] adding vlan: vlan50\\n[2018/09/21 08:17:01 AM] [INFO] 
adding interface: eth2\\n[2018/09/21 08:17:01 AM] [INFO] applying network configs...\\n[2018/09/21 08:17:01 AM] [INFO] running ifdown on interface: vlan20\\n[2018/09/21 08:17:01 AM] [INFO] running ifdown on interface: vlan30\\n[2018/09/21 08:17:02 AM] [INFO] running ifdown on interface: vlan50\\n[2018/09/21 08:17:02 AM] [INFO] running ifdown on interface: eth2\\n[2018/09/21 08:17:02 AM] [INFO] running ifdown on interface: eth1\\n[2018/09/21 08:17:02 AM] [INFO] running ifdown on interface: eth0\\n[2018/09/21 08:17:02 AM] [INFO] running ifdown on interface: vlan20\\n[2018/09/21 08:17:02 AM] [INFO] running ifdown on interface: vlan30\\n[2018/09/21 08:17:02 AM] [INFO] running ifdown on interface: vlan50\\n[2018/09/21 08:17:02 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2\\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/09/21 08:17:02 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/09/21 08:17:02 AM] [INFO] running ifup on interface: eth2\\n[2018/09/21 08:17:02 AM] [INFO] running ifup on interface: eth1\\n[2018/09/21 08:17:03 AM] [INFO] running ifup on interface: eth0\\n[2018/09/21 08:17:07 AM] [INFO] running ifup on interface: vlan20\\n[2018/09/21 08:17:11 AM] [INFO] running ifup on interface: vlan30\\n[2018/09/21 08:17:15 AM] [INFO] running ifup on interface: vlan50\\n[2018/09/21 08:17:20 AM] [INFO] running ifup on interface: vlan20\\n[2018/09/21 08:17:20 AM] [INFO] running ifup on interface: vlan30\\n[2018/09/21 08:17:20 AM] [INFO] running ifup on interface: vlan50\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in 
os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.2\\n++ '[' -n 192.168.24.2 ']'\\n++ break\\n++ echo 192.168.24.2\\n+ local METADATA_IP=192.168.24.2\\n+ '[' -n 192.168.24.2 ']'\\n+ is_local_ip 192.168.24.2\\n+ local IP_TO_CHECK=192.168.24.2\\n+ ip -o a\\n+ grep 'inet6\\\\? 192.168.24.2/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.2...'\\n+ _ping=ping\\n+ [[ 192.168.24.2 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.2\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}\n[2018-09-21 08:17:21,026] (heat-config) [DEBUG] [2018-09-21 08:17:01,173] (heat-config) [INFO] interface_name=nic1\n[2018-09-21 08:17:01,173] (heat-config) [INFO] bridge_name=br-ex\n[2018-09-21 08:17:01,173] (heat-config) [INFO] deploy_server_id=41df08ab-b98d-4a4e-b91d-2da74cba2af6\n[2018-09-21 08:17:01,173] (heat-config) [INFO] deploy_action=CREATE\n[2018-09-21 08:17:01,173] (heat-config) [INFO] deploy_stack_id=overcloud-Compute-vwvs6vnygabp-0-3khh4rv6h5h3-NetworkDeployment-rdzsxqqht5w2-TripleOSoftwareDeployment-sk2ep3fyeybk/a94d7fe2-1a34-492b-8777-37e7b5e5ad99\n[2018-09-21 08:17:01,173] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-09-21 08:17:01,173] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-09-21 08:17:01,174] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/ef1b7865-3d5e-4e63-885e-e798696a27d3\n[2018-09-21 08:17:21,021] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.2...SUCCESS\n\n[2018-09-21 08:17:21,021] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.8/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.12/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.21/24\"}], \"type\": 
\"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"name\": \"nic3\", \"type\": \"interface\", \"use_dhcp\": false}]}' ']'\n+ '[' -z '' ']'\n+ trap configure_safe_defaults EXIT\n+ mkdir -p /etc/os-net-config\n+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.8/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.12/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.21/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"name\": \"nic3\", \"type\": \"interface\", \"use_dhcp\": false}]}'\n++ type -t network_config_hook\n+ '[' '' = function ']'\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\n+ set +e\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\n[2018/09/21 08:17:01 AM] [INFO] Using config file at: /etc/os-net-config/config.json\n[2018/09/21 08:17:01 AM] [INFO] Ifcfg net config provider created.\n[2018/09/21 08:17:01 AM] [INFO] Not using any mapping file.\n[2018/09/21 08:17:01 AM] [INFO] Finding active nics\n[2018/09/21 08:17:01 AM] [INFO] eth2 is an embedded active nic\n[2018/09/21 08:17:01 AM] [INFO] eth0 is an embedded active nic\n[2018/09/21 08:17:01 AM] [INFO] eth1 is an embedded active nic\n[2018/09/21 08:17:01 AM] [INFO] lo is not an active nic\n[2018/09/21 08:17:01 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\n[2018/09/21 08:17:01 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\n[2018/09/21 08:17:01 AM] [INFO] nic3 mapped to: eth2\n[2018/09/21 08:17:01 AM] [INFO] nic2 mapped to: eth1\n[2018/09/21 08:17:01 AM] [INFO] nic1 mapped to: eth0\n[2018/09/21 08:17:01 AM] [INFO] adding interface: eth0\n[2018/09/21 08:17:01 AM] [INFO] adding custom route for interface: eth0\n[2018/09/21 08:17:01 AM] [INFO] adding bridge: br-isolated\n[2018/09/21 08:17:01 AM] [INFO] adding interface: eth1\n[2018/09/21 08:17:01 AM] [INFO] adding vlan: vlan20\n[2018/09/21 08:17:01 AM] [INFO] adding vlan: vlan30\n[2018/09/21 08:17:01 AM] [INFO] adding vlan: vlan50\n[2018/09/21 08:17:01 AM] [INFO] adding interface: eth2\n[2018/09/21 08:17:01 AM] [INFO] applying network configs...\n[2018/09/21 08:17:01 AM] [INFO] running ifdown on interface: vlan20\n[2018/09/21 08:17:01 AM] [INFO] running ifdown on interface: vlan30\n[2018/09/21 08:17:02 AM] [INFO] running ifdown on interface: vlan50\n[2018/09/21 08:17:02 AM] [INFO] running ifdown on interface: eth2\n[2018/09/21 08:17:02 AM] [INFO] running ifdown on interface: eth1\n[2018/09/21 08:17:02 AM] [INFO] running ifdown on interface: eth0\n[2018/09/21 08:17:02 AM] [INFO] running ifdown on interface: vlan20\n[2018/09/21 08:17:02 AM] [INFO] running ifdown on interface: vlan30\n[2018/09/21 08:17:02 AM] [INFO] running ifdown on interface: vlan50\n[2018/09/21 08:17:02 AM] [INFO] running ifdown on bridge: br-isolated\n[2018/09/21 08:17:02 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/route-br-isolated\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\n[2018/09/21 08:17:02 AM] [INFO] running ifup on bridge: br-isolated\n[2018/09/21 08:17:02 AM] [INFO] running ifup on interface: eth2\n[2018/09/21 08:17:02 AM] [INFO] running ifup on interface: eth1\n[2018/09/21 08:17:03 AM] [INFO] running ifup on interface: eth0\n[2018/09/21 08:17:07 AM] [INFO] running ifup on interface: vlan20\n[2018/09/21 08:17:11 AM] [INFO] running ifup on interface: vlan30\n[2018/09/21 08:17:15 AM] [INFO] running ifup on interface: vlan50\n[2018/09/21 08:17:20 AM] [INFO] running ifup on interface: vlan20\n[2018/09/21 08:17:20 AM] [INFO] running ifup on interface: vlan30\n[2018/09/21 08:17:20 AM] [INFO] running ifup on interface: vlan50\n+ RETVAL=2\n+ set -e\n+ [[ 2 == 2 ]]\n+ ping_metadata_ip\n++ get_metadata_ip\n++ local METADATA_IP\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=\n++ '[' -n '' ']'\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=\n++ '[' -n '' ']'\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default 
'' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=192.168.24.2\n++ '[' -n 192.168.24.2 ']'\n++ break\n++ echo 192.168.24.2\n+ local METADATA_IP=192.168.24.2\n+ '[' -n 192.168.24.2 ']'\n+ is_local_ip 192.168.24.2\n+ local IP_TO_CHECK=192.168.24.2\n+ ip -o a\n+ grep 'inet6\\? 192.168.24.2/'\n+ return 1\n+ echo -n 'Trying to ping metadata IP 192.168.24.2...'\n+ _ping=ping\n+ [[ 192.168.24.2 =~ : ]]\n+ local COUNT=0\n+ ping -c 1 192.168.24.2\n+ echo SUCCESS\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\n+ configure_safe_defaults\n+ [[ 0 == 0 ]]\n+ return 0\n\n[2018-09-21 08:17:21,021] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/ef1b7865-3d5e-4e63-885e-e798696a27d3\n\n[2018-09-21 08:17:21,026] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-09-21 08:17:21,026] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/ef1b7865-3d5e-4e63-885e-e798696a27d3.json < /var/lib/heat-config/deployed/ef1b7865-3d5e-4e63-885e-e798696a27d3.notify.json\n[2018-09-21 08:17:21,475] (heat-config) [INFO] \n[2018-09-21 08:17:21,475] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-09-21 08:17:01,141] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/ef1b7865-3d5e-4e63-885e-e798696a27d3.json", "[2018-09-21 08:17:21,025] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.2...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.8/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.12/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.21/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"name\\\": \\\"nic3\\\", \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.8/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.12/24\\\"}], \\\"type\\\": 
\\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.21/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"name\\\": \\\"nic3\\\", \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/09/21 08:17:01 AM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/09/21 08:17:01 AM] [INFO] Ifcfg net config provider created.\\n[2018/09/21 08:17:01 AM] [INFO] Not using any mapping file.\\n[2018/09/21 08:17:01 AM] [INFO] Finding active nics\\n[2018/09/21 08:17:01 AM] [INFO] eth2 is an embedded active nic\\n[2018/09/21 08:17:01 AM] [INFO] eth0 is an embedded active nic\\n[2018/09/21 08:17:01 AM] [INFO] eth1 is an embedded active nic\\n[2018/09/21 08:17:01 AM] [INFO] lo is not an active nic\\n[2018/09/21 08:17:01 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/09/21 08:17:01 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/09/21 08:17:01 AM] [INFO] nic3 mapped to: eth2\\n[2018/09/21 08:17:01 AM] [INFO] nic2 mapped to: eth1\\n[2018/09/21 08:17:01 AM] [INFO] nic1 mapped to: eth0\\n[2018/09/21 08:17:01 AM] [INFO] adding interface: eth0\\n[2018/09/21 08:17:01 AM] [INFO] adding custom route for interface: eth0\\n[2018/09/21 08:17:01 AM] [INFO] adding bridge: br-isolated\\n[2018/09/21 08:17:01 AM] [INFO] adding interface: eth1\\n[2018/09/21 08:17:01 AM] [INFO] adding vlan: vlan20\\n[2018/09/21 08:17:01 AM] [INFO] adding vlan: vlan30\\n[2018/09/21 08:17:01 AM] [INFO] adding vlan: vlan50\\n[2018/09/21 08:17:01 AM] [INFO] adding interface: eth2\\n[2018/09/21 08:17:01 AM] [INFO] applying network configs...\\n[2018/09/21 08:17:01 AM] [INFO] running ifdown on interface: vlan20\\n[2018/09/21 08:17:01 AM] [INFO] running ifdown on interface: vlan30\\n[2018/09/21 08:17:02 AM] [INFO] running ifdown on interface: vlan50\\n[2018/09/21 08:17:02 AM] [INFO] running ifdown on interface: eth2\\n[2018/09/21 08:17:02 AM] [INFO] running ifdown on interface: eth1\\n[2018/09/21 08:17:02 AM] [INFO] running ifdown on interface: eth0\\n[2018/09/21 08:17:02 AM] [INFO] running ifdown on interface: vlan20\\n[2018/09/21 08:17:02 AM] [INFO] running ifdown on interface: vlan30\\n[2018/09/21 08:17:02 AM] [INFO] running ifdown on interface: vlan50\\n[2018/09/21 08:17:02 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\\n[2018/09/21 08:17:02 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/route6-vlan20\\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2\\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/09/21 08:17:02 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/09/21 08:17:02 AM] [INFO] running ifup on interface: eth2\\n[2018/09/21 08:17:02 AM] [INFO] running ifup on interface: eth1\\n[2018/09/21 08:17:03 AM] [INFO] running ifup on interface: eth0\\n[2018/09/21 08:17:07 AM] [INFO] running ifup on interface: vlan20\\n[2018/09/21 08:17:11 AM] [INFO] running ifup on interface: vlan30\\n[2018/09/21 08:17:15 AM] [INFO] running ifup on interface: vlan50\\n[2018/09/21 08:17:20 AM] [INFO] running ifup on interface: vlan20\\n[2018/09/21 08:17:20 AM] [INFO] running ifup on interface: vlan30\\n[2018/09/21 08:17:20 AM] [INFO] running ifup on interface: vlan50\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.2\\n++ '[' -n 192.168.24.2 ']'\\n++ break\\n++ echo 192.168.24.2\\n+ local METADATA_IP=192.168.24.2\\n+ '[' -n 192.168.24.2 ']'\\n+ is_local_ip 192.168.24.2\\n+ local IP_TO_CHECK=192.168.24.2\\n+ ip -o a\\n+ grep 'inet6\\\\? 
192.168.24.2/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.2...'\\n+ _ping=ping\\n+ [[ 192.168.24.2 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.2\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}", "[2018-09-21 08:17:21,026] (heat-config) [DEBUG] [2018-09-21 08:17:01,173] (heat-config) [INFO] interface_name=nic1", "[2018-09-21 08:17:01,173] (heat-config) [INFO] bridge_name=br-ex", "[2018-09-21 08:17:01,173] (heat-config) [INFO] deploy_server_id=41df08ab-b98d-4a4e-b91d-2da74cba2af6", "[2018-09-21 08:17:01,173] (heat-config) [INFO] deploy_action=CREATE", "[2018-09-21 08:17:01,173] (heat-config) [INFO] deploy_stack_id=overcloud-Compute-vwvs6vnygabp-0-3khh4rv6h5h3-NetworkDeployment-rdzsxqqht5w2-TripleOSoftwareDeployment-sk2ep3fyeybk/a94d7fe2-1a34-492b-8777-37e7b5e5ad99", "[2018-09-21 08:17:01,173] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-09-21 08:17:01,173] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-09-21 08:17:01,174] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/ef1b7865-3d5e-4e63-885e-e798696a27d3", "[2018-09-21 08:17:21,021] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.2...SUCCESS", "", "[2018-09-21 08:17:21,021] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.8/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.12/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.21/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"name\": \"nic3\", \"type\": \"interface\", \"use_dhcp\": false}]}' ']'", "+ '[' -z '' ']'", "+ trap configure_safe_defaults EXIT", "+ mkdir -p /etc/os-net-config", "+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.8/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.12/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.21/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"name\": \"nic3\", \"type\": \"interface\", \"use_dhcp\": false}]}'", "++ type -t network_config_hook", "+ '[' '' = function ']'", "+ sed -i s/bridge_name/br-ex/ 
/etc/os-net-config/config.json", "+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json", "+ set +e", "+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes", "[2018/09/21 08:17:01 AM] [INFO] Using config file at: /etc/os-net-config/config.json", "[2018/09/21 08:17:01 AM] [INFO] Ifcfg net config provider created.", "[2018/09/21 08:17:01 AM] [INFO] Not using any mapping file.", "[2018/09/21 08:17:01 AM] [INFO] Finding active nics", "[2018/09/21 08:17:01 AM] [INFO] eth2 is an embedded active nic", "[2018/09/21 08:17:01 AM] [INFO] eth0 is an embedded active nic", "[2018/09/21 08:17:01 AM] [INFO] eth1 is an embedded active nic", "[2018/09/21 08:17:01 AM] [INFO] lo is not an active nic", "[2018/09/21 08:17:01 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)", "[2018/09/21 08:17:01 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']", "[2018/09/21 08:17:01 AM] [INFO] nic3 mapped to: eth2", "[2018/09/21 08:17:01 AM] [INFO] nic2 mapped to: eth1", "[2018/09/21 08:17:01 AM] [INFO] nic1 mapped to: eth0", "[2018/09/21 08:17:01 AM] [INFO] adding interface: eth0", "[2018/09/21 08:17:01 AM] [INFO] adding custom route for interface: eth0", "[2018/09/21 08:17:01 AM] [INFO] adding bridge: br-isolated", "[2018/09/21 08:17:01 AM] [INFO] adding interface: eth1", "[2018/09/21 08:17:01 AM] [INFO] adding vlan: vlan20", "[2018/09/21 08:17:01 AM] [INFO] adding vlan: vlan30", "[2018/09/21 08:17:01 AM] [INFO] adding vlan: vlan50", "[2018/09/21 08:17:01 AM] [INFO] adding interface: eth2", "[2018/09/21 08:17:01 AM] [INFO] applying network configs...", "[2018/09/21 08:17:01 AM] [INFO] running ifdown on interface: vlan20", "[2018/09/21 08:17:01 AM] [INFO] running ifdown on interface: vlan30", "[2018/09/21 08:17:02 AM] [INFO] running ifdown on interface: vlan50", "[2018/09/21 08:17:02 AM] [INFO] running ifdown on interface: eth2", "[2018/09/21 08:17:02 AM] [INFO] running ifdown on interface: eth1", "[2018/09/21 08:17:02 AM] [INFO] running ifdown on interface: eth0", "[2018/09/21 08:17:02 AM] [INFO] running ifdown on interface: vlan20", "[2018/09/21 08:17:02 AM] [INFO] running ifdown on interface: vlan30", "[2018/09/21 08:17:02 AM] [INFO] running ifdown on interface: vlan50", "[2018/09/21 08:17:02 AM] [INFO] running ifdown on bridge: br-isolated", "[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated", "[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50", "[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated", "[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20", "[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20", "[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30", "[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50", "[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20", "[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0", "[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1", "[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2", "[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50", "[2018/09/21 08:17:02 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/ifcfg-br-isolated", "[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2", "[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1", "[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0", "[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30", "[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2", "[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30", "[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0", "[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1", "[2018/09/21 08:17:02 AM] [INFO] running ifup on bridge: br-isolated", "[2018/09/21 08:17:02 AM] [INFO] running ifup on interface: eth2", "[2018/09/21 08:17:02 AM] [INFO] running ifup on interface: eth1", "[2018/09/21 08:17:03 AM] [INFO] running ifup on interface: eth0", "[2018/09/21 08:17:07 AM] [INFO] running ifup on interface: vlan20", "[2018/09/21 08:17:11 AM] [INFO] running ifup on interface: vlan30", "[2018/09/21 08:17:15 AM] [INFO] running ifup on interface: vlan50", "[2018/09/21 08:17:20 AM] [INFO] running ifup on interface: vlan20", "[2018/09/21 08:17:20 AM] [INFO] running ifup on interface: vlan30", "[2018/09/21 08:17:20 AM] [INFO] running ifup on interface: vlan50", "+ RETVAL=2", "+ set -e", "+ [[ 2 == 2 ]]", "+ ping_metadata_ip", "++ get_metadata_ip", "++ local METADATA_IP", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=", "++ '[' -n '' ']'", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=", "++ '[' -n '' ']'", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=192.168.24.2", "++ '[' -n 192.168.24.2 ']'", "++ break", "++ echo 192.168.24.2", "+ local METADATA_IP=192.168.24.2", "+ '[' -n 192.168.24.2 ']'", "+ is_local_ip 192.168.24.2", "+ local IP_TO_CHECK=192.168.24.2", "+ ip -o a", "+ grep 'inet6\\? 
192.168.24.2/'", "+ return 1", "+ echo -n 'Trying to ping metadata IP 192.168.24.2...'", "+ _ping=ping", "+ [[ 192.168.24.2 =~ : ]]", "+ local COUNT=0", "+ ping -c 1 192.168.24.2", "+ echo SUCCESS", "+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'", "+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules", "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'", "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'", "+ configure_safe_defaults", "+ [[ 0 == 0 ]]", "+ return 0", "", "[2018-09-21 08:17:21,021] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/ef1b7865-3d5e-4e63-885e-e798696a27d3", "", "[2018-09-21 08:17:21,026] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-09-21 08:17:21,026] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/ef1b7865-3d5e-4e63-885e-e798696a27d3.json < /var/lib/heat-config/deployed/ef1b7865-3d5e-4e63-885e-e798696a27d3.notify.json", "[2018-09-21 08:17:21,475] (heat-config) [INFO] ", "[2018-09-21 08:17:21,475] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} > >TASK [Output for NetworkDeployment] ******************************************** >Friday 21 September 2018 08:17:21 -0400 (0:00:20.811) 0:00:44.012 ****** >ok: [controller-0] => { > "msg": [ > { > "stderr": [ > "[2018-09-21 08:17:12,573] (heat-config) [WARNING] Skipping config 95c736a3-01d9-4926-a01f-973b9789b07f, already deployed", > "[2018-09-21 08:17:12,573] (heat-config) [WARNING] To force-deploy, rm /var/lib/heat-config/deployed/95c736a3-01d9-4926-a01f-973b9789b07f.json", > "jq: /var/lib/heat-config/deployed/95c736a3-01d9-4926-a01f-973b9789b07f.notify.json: No such file or directory" > ] > }, > { > "status_code": "0" > } > ] >} >ok: [compute-0] => { > "msg": [ > { > "stderr": [ > "[2018-09-21 08:17:01,141] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/ef1b7865-3d5e-4e63-885e-e798696a27d3.json", > "[2018-09-21 08:17:21,025] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.2...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.8/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.12/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.21/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"name\\\": \\\"nic3\\\", \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.8/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", 
\\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.12/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.21/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"name\\\": \\\"nic3\\\", \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/09/21 08:17:01 AM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/09/21 08:17:01 AM] [INFO] Ifcfg net config provider created.\\n[2018/09/21 08:17:01 AM] [INFO] Not using any mapping file.\\n[2018/09/21 08:17:01 AM] [INFO] Finding active nics\\n[2018/09/21 08:17:01 AM] [INFO] eth2 is an embedded active nic\\n[2018/09/21 08:17:01 AM] [INFO] eth0 is an embedded active nic\\n[2018/09/21 08:17:01 AM] [INFO] eth1 is an embedded active nic\\n[2018/09/21 08:17:01 AM] [INFO] lo is not an active nic\\n[2018/09/21 08:17:01 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/09/21 08:17:01 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/09/21 08:17:01 AM] [INFO] nic3 mapped to: eth2\\n[2018/09/21 08:17:01 AM] [INFO] nic2 mapped to: eth1\\n[2018/09/21 08:17:01 AM] [INFO] nic1 mapped to: eth0\\n[2018/09/21 08:17:01 AM] [INFO] adding interface: eth0\\n[2018/09/21 08:17:01 AM] [INFO] adding custom route for interface: eth0\\n[2018/09/21 08:17:01 AM] [INFO] adding bridge: br-isolated\\n[2018/09/21 08:17:01 AM] [INFO] adding interface: eth1\\n[2018/09/21 08:17:01 AM] [INFO] adding vlan: vlan20\\n[2018/09/21 08:17:01 AM] [INFO] adding vlan: vlan30\\n[2018/09/21 08:17:01 AM] [INFO] adding vlan: vlan50\\n[2018/09/21 08:17:01 AM] [INFO] adding interface: eth2\\n[2018/09/21 08:17:01 AM] [INFO] applying network configs...\\n[2018/09/21 08:17:01 AM] [INFO] running ifdown on interface: vlan20\\n[2018/09/21 08:17:01 AM] [INFO] running ifdown on interface: vlan30\\n[2018/09/21 08:17:02 AM] [INFO] running ifdown on interface: vlan50\\n[2018/09/21 08:17:02 AM] [INFO] running ifdown on interface: eth2\\n[2018/09/21 08:17:02 AM] [INFO] running ifdown on interface: eth1\\n[2018/09/21 08:17:02 AM] [INFO] running ifdown on interface: eth0\\n[2018/09/21 08:17:02 AM] [INFO] running ifdown on interface: vlan20\\n[2018/09/21 08:17:02 AM] [INFO] running ifdown on interface: vlan30\\n[2018/09/21 08:17:02 AM] [INFO] running ifdown on interface: vlan50\\n[2018/09/21 08:17:02 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\\n[2018/09/21 08:17:02 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2\\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/09/21 08:17:02 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/09/21 08:17:02 AM] [INFO] running ifup on interface: eth2\\n[2018/09/21 08:17:02 AM] [INFO] running ifup on interface: eth1\\n[2018/09/21 08:17:03 AM] [INFO] running ifup on interface: eth0\\n[2018/09/21 08:17:07 AM] [INFO] running ifup on interface: vlan20\\n[2018/09/21 08:17:11 AM] [INFO] running ifup on interface: vlan30\\n[2018/09/21 08:17:15 AM] [INFO] running ifup on interface: vlan50\\n[2018/09/21 08:17:20 AM] [INFO] running ifup on interface: vlan20\\n[2018/09/21 08:17:20 AM] [INFO] running ifup on interface: vlan30\\n[2018/09/21 08:17:20 AM] [INFO] running ifup on interface: vlan50\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.2\\n++ '[' 
-n 192.168.24.2 ']'\\n++ break\\n++ echo 192.168.24.2\\n+ local METADATA_IP=192.168.24.2\\n+ '[' -n 192.168.24.2 ']'\\n+ is_local_ip 192.168.24.2\\n+ local IP_TO_CHECK=192.168.24.2\\n+ ip -o a\\n+ grep 'inet6\\\\? 192.168.24.2/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.2...'\\n+ _ping=ping\\n+ [[ 192.168.24.2 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.2\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}", > "[2018-09-21 08:17:21,026] (heat-config) [DEBUG] [2018-09-21 08:17:01,173] (heat-config) [INFO] interface_name=nic1", > "[2018-09-21 08:17:01,173] (heat-config) [INFO] bridge_name=br-ex", > "[2018-09-21 08:17:01,173] (heat-config) [INFO] deploy_server_id=41df08ab-b98d-4a4e-b91d-2da74cba2af6", > "[2018-09-21 08:17:01,173] (heat-config) [INFO] deploy_action=CREATE", > "[2018-09-21 08:17:01,173] (heat-config) [INFO] deploy_stack_id=overcloud-Compute-vwvs6vnygabp-0-3khh4rv6h5h3-NetworkDeployment-rdzsxqqht5w2-TripleOSoftwareDeployment-sk2ep3fyeybk/a94d7fe2-1a34-492b-8777-37e7b5e5ad99", > "[2018-09-21 08:17:01,173] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-09-21 08:17:01,173] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-09-21 08:17:01,174] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/ef1b7865-3d5e-4e63-885e-e798696a27d3", > "[2018-09-21 08:17:21,021] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.2...SUCCESS", > "", > "[2018-09-21 08:17:21,021] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.8/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.12/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.21/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"name\": \"nic3\", \"type\": \"interface\", \"use_dhcp\": false}]}' ']'", > "+ '[' -z '' ']'", > "+ trap configure_safe_defaults EXIT", > "+ mkdir -p /etc/os-net-config", > "+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.8/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.12/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.21/24\"}], \"type\": \"vlan\", \"vlan_id\": 
50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"name\": \"nic3\", \"type\": \"interface\", \"use_dhcp\": false}]}'", > "++ type -t network_config_hook", > "+ '[' '' = function ']'", > "+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json", > "+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json", > "+ set +e", > "+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes", > "[2018/09/21 08:17:01 AM] [INFO] Using config file at: /etc/os-net-config/config.json", > "[2018/09/21 08:17:01 AM] [INFO] Ifcfg net config provider created.", > "[2018/09/21 08:17:01 AM] [INFO] Not using any mapping file.", > "[2018/09/21 08:17:01 AM] [INFO] Finding active nics", > "[2018/09/21 08:17:01 AM] [INFO] eth2 is an embedded active nic", > "[2018/09/21 08:17:01 AM] [INFO] eth0 is an embedded active nic", > "[2018/09/21 08:17:01 AM] [INFO] eth1 is an embedded active nic", > "[2018/09/21 08:17:01 AM] [INFO] lo is not an active nic", > "[2018/09/21 08:17:01 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)", > "[2018/09/21 08:17:01 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']", > "[2018/09/21 08:17:01 AM] [INFO] nic3 mapped to: eth2", > "[2018/09/21 08:17:01 AM] [INFO] nic2 mapped to: eth1", > "[2018/09/21 08:17:01 AM] [INFO] nic1 mapped to: eth0", > "[2018/09/21 08:17:01 AM] [INFO] adding interface: eth0", > "[2018/09/21 08:17:01 AM] [INFO] adding custom route for interface: eth0", > "[2018/09/21 08:17:01 AM] [INFO] adding bridge: br-isolated", > "[2018/09/21 08:17:01 AM] [INFO] adding interface: eth1", > "[2018/09/21 08:17:01 AM] [INFO] adding vlan: vlan20", > "[2018/09/21 08:17:01 AM] [INFO] adding vlan: vlan30", > "[2018/09/21 08:17:01 AM] [INFO] adding vlan: vlan50", > "[2018/09/21 08:17:01 AM] [INFO] adding interface: eth2", > "[2018/09/21 08:17:01 AM] [INFO] applying network configs...", > "[2018/09/21 08:17:01 AM] [INFO] running ifdown on interface: vlan20", > "[2018/09/21 08:17:01 AM] [INFO] running ifdown on interface: vlan30", > "[2018/09/21 08:17:02 AM] [INFO] running ifdown on interface: vlan50", > "[2018/09/21 08:17:02 AM] [INFO] running ifdown on interface: eth2", > "[2018/09/21 08:17:02 AM] [INFO] running ifdown on interface: eth1", > "[2018/09/21 08:17:02 AM] [INFO] running ifdown on interface: eth0", > "[2018/09/21 08:17:02 AM] [INFO] running ifdown on interface: vlan20", > "[2018/09/21 08:17:02 AM] [INFO] running ifdown on interface: vlan30", > "[2018/09/21 08:17:02 AM] [INFO] running ifdown on interface: vlan50", > "[2018/09/21 08:17:02 AM] [INFO] running ifdown on bridge: br-isolated", > "[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated", > "[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50", > "[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated", > "[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20", > "[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20", > "[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30", > "[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50", > "[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20", > "[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0", > "[2018/09/21 08:17:02 AM] [INFO] 
Writing config /etc/sysconfig/network-scripts/route-eth1", > "[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2", > "[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50", > "[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated", > "[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2", > "[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1", > "[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0", > "[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30", > "[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2", > "[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30", > "[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0", > "[2018/09/21 08:17:02 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1", > "[2018/09/21 08:17:02 AM] [INFO] running ifup on bridge: br-isolated", > "[2018/09/21 08:17:02 AM] [INFO] running ifup on interface: eth2", > "[2018/09/21 08:17:02 AM] [INFO] running ifup on interface: eth1", > "[2018/09/21 08:17:03 AM] [INFO] running ifup on interface: eth0", > "[2018/09/21 08:17:07 AM] [INFO] running ifup on interface: vlan20", > "[2018/09/21 08:17:11 AM] [INFO] running ifup on interface: vlan30", > "[2018/09/21 08:17:15 AM] [INFO] running ifup on interface: vlan50", > "[2018/09/21 08:17:20 AM] [INFO] running ifup on interface: vlan20", > "[2018/09/21 08:17:20 AM] [INFO] running ifup on interface: vlan30", > "[2018/09/21 08:17:20 AM] [INFO] running ifup on interface: vlan50", > "+ RETVAL=2", > "+ set -e", > "+ [[ 2 == 2 ]]", > "+ ping_metadata_ip", > "++ get_metadata_ip", > "++ local METADATA_IP", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=", > "++ '[' -n '' ']'", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=", > "++ '[' -n '' ']'", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=192.168.24.2", > "++ '[' -n 192.168.24.2 ']'", > "++ break", > "++ echo 192.168.24.2", > "+ local METADATA_IP=192.168.24.2", > "+ '[' -n 192.168.24.2 ']'", > "+ is_local_ip 192.168.24.2", > "+ local IP_TO_CHECK=192.168.24.2", > "+ ip -o a", > "+ grep 'inet6\\? 
192.168.24.2/'", > "+ return 1", > "+ echo -n 'Trying to ping metadata IP 192.168.24.2...'", > "+ _ping=ping", > "+ [[ 192.168.24.2 =~ : ]]", > "+ local COUNT=0", > "+ ping -c 1 192.168.24.2", > "+ echo SUCCESS", > "+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'", > "+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules", > "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'", > "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'", > "+ configure_safe_defaults", > "+ [[ 0 == 0 ]]", > "+ return 0", > "", > "[2018-09-21 08:17:21,021] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/ef1b7865-3d5e-4e63-885e-e798696a27d3", > "", > "[2018-09-21 08:17:21,026] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-09-21 08:17:21,026] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/ef1b7865-3d5e-4e63-885e-e798696a27d3.json < /var/lib/heat-config/deployed/ef1b7865-3d5e-4e63-885e-e798696a27d3.notify.json", > "[2018-09-21 08:17:21,475] (heat-config) [INFO] ", > "[2018-09-21 08:17:21,475] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >ok: [ceph-0] => { > "msg": [ > { > "stderr": [ > "[2018-09-21 08:17:00,081] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/e633e879-7a65-4081-ad7b-bf66c7035600.json", > "[2018-09-21 08:17:15,303] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.2...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.6/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.11/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.15/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.6/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.11/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.15/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ 
/etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/09/21 08:17:00 AM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/09/21 08:17:00 AM] [INFO] Ifcfg net config provider created.\\n[2018/09/21 08:17:00 AM] [INFO] Not using any mapping file.\\n[2018/09/21 08:17:00 AM] [INFO] Finding active nics\\n[2018/09/21 08:17:00 AM] [INFO] eth2 is an embedded active nic\\n[2018/09/21 08:17:00 AM] [INFO] eth1 is an embedded active nic\\n[2018/09/21 08:17:00 AM] [INFO] eth0 is an embedded active nic\\n[2018/09/21 08:17:00 AM] [INFO] lo is not an active nic\\n[2018/09/21 08:17:00 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/09/21 08:17:00 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/09/21 08:17:00 AM] [INFO] nic3 mapped to: eth2\\n[2018/09/21 08:17:00 AM] [INFO] nic2 mapped to: eth1\\n[2018/09/21 08:17:00 AM] [INFO] nic1 mapped to: eth0\\n[2018/09/21 08:17:00 AM] [INFO] adding interface: eth0\\n[2018/09/21 08:17:00 AM] [INFO] adding custom route for interface: eth0\\n[2018/09/21 08:17:00 AM] [INFO] adding bridge: br-isolated\\n[2018/09/21 08:17:00 AM] [INFO] adding interface: eth1\\n[2018/09/21 08:17:00 AM] [INFO] adding vlan: vlan30\\n[2018/09/21 08:17:00 AM] [INFO] adding vlan: vlan40\\n[2018/09/21 08:17:00 AM] [INFO] applying network configs...\\n[2018/09/21 08:17:00 AM] [INFO] running ifdown on interface: vlan30\\n[2018/09/21 08:17:00 AM] [INFO] running ifdown on interface: vlan40\\n[2018/09/21 08:17:00 AM] [INFO] running ifdown on interface: eth1\\n[2018/09/21 08:17:01 AM] [INFO] running ifdown on interface: eth0\\n[2018/09/21 08:17:01 AM] [INFO] running ifdown on interface: vlan30\\n[2018/09/21 08:17:01 AM] [INFO] running ifdown on interface: vlan40\\n[2018/09/21 08:17:01 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40\\n[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\\n[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40\\n[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/09/21 08:17:01 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/09/21 08:17:01 AM] [INFO] running ifup on interface: eth1\\n[2018/09/21 08:17:01 AM] [INFO] running ifup 
on interface: eth0\\n[2018/09/21 08:17:06 AM] [INFO] running ifup on interface: vlan30\\n[2018/09/21 08:17:10 AM] [INFO] running ifup on interface: vlan40\\n[2018/09/21 08:17:14 AM] [INFO] running ifup on interface: vlan30\\n[2018/09/21 08:17:14 AM] [INFO] running ifup on interface: vlan40\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.2\\n++ '[' -n 192.168.24.2 ']'\\n++ break\\n++ echo 192.168.24.2\\n+ local METADATA_IP=192.168.24.2\\n+ '[' -n 192.168.24.2 ']'\\n+ is_local_ip 192.168.24.2\\n+ local IP_TO_CHECK=192.168.24.2\\n+ ip -o a\\n+ grep 'inet6\\\\? 192.168.24.2/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.2...'\\n+ _ping=ping\\n+ [[ 192.168.24.2 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.2\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}", > "[2018-09-21 08:17:15,303] (heat-config) [DEBUG] [2018-09-21 08:17:00,107] (heat-config) [INFO] interface_name=nic1", > "[2018-09-21 08:17:00,108] (heat-config) [INFO] bridge_name=br-ex", > "[2018-09-21 08:17:00,108] (heat-config) [INFO] deploy_server_id=f4c5361d-7430-47e6-b3a8-908850a79a79", > "[2018-09-21 08:17:00,108] (heat-config) [INFO] deploy_action=CREATE", > "[2018-09-21 08:17:00,108] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorage-bre5qklthc6f-0-gxpifqih23ze-NetworkDeployment-263ctk3zuvkd-TripleOSoftwareDeployment-6xum27qyfsh4/a21c346e-502b-4711-aaa0-a66c11d2513e", > "[2018-09-21 08:17:00,108] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-09-21 08:17:00,108] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-09-21 08:17:00,108] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/e633e879-7a65-4081-ad7b-bf66c7035600", > "[2018-09-21 08:17:15,299] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.2...SUCCESS", > "", > "[2018-09-21 08:17:15,299] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.6/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": 
\"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.11/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.15/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}' ']'", > "+ '[' -z '' ']'", > "+ trap configure_safe_defaults EXIT", > "+ mkdir -p /etc/os-net-config", > "+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.6/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.11/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.15/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}'", > "++ type -t network_config_hook", > "+ '[' '' = function ']'", > "+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json", > "+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json", > "+ set +e", > "+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes", > "[2018/09/21 08:17:00 AM] [INFO] Using config file at: /etc/os-net-config/config.json", > "[2018/09/21 08:17:00 AM] [INFO] Ifcfg net config provider created.", > "[2018/09/21 08:17:00 AM] [INFO] Not using any mapping file.", > "[2018/09/21 08:17:00 AM] [INFO] Finding active nics", > "[2018/09/21 08:17:00 AM] [INFO] eth2 is an embedded active nic", > "[2018/09/21 08:17:00 AM] [INFO] eth1 is an embedded active nic", > "[2018/09/21 08:17:00 AM] [INFO] eth0 is an embedded active nic", > "[2018/09/21 08:17:00 AM] [INFO] lo is not an active nic", > "[2018/09/21 08:17:00 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)", > "[2018/09/21 08:17:00 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']", > "[2018/09/21 08:17:00 AM] [INFO] nic3 mapped to: eth2", > "[2018/09/21 08:17:00 AM] [INFO] nic2 mapped to: eth1", > "[2018/09/21 08:17:00 AM] [INFO] nic1 mapped to: eth0", > "[2018/09/21 08:17:00 AM] [INFO] adding interface: eth0", > "[2018/09/21 08:17:00 AM] [INFO] adding custom route for interface: eth0", > "[2018/09/21 08:17:00 AM] [INFO] adding bridge: br-isolated", > "[2018/09/21 08:17:00 AM] [INFO] adding interface: eth1", > "[2018/09/21 08:17:00 AM] [INFO] adding vlan: vlan30", > "[2018/09/21 08:17:00 AM] [INFO] adding vlan: vlan40", > "[2018/09/21 08:17:00 AM] [INFO] applying network configs...", > "[2018/09/21 08:17:00 AM] [INFO] running ifdown on interface: vlan30", > "[2018/09/21 08:17:00 AM] [INFO] running ifdown on interface: vlan40", > "[2018/09/21 08:17:00 AM] [INFO] running ifdown on interface: eth1", > "[2018/09/21 08:17:01 AM] [INFO] running ifdown on interface: eth0", > "[2018/09/21 08:17:01 AM] [INFO] running ifdown on interface: vlan30", > "[2018/09/21 08:17:01 AM] [INFO] running ifdown on interface: vlan40", > "[2018/09/21 08:17:01 AM] [INFO] running ifdown on bridge: br-isolated", > "[2018/09/21 08:17:01 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/route6-br-isolated", > "[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40", > "[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated", > "[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30", > "[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0", > "[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1", > "[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated", > "[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30", > "[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1", > "[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0", > "[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40", > "[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40", > "[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30", > "[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0", > "[2018/09/21 08:17:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1", > "[2018/09/21 08:17:01 AM] [INFO] running ifup on bridge: br-isolated", > "[2018/09/21 08:17:01 AM] [INFO] running ifup on interface: eth1", > "[2018/09/21 08:17:01 AM] [INFO] running ifup on interface: eth0", > "[2018/09/21 08:17:06 AM] [INFO] running ifup on interface: vlan30", > "[2018/09/21 08:17:10 AM] [INFO] running ifup on interface: vlan40", > "[2018/09/21 08:17:14 AM] [INFO] running ifup on interface: vlan30", > "[2018/09/21 08:17:14 AM] [INFO] running ifup on interface: vlan40", > "+ RETVAL=2", > "+ set -e", > "+ [[ 2 == 2 ]]", > "+ ping_metadata_ip", > "++ get_metadata_ip", > "++ local METADATA_IP", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=", > "++ '[' -n '' ']'", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=", > "++ '[' -n '' ']'", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=192.168.24.2", > "++ '[' -n 192.168.24.2 ']'", > "++ break", > "++ echo 192.168.24.2", > "+ local METADATA_IP=192.168.24.2", > "+ '[' -n 192.168.24.2 ']'", > "+ is_local_ip 192.168.24.2", > "+ local IP_TO_CHECK=192.168.24.2", > "+ ip -o a", > "+ grep 'inet6\\? 
192.168.24.2/'", > "+ return 1", > "+ echo -n 'Trying to ping metadata IP 192.168.24.2...'", > "+ _ping=ping", > "+ [[ 192.168.24.2 =~ : ]]", > "+ local COUNT=0", > "+ ping -c 1 192.168.24.2", > "+ echo SUCCESS", > "+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'", > "+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules", > "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'", > "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'", > "+ configure_safe_defaults", > "+ [[ 0 == 0 ]]", > "+ return 0", > "", > "[2018-09-21 08:17:15,300] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/e633e879-7a65-4081-ad7b-bf66c7035600", > "", > "[2018-09-21 08:17:15,304] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-09-21 08:17:15,304] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/e633e879-7a65-4081-ad7b-bf66c7035600.json < /var/lib/heat-config/deployed/e633e879-7a65-4081-ad7b-bf66c7035600.notify.json", > "[2018-09-21 08:17:15,729] (heat-config) [INFO] ", > "[2018-09-21 08:17:15,729] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} > >TASK [Check-mode for Run deployment NetworkDeployment] ************************* >Friday 21 September 2018 08:17:21 -0400 (0:00:00.136) 0:00:44.149 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Lookup deployment UUID] ************************************************** >Friday 21 September 2018 08:17:21 -0400 (0:00:00.100) 0:00:44.250 ****** >ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "f7a3988e-0377-407d-9231-2239387e1794"}, "changed": false} > >TASK [Render deployment file for ControllerUpgradeInitDeployment] ************** >Friday 21 September 2018 08:17:21 -0400 (0:00:00.077) 0:00:44.328 ****** >changed: [controller-0] => {"changed": true, "checksum": "397851fb0ba935b07244a8865b8519703d7feb95", "dest": "/var/lib/heat-config/tripleo-config-download/ControllerUpgradeInitDeployment-f7a3988e-0377-407d-9231-2239387e1794", "gid": 0, "group": "root", "md5sum": "5f62c8f8efea6fcca474bef9db8abbf8", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1183, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532241.9-232910261664137/source", "state": "file", "uid": 0} > >TASK [Check if deployed file exists for ControllerUpgradeInitDeployment] ******* >Friday 21 September 2018 08:17:22 -0400 (0:00:00.541) 0:00:44.870 ****** >ok: [controller-0] => {"changed": false, "stat": {"exists": false}} > >TASK [Check previous deployment rc for ControllerUpgradeInitDeployment] ******** >Friday 21 September 2018 08:17:22 -0400 (0:00:00.212) 0:00:45.083 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Remove deployed file for ControllerUpgradeInitDeployment when previous deployment failed] *** >Friday 21 September 2018 08:17:22 -0400 (0:00:00.039) 0:00:45.122 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Force remove deployed file for ControllerUpgradeInitDeployment] ********** >Friday 21 September 2018 08:17:22 -0400 (0:00:00.045) 0:00:45.167 ****** >skipping: [controller-0] => {"changed": 
false, "skip_reason": "Conditional result was False"} > >TASK [Run deployment ControllerUpgradeInitDeployment] ************************** >Friday 21 September 2018 08:17:22 -0400 (0:00:00.042) 0:00:45.210 ****** >changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/f7a3988e-0377-407d-9231-2239387e1794.notify.json)", "delta": "0:00:00.520755", "end": "2018-09-21 08:17:23.449531", "rc": 0, "start": "2018-09-21 08:17:22.928776", "stderr": "[2018-09-21 08:17:22,953] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/f7a3988e-0377-407d-9231-2239387e1794.json\n[2018-09-21 08:17:22,984] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-09-21 08:17:22,984] (heat-config) [DEBUG] [2018-09-21 08:17:22,976] (heat-config) [INFO] deploy_server_id=10bf651c-fd66-4074-9929-ddfdd495b40b\n[2018-09-21 08:17:22,976] (heat-config) [INFO] deploy_action=CREATE\n[2018-09-21 08:17:22,976] (heat-config) [INFO] deploy_stack_id=overcloud-Controller-afojl4kgiunp-0-bqr7pcxdlsbr-ControllerUpgradeInitDeployment-sra77sidbdjt/428f8eb7-cfd4-4b7f-b884-c5905535f0bd\n[2018-09-21 08:17:22,976] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-09-21 08:17:22,976] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-09-21 08:17:22,977] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/f7a3988e-0377-407d-9231-2239387e1794\n[2018-09-21 08:17:22,980] (heat-config) [INFO] \n[2018-09-21 08:17:22,980] (heat-config) [DEBUG] \n[2018-09-21 08:17:22,980] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/f7a3988e-0377-407d-9231-2239387e1794\n\n[2018-09-21 08:17:22,984] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-09-21 08:17:22,984] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/f7a3988e-0377-407d-9231-2239387e1794.json < /var/lib/heat-config/deployed/f7a3988e-0377-407d-9231-2239387e1794.notify.json\n[2018-09-21 08:17:23,443] (heat-config) [INFO] \n[2018-09-21 08:17:23,443] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-09-21 08:17:22,953] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/f7a3988e-0377-407d-9231-2239387e1794.json", "[2018-09-21 08:17:22,984] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-09-21 08:17:22,984] (heat-config) [DEBUG] [2018-09-21 08:17:22,976] (heat-config) [INFO] deploy_server_id=10bf651c-fd66-4074-9929-ddfdd495b40b", "[2018-09-21 08:17:22,976] (heat-config) [INFO] deploy_action=CREATE", "[2018-09-21 08:17:22,976] (heat-config) [INFO] deploy_stack_id=overcloud-Controller-afojl4kgiunp-0-bqr7pcxdlsbr-ControllerUpgradeInitDeployment-sra77sidbdjt/428f8eb7-cfd4-4b7f-b884-c5905535f0bd", "[2018-09-21 08:17:22,976] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-09-21 08:17:22,976] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-09-21 08:17:22,977] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/f7a3988e-0377-407d-9231-2239387e1794", "[2018-09-21 08:17:22,980] (heat-config) [INFO] ", "[2018-09-21 08:17:22,980] (heat-config) [DEBUG] ", "[2018-09-21 08:17:22,980] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/f7a3988e-0377-407d-9231-2239387e1794", "", "[2018-09-21 
08:17:22,984] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-09-21 08:17:22,984] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/f7a3988e-0377-407d-9231-2239387e1794.json < /var/lib/heat-config/deployed/f7a3988e-0377-407d-9231-2239387e1794.notify.json", "[2018-09-21 08:17:23,443] (heat-config) [INFO] ", "[2018-09-21 08:17:23,443] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} > >TASK [Output for ControllerUpgradeInitDeployment] ****************************** >Friday 21 September 2018 08:17:23 -0400 (0:00:00.744) 0:00:45.954 ****** >ok: [controller-0] => { > "msg": [ > { > "stderr": [ > "[2018-09-21 08:17:22,953] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/f7a3988e-0377-407d-9231-2239387e1794.json", > "[2018-09-21 08:17:22,984] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-09-21 08:17:22,984] (heat-config) [DEBUG] [2018-09-21 08:17:22,976] (heat-config) [INFO] deploy_server_id=10bf651c-fd66-4074-9929-ddfdd495b40b", > "[2018-09-21 08:17:22,976] (heat-config) [INFO] deploy_action=CREATE", > "[2018-09-21 08:17:22,976] (heat-config) [INFO] deploy_stack_id=overcloud-Controller-afojl4kgiunp-0-bqr7pcxdlsbr-ControllerUpgradeInitDeployment-sra77sidbdjt/428f8eb7-cfd4-4b7f-b884-c5905535f0bd", > "[2018-09-21 08:17:22,976] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-09-21 08:17:22,976] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-09-21 08:17:22,977] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/f7a3988e-0377-407d-9231-2239387e1794", > "[2018-09-21 08:17:22,980] (heat-config) [INFO] ", > "[2018-09-21 08:17:22,980] (heat-config) [DEBUG] ", > "[2018-09-21 08:17:22,980] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/f7a3988e-0377-407d-9231-2239387e1794", > "", > "[2018-09-21 08:17:22,984] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-09-21 08:17:22,984] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/f7a3988e-0377-407d-9231-2239387e1794.json < /var/lib/heat-config/deployed/f7a3988e-0377-407d-9231-2239387e1794.notify.json", > "[2018-09-21 08:17:23,443] (heat-config) [INFO] ", > "[2018-09-21 08:17:23,443] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} > >TASK [Check-mode for Run deployment ControllerUpgradeInitDeployment] *********** >Friday 21 September 2018 08:17:23 -0400 (0:00:00.086) 0:00:46.040 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Lookup deployment UUID] ************************************************** >Friday 21 September 2018 08:17:23 -0400 (0:00:00.041) 0:00:46.082 ****** >ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "9b827f38-6cee-4d67-875b-9cbad56ece91"}, "changed": false} >ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "0659dbc6-c804-42c2-be89-b145c79bea70"}, "changed": false} >ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "8b37423a-dee6-432c-be57-782cbeabc6ac"}, "changed": false} > >TASK [Render deployment file for CADeployment] ********************************* >Friday 21 September 2018 08:17:23 -0400 (0:00:00.156) 0:00:46.239 ****** >changed: [controller-0] => {"changed": true, "checksum": "39ae15cdc39529e401a11fe62571a89cce0e9e44", "dest": 
"/var/lib/heat-config/tripleo-config-download/CADeployment-9b827f38-6cee-4d67-875b-9cbad56ece91", "gid": 0, "group": "root", "md5sum": "21608fe0d7f8cb1905e4a20ef1da8d5d", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2999, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532243.82-264325609717397/source", "state": "file", "uid": 0} >changed: [compute-0] => {"changed": true, "checksum": "cfc27834c7ce453f134b65c14f94dcc310fae699", "dest": "/var/lib/heat-config/tripleo-config-download/CADeployment-0659dbc6-c804-42c2-be89-b145c79bea70", "gid": 0, "group": "root", "md5sum": "681fa54e44d5e28bb9d81119bcda730d", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2996, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532243.85-173058636473503/source", "state": "file", "uid": 0} >changed: [ceph-0] => {"changed": true, "checksum": "9d59ed4fbfa7e1187bfb556600547c7764437f52", "dest": "/var/lib/heat-config/tripleo-config-download/CADeployment-8b37423a-dee6-432c-be57-782cbeabc6ac", "gid": 0, "group": "root", "md5sum": "8ee1f9a6dab8ab897f32d50b4466c84b", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 3000, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532243.88-66571091218345/source", "state": "file", "uid": 0} > >TASK [Check if deployed file exists for CADeployment] ************************** >Friday 21 September 2018 08:17:24 -0400 (0:00:00.658) 0:00:46.897 ****** >ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} > >TASK [Check previous deployment rc for CADeployment] *************************** >Friday 21 September 2018 08:17:24 -0400 (0:00:00.292) 0:00:47.190 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Remove deployed file for CADeployment when previous deployment failed] *** >Friday 21 September 2018 08:17:24 -0400 (0:00:00.097) 0:00:47.287 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Force remove deployed file for CADeployment] ***************************** >Friday 21 September 2018 08:17:24 -0400 (0:00:00.094) 0:00:47.382 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Run deployment CADeployment] ********************************************* >Friday 21 September 2018 08:17:24 -0400 (0:00:00.089) 0:00:47.471 ****** >changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/8b37423a-dee6-432c-be57-782cbeabc6ac.notify.json)", "delta": "0:00:01.080089", "end": "2018-09-21 08:17:25.255885", "rc": 0, "start": "2018-09-21 08:17:24.175796", "stderr": "[2018-09-21 08:17:24,201] 
(heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/8b37423a-dee6-432c-be57-782cbeabc6ac.json\n[2018-09-21 08:17:24,889] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"root_cert_md5sum\": \"b2961e1bb192ef8842515ec0320ce302 /etc/pki/ca-trust/source/anchors/ca.crt.pem\\n\", \"deploy_status_code\": 0, \"deploy_stderr\": \"\"}\n[2018-09-21 08:17:24,890] (heat-config) [DEBUG] [2018-09-21 08:17:24,228] (heat-config) [INFO] cacert_path=/etc/pki/ca-trust/source/anchors/ca.crt.pem\n[2018-09-21 08:17:24,228] (heat-config) [INFO] cacert_content=-----BEGIN CERTIFICATE-----\nMIIDlzCCAn+gAwIBAgIJALY0EJYbjjVMMA0GCSqGSIb3DQEBCwUAMGIxCzAJBgNV\nBAYTAlVTMQswCQYDVQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwH\nUmVkIEhBdDELMAkGA1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjAeFw0x\nODA5MjExMTEzNDVaFw0xOTA5MjExMTEzNDVaMGIxCzAJBgNVBAYTAlVTMQswCQYD\nVQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwHUmVkIEhBdDELMAkG\nA1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjCCASIwDQYJKoZIhvcNAQEB\nBQADggEPADCCAQoCggEBAOloaqP9mv8rvusa+RWPmecsF0SNauqiXOiuSL4/d/aS\nGwqRNNoHRCqgQQaS71dNFhyAfk2eI9MpghGfgqOgNeHXvg2q38PLiPuWhUO8mIos\nAKh7O3Vtb2GZEHT+BTWUcfVHaidxuKA5ij3mYWk1CAZYsxmjh3iYcmZo+GnLIfru\nn2FfWXR7ftvLZARE25Kj+7MwjRi7BU2BiILU+0BOPJEZ29pVCyEOvq366jLsWiwR\n7izLQG8ZKZJIIBvjyz79u7GKPoaLZphyj37/fzX5C2cIlIdfEXXp1VjFKrfygThH\nGspagw579hIqPt4ZoGqtiqRy4w7GPxGm6jXQPFnHa1ECAwEAAaNQME4wHQYDVR0O\nBBYEFJ4Fv94lhVAxsMKUAUQXw4Zn3ZavMB8GA1UdIwQYMBaAFJ4Fv94lhVAxsMKU\nAUQXw4Zn3ZavMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAGdpRYfj\n1WOBZPhBTSLDGIn0hQOyOW3+zakb+D4s68EQlZ7xZctD7Nt6kg1Xr9Jphxpc4R4w\nGTI61CGR8qW2gTTp2PgPBXxXY88Z+9OG64ODSstRzhi41/zE9lggOzGotGTrsoxh\nUNs9ACq2+Uem4GoD7790VY27wN3GFQEoj9lFTYl2mojIie8LEub1fJFIxXEpft3q\nHWkxqp418aMwlvlpoBnyXRyEdvarIxlbYdyh0Hy6xcb+q+JQC3glerMET1I8fs/M\nquGY47lurePIoIRZ3sm6UuIPluD/1xm89pvC3MPyyft/RxyPT2EUqIGLWLuUJxMR\nGKHRxeipfQuWl88=\n-----END CERTIFICATE-----\n[2018-09-21 08:17:24,228] (heat-config) [INFO] update_anchor_command=update-ca-trust extract\n[2018-09-21 08:17:24,228] (heat-config) [INFO] deploy_server_id=f4c5361d-7430-47e6-b3a8-908850a79a79\n[2018-09-21 08:17:24,228] (heat-config) [INFO] deploy_action=CREATE\n[2018-09-21 08:17:24,228] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorage-bre5qklthc6f-0-gxpifqih23ze-NodeTLSCAData-gzmyad5fe3rx-CADeployment-hmdt2qxcrpiw/a47e42b3-995c-4492-a546-884b32e4e80e\n[2018-09-21 08:17:24,228] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-09-21 08:17:24,228] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-09-21 08:17:24,229] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/8b37423a-dee6-432c-be57-782cbeabc6ac\n[2018-09-21 08:17:24,885] (heat-config) [INFO] \n[2018-09-21 08:17:24,886] (heat-config) [DEBUG] \n[2018-09-21 08:17:24,886] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/8b37423a-dee6-432c-be57-782cbeabc6ac\n\n[2018-09-21 08:17:24,890] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-09-21 08:17:24,890] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/8b37423a-dee6-432c-be57-782cbeabc6ac.json < /var/lib/heat-config/deployed/8b37423a-dee6-432c-be57-782cbeabc6ac.notify.json\n[2018-09-21 08:17:25,250] (heat-config) [INFO] \n[2018-09-21 08:17:25,250] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-09-21 08:17:24,201] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/8b37423a-dee6-432c-be57-782cbeabc6ac.json", 
"[2018-09-21 08:17:24,889] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"root_cert_md5sum\": \"b2961e1bb192ef8842515ec0320ce302 /etc/pki/ca-trust/source/anchors/ca.crt.pem\\n\", \"deploy_status_code\": 0, \"deploy_stderr\": \"\"}", "[2018-09-21 08:17:24,890] (heat-config) [DEBUG] [2018-09-21 08:17:24,228] (heat-config) [INFO] cacert_path=/etc/pki/ca-trust/source/anchors/ca.crt.pem", "[2018-09-21 08:17:24,228] (heat-config) [INFO] cacert_content=-----BEGIN CERTIFICATE-----", "MIIDlzCCAn+gAwIBAgIJALY0EJYbjjVMMA0GCSqGSIb3DQEBCwUAMGIxCzAJBgNV", "BAYTAlVTMQswCQYDVQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwH", "UmVkIEhBdDELMAkGA1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjAeFw0x", "ODA5MjExMTEzNDVaFw0xOTA5MjExMTEzNDVaMGIxCzAJBgNVBAYTAlVTMQswCQYD", "VQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwHUmVkIEhBdDELMAkG", "A1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjCCASIwDQYJKoZIhvcNAQEB", "BQADggEPADCCAQoCggEBAOloaqP9mv8rvusa+RWPmecsF0SNauqiXOiuSL4/d/aS", "GwqRNNoHRCqgQQaS71dNFhyAfk2eI9MpghGfgqOgNeHXvg2q38PLiPuWhUO8mIos", "AKh7O3Vtb2GZEHT+BTWUcfVHaidxuKA5ij3mYWk1CAZYsxmjh3iYcmZo+GnLIfru", "n2FfWXR7ftvLZARE25Kj+7MwjRi7BU2BiILU+0BOPJEZ29pVCyEOvq366jLsWiwR", "7izLQG8ZKZJIIBvjyz79u7GKPoaLZphyj37/fzX5C2cIlIdfEXXp1VjFKrfygThH", "Gspagw579hIqPt4ZoGqtiqRy4w7GPxGm6jXQPFnHa1ECAwEAAaNQME4wHQYDVR0O", "BBYEFJ4Fv94lhVAxsMKUAUQXw4Zn3ZavMB8GA1UdIwQYMBaAFJ4Fv94lhVAxsMKU", "AUQXw4Zn3ZavMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAGdpRYfj", "1WOBZPhBTSLDGIn0hQOyOW3+zakb+D4s68EQlZ7xZctD7Nt6kg1Xr9Jphxpc4R4w", "GTI61CGR8qW2gTTp2PgPBXxXY88Z+9OG64ODSstRzhi41/zE9lggOzGotGTrsoxh", "UNs9ACq2+Uem4GoD7790VY27wN3GFQEoj9lFTYl2mojIie8LEub1fJFIxXEpft3q", "HWkxqp418aMwlvlpoBnyXRyEdvarIxlbYdyh0Hy6xcb+q+JQC3glerMET1I8fs/M", "quGY47lurePIoIRZ3sm6UuIPluD/1xm89pvC3MPyyft/RxyPT2EUqIGLWLuUJxMR", "GKHRxeipfQuWl88=", "-----END CERTIFICATE-----", "[2018-09-21 08:17:24,228] (heat-config) [INFO] update_anchor_command=update-ca-trust extract", "[2018-09-21 08:17:24,228] (heat-config) [INFO] deploy_server_id=f4c5361d-7430-47e6-b3a8-908850a79a79", "[2018-09-21 08:17:24,228] (heat-config) [INFO] deploy_action=CREATE", "[2018-09-21 08:17:24,228] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorage-bre5qklthc6f-0-gxpifqih23ze-NodeTLSCAData-gzmyad5fe3rx-CADeployment-hmdt2qxcrpiw/a47e42b3-995c-4492-a546-884b32e4e80e", "[2018-09-21 08:17:24,228] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-09-21 08:17:24,228] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-09-21 08:17:24,229] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/8b37423a-dee6-432c-be57-782cbeabc6ac", "[2018-09-21 08:17:24,885] (heat-config) [INFO] ", "[2018-09-21 08:17:24,886] (heat-config) [DEBUG] ", "[2018-09-21 08:17:24,886] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/8b37423a-dee6-432c-be57-782cbeabc6ac", "", "[2018-09-21 08:17:24,890] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-09-21 08:17:24,890] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/8b37423a-dee6-432c-be57-782cbeabc6ac.json < /var/lib/heat-config/deployed/8b37423a-dee6-432c-be57-782cbeabc6ac.notify.json", "[2018-09-21 08:17:25,250] (heat-config) [INFO] ", "[2018-09-21 08:17:25,250] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/9b827f38-6cee-4d67-875b-9cbad56ece91.notify.json)", 
"delta": "0:00:01.240386", "end": "2018-09-21 08:17:26.448142", "rc": 0, "start": "2018-09-21 08:17:25.207756", "stderr": "[2018-09-21 08:17:25,235] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/9b827f38-6cee-4d67-875b-9cbad56ece91.json\n[2018-09-21 08:17:25,998] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"root_cert_md5sum\": \"b2961e1bb192ef8842515ec0320ce302 /etc/pki/ca-trust/source/anchors/ca.crt.pem\\n\", \"deploy_status_code\": 0, \"deploy_stderr\": \"\"}\n[2018-09-21 08:17:25,998] (heat-config) [DEBUG] [2018-09-21 08:17:25,260] (heat-config) [INFO] cacert_path=/etc/pki/ca-trust/source/anchors/ca.crt.pem\n[2018-09-21 08:17:25,260] (heat-config) [INFO] cacert_content=-----BEGIN CERTIFICATE-----\nMIIDlzCCAn+gAwIBAgIJALY0EJYbjjVMMA0GCSqGSIb3DQEBCwUAMGIxCzAJBgNV\nBAYTAlVTMQswCQYDVQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwH\nUmVkIEhBdDELMAkGA1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjAeFw0x\nODA5MjExMTEzNDVaFw0xOTA5MjExMTEzNDVaMGIxCzAJBgNVBAYTAlVTMQswCQYD\nVQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwHUmVkIEhBdDELMAkG\nA1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjCCASIwDQYJKoZIhvcNAQEB\nBQADggEPADCCAQoCggEBAOloaqP9mv8rvusa+RWPmecsF0SNauqiXOiuSL4/d/aS\nGwqRNNoHRCqgQQaS71dNFhyAfk2eI9MpghGfgqOgNeHXvg2q38PLiPuWhUO8mIos\nAKh7O3Vtb2GZEHT+BTWUcfVHaidxuKA5ij3mYWk1CAZYsxmjh3iYcmZo+GnLIfru\nn2FfWXR7ftvLZARE25Kj+7MwjRi7BU2BiILU+0BOPJEZ29pVCyEOvq366jLsWiwR\n7izLQG8ZKZJIIBvjyz79u7GKPoaLZphyj37/fzX5C2cIlIdfEXXp1VjFKrfygThH\nGspagw579hIqPt4ZoGqtiqRy4w7GPxGm6jXQPFnHa1ECAwEAAaNQME4wHQYDVR0O\nBBYEFJ4Fv94lhVAxsMKUAUQXw4Zn3ZavMB8GA1UdIwQYMBaAFJ4Fv94lhVAxsMKU\nAUQXw4Zn3ZavMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAGdpRYfj\n1WOBZPhBTSLDGIn0hQOyOW3+zakb+D4s68EQlZ7xZctD7Nt6kg1Xr9Jphxpc4R4w\nGTI61CGR8qW2gTTp2PgPBXxXY88Z+9OG64ODSstRzhi41/zE9lggOzGotGTrsoxh\nUNs9ACq2+Uem4GoD7790VY27wN3GFQEoj9lFTYl2mojIie8LEub1fJFIxXEpft3q\nHWkxqp418aMwlvlpoBnyXRyEdvarIxlbYdyh0Hy6xcb+q+JQC3glerMET1I8fs/M\nquGY47lurePIoIRZ3sm6UuIPluD/1xm89pvC3MPyyft/RxyPT2EUqIGLWLuUJxMR\nGKHRxeipfQuWl88=\n-----END CERTIFICATE-----\n[2018-09-21 08:17:25,260] (heat-config) [INFO] update_anchor_command=update-ca-trust extract\n[2018-09-21 08:17:25,261] (heat-config) [INFO] deploy_server_id=10bf651c-fd66-4074-9929-ddfdd495b40b\n[2018-09-21 08:17:25,261] (heat-config) [INFO] deploy_action=CREATE\n[2018-09-21 08:17:25,261] (heat-config) [INFO] deploy_stack_id=overcloud-Controller-afojl4kgiunp-0-bqr7pcxdlsbr-NodeTLSCAData-2dpwgxrjbjpp-CADeployment-an7ehdlihqa6/5fde902e-a1bd-4fc0-864d-7d79f436df28\n[2018-09-21 08:17:25,261] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-09-21 08:17:25,261] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-09-21 08:17:25,261] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/9b827f38-6cee-4d67-875b-9cbad56ece91\n[2018-09-21 08:17:25,994] (heat-config) [INFO] \n[2018-09-21 08:17:25,994] (heat-config) [DEBUG] \n[2018-09-21 08:17:25,994] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/9b827f38-6cee-4d67-875b-9cbad56ece91\n\n[2018-09-21 08:17:25,998] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-09-21 08:17:25,999] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/9b827f38-6cee-4d67-875b-9cbad56ece91.json < /var/lib/heat-config/deployed/9b827f38-6cee-4d67-875b-9cbad56ece91.notify.json\n[2018-09-21 08:17:26,441] (heat-config) [INFO] \n[2018-09-21 08:17:26,441] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-09-21 08:17:25,235] 
(heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/9b827f38-6cee-4d67-875b-9cbad56ece91.json", "[2018-09-21 08:17:25,998] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"root_cert_md5sum\": \"b2961e1bb192ef8842515ec0320ce302 /etc/pki/ca-trust/source/anchors/ca.crt.pem\\n\", \"deploy_status_code\": 0, \"deploy_stderr\": \"\"}", "[2018-09-21 08:17:25,998] (heat-config) [DEBUG] [2018-09-21 08:17:25,260] (heat-config) [INFO] cacert_path=/etc/pki/ca-trust/source/anchors/ca.crt.pem", "[2018-09-21 08:17:25,260] (heat-config) [INFO] cacert_content=-----BEGIN CERTIFICATE-----", "MIIDlzCCAn+gAwIBAgIJALY0EJYbjjVMMA0GCSqGSIb3DQEBCwUAMGIxCzAJBgNV", "BAYTAlVTMQswCQYDVQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwH", "UmVkIEhBdDELMAkGA1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjAeFw0x", "ODA5MjExMTEzNDVaFw0xOTA5MjExMTEzNDVaMGIxCzAJBgNVBAYTAlVTMQswCQYD", "VQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwHUmVkIEhBdDELMAkG", "A1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjCCASIwDQYJKoZIhvcNAQEB", "BQADggEPADCCAQoCggEBAOloaqP9mv8rvusa+RWPmecsF0SNauqiXOiuSL4/d/aS", "GwqRNNoHRCqgQQaS71dNFhyAfk2eI9MpghGfgqOgNeHXvg2q38PLiPuWhUO8mIos", "AKh7O3Vtb2GZEHT+BTWUcfVHaidxuKA5ij3mYWk1CAZYsxmjh3iYcmZo+GnLIfru", "n2FfWXR7ftvLZARE25Kj+7MwjRi7BU2BiILU+0BOPJEZ29pVCyEOvq366jLsWiwR", "7izLQG8ZKZJIIBvjyz79u7GKPoaLZphyj37/fzX5C2cIlIdfEXXp1VjFKrfygThH", "Gspagw579hIqPt4ZoGqtiqRy4w7GPxGm6jXQPFnHa1ECAwEAAaNQME4wHQYDVR0O", "BBYEFJ4Fv94lhVAxsMKUAUQXw4Zn3ZavMB8GA1UdIwQYMBaAFJ4Fv94lhVAxsMKU", "AUQXw4Zn3ZavMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAGdpRYfj", "1WOBZPhBTSLDGIn0hQOyOW3+zakb+D4s68EQlZ7xZctD7Nt6kg1Xr9Jphxpc4R4w", "GTI61CGR8qW2gTTp2PgPBXxXY88Z+9OG64ODSstRzhi41/zE9lggOzGotGTrsoxh", "UNs9ACq2+Uem4GoD7790VY27wN3GFQEoj9lFTYl2mojIie8LEub1fJFIxXEpft3q", "HWkxqp418aMwlvlpoBnyXRyEdvarIxlbYdyh0Hy6xcb+q+JQC3glerMET1I8fs/M", "quGY47lurePIoIRZ3sm6UuIPluD/1xm89pvC3MPyyft/RxyPT2EUqIGLWLuUJxMR", "GKHRxeipfQuWl88=", "-----END CERTIFICATE-----", "[2018-09-21 08:17:25,260] (heat-config) [INFO] update_anchor_command=update-ca-trust extract", "[2018-09-21 08:17:25,261] (heat-config) [INFO] deploy_server_id=10bf651c-fd66-4074-9929-ddfdd495b40b", "[2018-09-21 08:17:25,261] (heat-config) [INFO] deploy_action=CREATE", "[2018-09-21 08:17:25,261] (heat-config) [INFO] deploy_stack_id=overcloud-Controller-afojl4kgiunp-0-bqr7pcxdlsbr-NodeTLSCAData-2dpwgxrjbjpp-CADeployment-an7ehdlihqa6/5fde902e-a1bd-4fc0-864d-7d79f436df28", "[2018-09-21 08:17:25,261] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-09-21 08:17:25,261] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-09-21 08:17:25,261] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/9b827f38-6cee-4d67-875b-9cbad56ece91", "[2018-09-21 08:17:25,994] (heat-config) [INFO] ", "[2018-09-21 08:17:25,994] (heat-config) [DEBUG] ", "[2018-09-21 08:17:25,994] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/9b827f38-6cee-4d67-875b-9cbad56ece91", "", "[2018-09-21 08:17:25,998] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-09-21 08:17:25,999] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/9b827f38-6cee-4d67-875b-9cbad56ece91.json < /var/lib/heat-config/deployed/9b827f38-6cee-4d67-875b-9cbad56ece91.notify.json", "[2018-09-21 08:17:26,441] (heat-config) [INFO] ", "[2018-09-21 08:17:26,441] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >changed: [compute-0] => {"changed": true, "cmd": 
"/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/0659dbc6-c804-42c2-be89-b145c79bea70.notify.json)", "delta": "0:00:01.223949", "end": "2018-09-21 08:17:26.439334", "rc": 0, "start": "2018-09-21 08:17:25.215385", "stderr": "[2018-09-21 08:17:25,244] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/0659dbc6-c804-42c2-be89-b145c79bea70.json\n[2018-09-21 08:17:26,000] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"root_cert_md5sum\": \"b2961e1bb192ef8842515ec0320ce302 /etc/pki/ca-trust/source/anchors/ca.crt.pem\\n\", \"deploy_status_code\": 0, \"deploy_stderr\": \"\"}\n[2018-09-21 08:17:26,000] (heat-config) [DEBUG] [2018-09-21 08:17:25,269] (heat-config) [INFO] cacert_path=/etc/pki/ca-trust/source/anchors/ca.crt.pem\n[2018-09-21 08:17:25,269] (heat-config) [INFO] cacert_content=-----BEGIN CERTIFICATE-----\nMIIDlzCCAn+gAwIBAgIJALY0EJYbjjVMMA0GCSqGSIb3DQEBCwUAMGIxCzAJBgNV\nBAYTAlVTMQswCQYDVQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwH\nUmVkIEhBdDELMAkGA1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjAeFw0x\nODA5MjExMTEzNDVaFw0xOTA5MjExMTEzNDVaMGIxCzAJBgNVBAYTAlVTMQswCQYD\nVQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwHUmVkIEhBdDELMAkG\nA1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjCCASIwDQYJKoZIhvcNAQEB\nBQADggEPADCCAQoCggEBAOloaqP9mv8rvusa+RWPmecsF0SNauqiXOiuSL4/d/aS\nGwqRNNoHRCqgQQaS71dNFhyAfk2eI9MpghGfgqOgNeHXvg2q38PLiPuWhUO8mIos\nAKh7O3Vtb2GZEHT+BTWUcfVHaidxuKA5ij3mYWk1CAZYsxmjh3iYcmZo+GnLIfru\nn2FfWXR7ftvLZARE25Kj+7MwjRi7BU2BiILU+0BOPJEZ29pVCyEOvq366jLsWiwR\n7izLQG8ZKZJIIBvjyz79u7GKPoaLZphyj37/fzX5C2cIlIdfEXXp1VjFKrfygThH\nGspagw579hIqPt4ZoGqtiqRy4w7GPxGm6jXQPFnHa1ECAwEAAaNQME4wHQYDVR0O\nBBYEFJ4Fv94lhVAxsMKUAUQXw4Zn3ZavMB8GA1UdIwQYMBaAFJ4Fv94lhVAxsMKU\nAUQXw4Zn3ZavMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAGdpRYfj\n1WOBZPhBTSLDGIn0hQOyOW3+zakb+D4s68EQlZ7xZctD7Nt6kg1Xr9Jphxpc4R4w\nGTI61CGR8qW2gTTp2PgPBXxXY88Z+9OG64ODSstRzhi41/zE9lggOzGotGTrsoxh\nUNs9ACq2+Uem4GoD7790VY27wN3GFQEoj9lFTYl2mojIie8LEub1fJFIxXEpft3q\nHWkxqp418aMwlvlpoBnyXRyEdvarIxlbYdyh0Hy6xcb+q+JQC3glerMET1I8fs/M\nquGY47lurePIoIRZ3sm6UuIPluD/1xm89pvC3MPyyft/RxyPT2EUqIGLWLuUJxMR\nGKHRxeipfQuWl88=\n-----END CERTIFICATE-----\n[2018-09-21 08:17:25,269] (heat-config) [INFO] update_anchor_command=update-ca-trust extract\n[2018-09-21 08:17:25,269] (heat-config) [INFO] deploy_server_id=41df08ab-b98d-4a4e-b91d-2da74cba2af6\n[2018-09-21 08:17:25,269] (heat-config) [INFO] deploy_action=CREATE\n[2018-09-21 08:17:25,270] (heat-config) [INFO] deploy_stack_id=overcloud-Compute-vwvs6vnygabp-0-3khh4rv6h5h3-NodeTLSCAData-2ebylm3m5hcg-CADeployment-erovjhlpgsfc/9a08cdec-2a22-4665-92b2-83f63a3c13e9\n[2018-09-21 08:17:25,270] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-09-21 08:17:25,270] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-09-21 08:17:25,270] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/0659dbc6-c804-42c2-be89-b145c79bea70\n[2018-09-21 08:17:25,995] (heat-config) [INFO] \n[2018-09-21 08:17:25,995] (heat-config) [DEBUG] \n[2018-09-21 08:17:25,995] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/0659dbc6-c804-42c2-be89-b145c79bea70\n\n[2018-09-21 08:17:26,000] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-09-21 08:17:26,000] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/0659dbc6-c804-42c2-be89-b145c79bea70.json < 
/var/lib/heat-config/deployed/0659dbc6-c804-42c2-be89-b145c79bea70.notify.json\n[2018-09-21 08:17:26,432] (heat-config) [INFO] \n[2018-09-21 08:17:26,432] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-09-21 08:17:25,244] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/0659dbc6-c804-42c2-be89-b145c79bea70.json", "[2018-09-21 08:17:26,000] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"root_cert_md5sum\": \"b2961e1bb192ef8842515ec0320ce302 /etc/pki/ca-trust/source/anchors/ca.crt.pem\\n\", \"deploy_status_code\": 0, \"deploy_stderr\": \"\"}", "[2018-09-21 08:17:26,000] (heat-config) [DEBUG] [2018-09-21 08:17:25,269] (heat-config) [INFO] cacert_path=/etc/pki/ca-trust/source/anchors/ca.crt.pem", "[2018-09-21 08:17:25,269] (heat-config) [INFO] cacert_content=-----BEGIN CERTIFICATE-----", "MIIDlzCCAn+gAwIBAgIJALY0EJYbjjVMMA0GCSqGSIb3DQEBCwUAMGIxCzAJBgNV", "BAYTAlVTMQswCQYDVQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwH", "UmVkIEhBdDELMAkGA1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjAeFw0x", "ODA5MjExMTEzNDVaFw0xOTA5MjExMTEzNDVaMGIxCzAJBgNVBAYTAlVTMQswCQYD", "VQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwHUmVkIEhBdDELMAkG", "A1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjCCASIwDQYJKoZIhvcNAQEB", "BQADggEPADCCAQoCggEBAOloaqP9mv8rvusa+RWPmecsF0SNauqiXOiuSL4/d/aS", "GwqRNNoHRCqgQQaS71dNFhyAfk2eI9MpghGfgqOgNeHXvg2q38PLiPuWhUO8mIos", "AKh7O3Vtb2GZEHT+BTWUcfVHaidxuKA5ij3mYWk1CAZYsxmjh3iYcmZo+GnLIfru", "n2FfWXR7ftvLZARE25Kj+7MwjRi7BU2BiILU+0BOPJEZ29pVCyEOvq366jLsWiwR", "7izLQG8ZKZJIIBvjyz79u7GKPoaLZphyj37/fzX5C2cIlIdfEXXp1VjFKrfygThH", "Gspagw579hIqPt4ZoGqtiqRy4w7GPxGm6jXQPFnHa1ECAwEAAaNQME4wHQYDVR0O", "BBYEFJ4Fv94lhVAxsMKUAUQXw4Zn3ZavMB8GA1UdIwQYMBaAFJ4Fv94lhVAxsMKU", "AUQXw4Zn3ZavMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAGdpRYfj", "1WOBZPhBTSLDGIn0hQOyOW3+zakb+D4s68EQlZ7xZctD7Nt6kg1Xr9Jphxpc4R4w", "GTI61CGR8qW2gTTp2PgPBXxXY88Z+9OG64ODSstRzhi41/zE9lggOzGotGTrsoxh", "UNs9ACq2+Uem4GoD7790VY27wN3GFQEoj9lFTYl2mojIie8LEub1fJFIxXEpft3q", "HWkxqp418aMwlvlpoBnyXRyEdvarIxlbYdyh0Hy6xcb+q+JQC3glerMET1I8fs/M", "quGY47lurePIoIRZ3sm6UuIPluD/1xm89pvC3MPyyft/RxyPT2EUqIGLWLuUJxMR", "GKHRxeipfQuWl88=", "-----END CERTIFICATE-----", "[2018-09-21 08:17:25,269] (heat-config) [INFO] update_anchor_command=update-ca-trust extract", "[2018-09-21 08:17:25,269] (heat-config) [INFO] deploy_server_id=41df08ab-b98d-4a4e-b91d-2da74cba2af6", "[2018-09-21 08:17:25,269] (heat-config) [INFO] deploy_action=CREATE", "[2018-09-21 08:17:25,270] (heat-config) [INFO] deploy_stack_id=overcloud-Compute-vwvs6vnygabp-0-3khh4rv6h5h3-NodeTLSCAData-2ebylm3m5hcg-CADeployment-erovjhlpgsfc/9a08cdec-2a22-4665-92b2-83f63a3c13e9", "[2018-09-21 08:17:25,270] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-09-21 08:17:25,270] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-09-21 08:17:25,270] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/0659dbc6-c804-42c2-be89-b145c79bea70", "[2018-09-21 08:17:25,995] (heat-config) [INFO] ", "[2018-09-21 08:17:25,995] (heat-config) [DEBUG] ", "[2018-09-21 08:17:25,995] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/0659dbc6-c804-42c2-be89-b145c79bea70", "", "[2018-09-21 08:17:26,000] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-09-21 08:17:26,000] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/0659dbc6-c804-42c2-be89-b145c79bea70.json < /var/lib/heat-config/deployed/0659dbc6-c804-42c2-be89-b145c79bea70.notify.json", 
"[2018-09-21 08:17:26,432] (heat-config) [INFO] ", "[2018-09-21 08:17:26,432] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} > >TASK [Output for CADeployment] ************************************************* >Friday 21 September 2018 08:17:26 -0400 (0:00:01.486) 0:00:48.958 ****** >ok: [controller-0] => { > "msg": [ > { > "stderr": [ > "[2018-09-21 08:17:25,235] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/9b827f38-6cee-4d67-875b-9cbad56ece91.json", > "[2018-09-21 08:17:25,998] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"root_cert_md5sum\": \"b2961e1bb192ef8842515ec0320ce302 /etc/pki/ca-trust/source/anchors/ca.crt.pem\\n\", \"deploy_status_code\": 0, \"deploy_stderr\": \"\"}", > "[2018-09-21 08:17:25,998] (heat-config) [DEBUG] [2018-09-21 08:17:25,260] (heat-config) [INFO] cacert_path=/etc/pki/ca-trust/source/anchors/ca.crt.pem", > "[2018-09-21 08:17:25,260] (heat-config) [INFO] cacert_content=-----BEGIN CERTIFICATE-----", > "MIIDlzCCAn+gAwIBAgIJALY0EJYbjjVMMA0GCSqGSIb3DQEBCwUAMGIxCzAJBgNV", > "BAYTAlVTMQswCQYDVQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwH", > "UmVkIEhBdDELMAkGA1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjAeFw0x", > "ODA5MjExMTEzNDVaFw0xOTA5MjExMTEzNDVaMGIxCzAJBgNVBAYTAlVTMQswCQYD", > "VQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwHUmVkIEhBdDELMAkG", > "A1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjCCASIwDQYJKoZIhvcNAQEB", > "BQADggEPADCCAQoCggEBAOloaqP9mv8rvusa+RWPmecsF0SNauqiXOiuSL4/d/aS", > "GwqRNNoHRCqgQQaS71dNFhyAfk2eI9MpghGfgqOgNeHXvg2q38PLiPuWhUO8mIos", > "AKh7O3Vtb2GZEHT+BTWUcfVHaidxuKA5ij3mYWk1CAZYsxmjh3iYcmZo+GnLIfru", > "n2FfWXR7ftvLZARE25Kj+7MwjRi7BU2BiILU+0BOPJEZ29pVCyEOvq366jLsWiwR", > "7izLQG8ZKZJIIBvjyz79u7GKPoaLZphyj37/fzX5C2cIlIdfEXXp1VjFKrfygThH", > "Gspagw579hIqPt4ZoGqtiqRy4w7GPxGm6jXQPFnHa1ECAwEAAaNQME4wHQYDVR0O", > "BBYEFJ4Fv94lhVAxsMKUAUQXw4Zn3ZavMB8GA1UdIwQYMBaAFJ4Fv94lhVAxsMKU", > "AUQXw4Zn3ZavMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAGdpRYfj", > "1WOBZPhBTSLDGIn0hQOyOW3+zakb+D4s68EQlZ7xZctD7Nt6kg1Xr9Jphxpc4R4w", > "GTI61CGR8qW2gTTp2PgPBXxXY88Z+9OG64ODSstRzhi41/zE9lggOzGotGTrsoxh", > "UNs9ACq2+Uem4GoD7790VY27wN3GFQEoj9lFTYl2mojIie8LEub1fJFIxXEpft3q", > "HWkxqp418aMwlvlpoBnyXRyEdvarIxlbYdyh0Hy6xcb+q+JQC3glerMET1I8fs/M", > "quGY47lurePIoIRZ3sm6UuIPluD/1xm89pvC3MPyyft/RxyPT2EUqIGLWLuUJxMR", > "GKHRxeipfQuWl88=", > "-----END CERTIFICATE-----", > "[2018-09-21 08:17:25,260] (heat-config) [INFO] update_anchor_command=update-ca-trust extract", > "[2018-09-21 08:17:25,261] (heat-config) [INFO] deploy_server_id=10bf651c-fd66-4074-9929-ddfdd495b40b", > "[2018-09-21 08:17:25,261] (heat-config) [INFO] deploy_action=CREATE", > "[2018-09-21 08:17:25,261] (heat-config) [INFO] deploy_stack_id=overcloud-Controller-afojl4kgiunp-0-bqr7pcxdlsbr-NodeTLSCAData-2dpwgxrjbjpp-CADeployment-an7ehdlihqa6/5fde902e-a1bd-4fc0-864d-7d79f436df28", > "[2018-09-21 08:17:25,261] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-09-21 08:17:25,261] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-09-21 08:17:25,261] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/9b827f38-6cee-4d67-875b-9cbad56ece91", > "[2018-09-21 08:17:25,994] (heat-config) [INFO] ", > "[2018-09-21 08:17:25,994] (heat-config) [DEBUG] ", > "[2018-09-21 08:17:25,994] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/9b827f38-6cee-4d67-875b-9cbad56ece91", > "", > "[2018-09-21 08:17:25,998] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > 
"[2018-09-21 08:17:25,999] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/9b827f38-6cee-4d67-875b-9cbad56ece91.json < /var/lib/heat-config/deployed/9b827f38-6cee-4d67-875b-9cbad56ece91.notify.json", > "[2018-09-21 08:17:26,441] (heat-config) [INFO] ", > "[2018-09-21 08:17:26,441] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >ok: [compute-0] => { > "msg": [ > { > "stderr": [ > "[2018-09-21 08:17:25,244] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/0659dbc6-c804-42c2-be89-b145c79bea70.json", > "[2018-09-21 08:17:26,000] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"root_cert_md5sum\": \"b2961e1bb192ef8842515ec0320ce302 /etc/pki/ca-trust/source/anchors/ca.crt.pem\\n\", \"deploy_status_code\": 0, \"deploy_stderr\": \"\"}", > "[2018-09-21 08:17:26,000] (heat-config) [DEBUG] [2018-09-21 08:17:25,269] (heat-config) [INFO] cacert_path=/etc/pki/ca-trust/source/anchors/ca.crt.pem", > "[2018-09-21 08:17:25,269] (heat-config) [INFO] cacert_content=-----BEGIN CERTIFICATE-----", > "MIIDlzCCAn+gAwIBAgIJALY0EJYbjjVMMA0GCSqGSIb3DQEBCwUAMGIxCzAJBgNV", > "BAYTAlVTMQswCQYDVQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwH", > "UmVkIEhBdDELMAkGA1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjAeFw0x", > "ODA5MjExMTEzNDVaFw0xOTA5MjExMTEzNDVaMGIxCzAJBgNVBAYTAlVTMQswCQYD", > "VQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwHUmVkIEhBdDELMAkG", > "A1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjCCASIwDQYJKoZIhvcNAQEB", > "BQADggEPADCCAQoCggEBAOloaqP9mv8rvusa+RWPmecsF0SNauqiXOiuSL4/d/aS", > "GwqRNNoHRCqgQQaS71dNFhyAfk2eI9MpghGfgqOgNeHXvg2q38PLiPuWhUO8mIos", > "AKh7O3Vtb2GZEHT+BTWUcfVHaidxuKA5ij3mYWk1CAZYsxmjh3iYcmZo+GnLIfru", > "n2FfWXR7ftvLZARE25Kj+7MwjRi7BU2BiILU+0BOPJEZ29pVCyEOvq366jLsWiwR", > "7izLQG8ZKZJIIBvjyz79u7GKPoaLZphyj37/fzX5C2cIlIdfEXXp1VjFKrfygThH", > "Gspagw579hIqPt4ZoGqtiqRy4w7GPxGm6jXQPFnHa1ECAwEAAaNQME4wHQYDVR0O", > "BBYEFJ4Fv94lhVAxsMKUAUQXw4Zn3ZavMB8GA1UdIwQYMBaAFJ4Fv94lhVAxsMKU", > "AUQXw4Zn3ZavMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAGdpRYfj", > "1WOBZPhBTSLDGIn0hQOyOW3+zakb+D4s68EQlZ7xZctD7Nt6kg1Xr9Jphxpc4R4w", > "GTI61CGR8qW2gTTp2PgPBXxXY88Z+9OG64ODSstRzhi41/zE9lggOzGotGTrsoxh", > "UNs9ACq2+Uem4GoD7790VY27wN3GFQEoj9lFTYl2mojIie8LEub1fJFIxXEpft3q", > "HWkxqp418aMwlvlpoBnyXRyEdvarIxlbYdyh0Hy6xcb+q+JQC3glerMET1I8fs/M", > "quGY47lurePIoIRZ3sm6UuIPluD/1xm89pvC3MPyyft/RxyPT2EUqIGLWLuUJxMR", > "GKHRxeipfQuWl88=", > "-----END CERTIFICATE-----", > "[2018-09-21 08:17:25,269] (heat-config) [INFO] update_anchor_command=update-ca-trust extract", > "[2018-09-21 08:17:25,269] (heat-config) [INFO] deploy_server_id=41df08ab-b98d-4a4e-b91d-2da74cba2af6", > "[2018-09-21 08:17:25,269] (heat-config) [INFO] deploy_action=CREATE", > "[2018-09-21 08:17:25,270] (heat-config) [INFO] deploy_stack_id=overcloud-Compute-vwvs6vnygabp-0-3khh4rv6h5h3-NodeTLSCAData-2ebylm3m5hcg-CADeployment-erovjhlpgsfc/9a08cdec-2a22-4665-92b2-83f63a3c13e9", > "[2018-09-21 08:17:25,270] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-09-21 08:17:25,270] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-09-21 08:17:25,270] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/0659dbc6-c804-42c2-be89-b145c79bea70", > "[2018-09-21 08:17:25,995] (heat-config) [INFO] ", > "[2018-09-21 08:17:25,995] (heat-config) [DEBUG] ", > "[2018-09-21 08:17:25,995] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/0659dbc6-c804-42c2-be89-b145c79bea70", > "", > "[2018-09-21 
08:17:26,000] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-09-21 08:17:26,000] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/0659dbc6-c804-42c2-be89-b145c79bea70.json < /var/lib/heat-config/deployed/0659dbc6-c804-42c2-be89-b145c79bea70.notify.json", > "[2018-09-21 08:17:26,432] (heat-config) [INFO] ", > "[2018-09-21 08:17:26,432] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >ok: [ceph-0] => { > "msg": [ > { > "stderr": [ > "[2018-09-21 08:17:24,201] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/8b37423a-dee6-432c-be57-782cbeabc6ac.json", > "[2018-09-21 08:17:24,889] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"root_cert_md5sum\": \"b2961e1bb192ef8842515ec0320ce302 /etc/pki/ca-trust/source/anchors/ca.crt.pem\\n\", \"deploy_status_code\": 0, \"deploy_stderr\": \"\"}", > "[2018-09-21 08:17:24,890] (heat-config) [DEBUG] [2018-09-21 08:17:24,228] (heat-config) [INFO] cacert_path=/etc/pki/ca-trust/source/anchors/ca.crt.pem", > "[2018-09-21 08:17:24,228] (heat-config) [INFO] cacert_content=-----BEGIN CERTIFICATE-----", > "MIIDlzCCAn+gAwIBAgIJALY0EJYbjjVMMA0GCSqGSIb3DQEBCwUAMGIxCzAJBgNV", > "BAYTAlVTMQswCQYDVQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwH", > "UmVkIEhBdDELMAkGA1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjAeFw0x", > "ODA5MjExMTEzNDVaFw0xOTA5MjExMTEzNDVaMGIxCzAJBgNVBAYTAlVTMQswCQYD", > "VQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwHUmVkIEhBdDELMAkG", > "A1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjCCASIwDQYJKoZIhvcNAQEB", > "BQADggEPADCCAQoCggEBAOloaqP9mv8rvusa+RWPmecsF0SNauqiXOiuSL4/d/aS", > "GwqRNNoHRCqgQQaS71dNFhyAfk2eI9MpghGfgqOgNeHXvg2q38PLiPuWhUO8mIos", > "AKh7O3Vtb2GZEHT+BTWUcfVHaidxuKA5ij3mYWk1CAZYsxmjh3iYcmZo+GnLIfru", > "n2FfWXR7ftvLZARE25Kj+7MwjRi7BU2BiILU+0BOPJEZ29pVCyEOvq366jLsWiwR", > "7izLQG8ZKZJIIBvjyz79u7GKPoaLZphyj37/fzX5C2cIlIdfEXXp1VjFKrfygThH", > "Gspagw579hIqPt4ZoGqtiqRy4w7GPxGm6jXQPFnHa1ECAwEAAaNQME4wHQYDVR0O", > "BBYEFJ4Fv94lhVAxsMKUAUQXw4Zn3ZavMB8GA1UdIwQYMBaAFJ4Fv94lhVAxsMKU", > "AUQXw4Zn3ZavMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAGdpRYfj", > "1WOBZPhBTSLDGIn0hQOyOW3+zakb+D4s68EQlZ7xZctD7Nt6kg1Xr9Jphxpc4R4w", > "GTI61CGR8qW2gTTp2PgPBXxXY88Z+9OG64ODSstRzhi41/zE9lggOzGotGTrsoxh", > "UNs9ACq2+Uem4GoD7790VY27wN3GFQEoj9lFTYl2mojIie8LEub1fJFIxXEpft3q", > "HWkxqp418aMwlvlpoBnyXRyEdvarIxlbYdyh0Hy6xcb+q+JQC3glerMET1I8fs/M", > "quGY47lurePIoIRZ3sm6UuIPluD/1xm89pvC3MPyyft/RxyPT2EUqIGLWLuUJxMR", > "GKHRxeipfQuWl88=", > "-----END CERTIFICATE-----", > "[2018-09-21 08:17:24,228] (heat-config) [INFO] update_anchor_command=update-ca-trust extract", > "[2018-09-21 08:17:24,228] (heat-config) [INFO] deploy_server_id=f4c5361d-7430-47e6-b3a8-908850a79a79", > "[2018-09-21 08:17:24,228] (heat-config) [INFO] deploy_action=CREATE", > "[2018-09-21 08:17:24,228] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorage-bre5qklthc6f-0-gxpifqih23ze-NodeTLSCAData-gzmyad5fe3rx-CADeployment-hmdt2qxcrpiw/a47e42b3-995c-4492-a546-884b32e4e80e", > "[2018-09-21 08:17:24,228] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-09-21 08:17:24,228] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-09-21 08:17:24,229] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/8b37423a-dee6-432c-be57-782cbeabc6ac", > "[2018-09-21 08:17:24,885] (heat-config) [INFO] ", > "[2018-09-21 08:17:24,886] (heat-config) [DEBUG] ", > "[2018-09-21 08:17:24,886] (heat-config) [INFO] Completed 
/var/lib/heat-config/heat-config-script/8b37423a-dee6-432c-be57-782cbeabc6ac", > "", > "[2018-09-21 08:17:24,890] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-09-21 08:17:24,890] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/8b37423a-dee6-432c-be57-782cbeabc6ac.json < /var/lib/heat-config/deployed/8b37423a-dee6-432c-be57-782cbeabc6ac.notify.json", > "[2018-09-21 08:17:25,250] (heat-config) [INFO] ", > "[2018-09-21 08:17:25,250] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} > >TASK [Check-mode for Run deployment CADeployment] ****************************** >Friday 21 September 2018 08:17:26 -0400 (0:00:00.185) 0:00:49.144 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Lookup deployment UUID] ************************************************** >Friday 21 September 2018 08:17:26 -0400 (0:00:00.088) 0:00:49.233 ****** >ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "a62f0534-f5c7-4b70-b27c-cfc15ab037e5"}, "changed": false} > >TASK [Render deployment file for ControllerDeployment] ************************* >Friday 21 September 2018 08:17:27 -0400 (0:00:00.433) 0:00:49.666 ****** >changed: [controller-0] => {"changed": true, "checksum": "a00d72174a5661ba363fcc3b9b7ce162c909d97d", "dest": "/var/lib/heat-config/tripleo-config-download/ControllerDeployment-a62f0534-f5c7-4b70-b27c-cfc15ab037e5", "gid": 0, "group": "root", "md5sum": "ec14f6e7b82bb25ec4f3af2082c2399a", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 73836, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532247.6-169950352283444/source", "state": "file", "uid": 0} > >TASK [Check if deployed file exists for ControllerDeployment] ****************** >Friday 21 September 2018 08:17:28 -0400 (0:00:00.989) 0:00:50.656 ****** >ok: [controller-0] => {"changed": false, "stat": {"exists": false}} > >TASK [Check previous deployment rc for ControllerDeployment] ******************* >Friday 21 September 2018 08:17:28 -0400 (0:00:00.237) 0:00:50.893 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Remove deployed file for ControllerDeployment when previous deployment failed] *** >Friday 21 September 2018 08:17:28 -0400 (0:00:00.050) 0:00:50.944 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Force remove deployed file for ControllerDeployment] ********************* >Friday 21 September 2018 08:17:28 -0400 (0:00:00.047) 0:00:50.992 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Run deployment ControllerDeployment] ************************************* >Friday 21 September 2018 08:17:28 -0400 (0:00:00.046) 0:00:51.039 ****** >changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/a62f0534-f5c7-4b70-b27c-cfc15ab037e5.notify.json)", "delta": "0:00:00.644853", "end": "2018-09-21 08:17:29.415002", "rc": 0, "start": "2018-09-21 08:17:28.770149", "stderr": "[2018-09-21 08:17:28,805] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < 
/var/lib/heat-config/deployed/a62f0534-f5c7-4b70-b27c-cfc15ab037e5.json\n[2018-09-21 08:17:28,960] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-09-21 08:17:28,960] (heat-config) [DEBUG] \n[2018-09-21 08:17:28,960] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera\n[2018-09-21 08:17:28,960] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/a62f0534-f5c7-4b70-b27c-cfc15ab037e5.json < /var/lib/heat-config/deployed/a62f0534-f5c7-4b70-b27c-cfc15ab037e5.notify.json\n[2018-09-21 08:17:29,407] (heat-config) [INFO] \n[2018-09-21 08:17:29,408] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-09-21 08:17:28,805] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/a62f0534-f5c7-4b70-b27c-cfc15ab037e5.json", "[2018-09-21 08:17:28,960] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-09-21 08:17:28,960] (heat-config) [DEBUG] ", "[2018-09-21 08:17:28,960] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", "[2018-09-21 08:17:28,960] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/a62f0534-f5c7-4b70-b27c-cfc15ab037e5.json < /var/lib/heat-config/deployed/a62f0534-f5c7-4b70-b27c-cfc15ab037e5.notify.json", "[2018-09-21 08:17:29,407] (heat-config) [INFO] ", "[2018-09-21 08:17:29,408] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} > >TASK [Output for ControllerDeployment] ***************************************** >Friday 21 September 2018 08:17:29 -0400 (0:00:00.882) 0:00:51.921 ****** >ok: [controller-0] => { > "msg": [ > { > "stderr": [ > "[2018-09-21 08:17:28,805] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/a62f0534-f5c7-4b70-b27c-cfc15ab037e5.json", > "[2018-09-21 08:17:28,960] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-09-21 08:17:28,960] (heat-config) [DEBUG] ", > "[2018-09-21 08:17:28,960] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", > "[2018-09-21 08:17:28,960] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/a62f0534-f5c7-4b70-b27c-cfc15ab037e5.json < /var/lib/heat-config/deployed/a62f0534-f5c7-4b70-b27c-cfc15ab037e5.notify.json", > "[2018-09-21 08:17:29,407] (heat-config) [INFO] ", > "[2018-09-21 08:17:29,408] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} > >TASK [Check-mode for Run deployment ControllerDeployment] ********************** >Friday 21 September 2018 08:17:29 -0400 (0:00:00.096) 0:00:52.018 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Lookup deployment UUID] ************************************************** >Friday 21 September 2018 08:17:29 -0400 (0:00:00.044) 0:00:52.063 ****** >ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "82d05ffe-4599-4556-8a01-1bfd5c53e3cd"}, "changed": false} > >TASK [Render deployment file for ControllerHostsDeployment] ******************** >Friday 21 September 2018 08:17:29 -0400 (0:00:00.102) 0:00:52.165 ****** >changed: [controller-0] => {"changed": true, "checksum": "f07263bdd24c3e9859c1827ac88c0f4835542fca", "dest": "/var/lib/heat-config/tripleo-config-download/ControllerHostsDeployment-82d05ffe-4599-4556-8a01-1bfd5c53e3cd", "gid": 0, "group": "root", "md5sum": "ffd408243a711e9691a3ac0ffe219700", "mode": 
"0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 4425, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532249.74-62276500635560/source", "state": "file", "uid": 0} > >TASK [Check if deployed file exists for ControllerHostsDeployment] ************* >Friday 21 September 2018 08:17:30 -0400 (0:00:00.588) 0:00:52.753 ****** >ok: [controller-0] => {"changed": false, "stat": {"exists": false}} > >TASK [Check previous deployment rc for ControllerHostsDeployment] ************** >Friday 21 September 2018 08:17:30 -0400 (0:00:00.234) 0:00:52.987 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Remove deployed file for ControllerHostsDeployment when previous deployment failed] *** >Friday 21 September 2018 08:17:30 -0400 (0:00:00.052) 0:00:53.040 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Force remove deployed file for ControllerHostsDeployment] **************** >Friday 21 September 2018 08:17:30 -0400 (0:00:00.049) 0:00:53.089 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Run deployment ControllerHostsDeployment] ******************************** >Friday 21 September 2018 08:17:30 -0400 (0:00:00.047) 0:00:53.136 ****** >changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/82d05ffe-4599-4556-8a01-1bfd5c53e3cd.notify.json)", "delta": "0:00:00.470467", "end": "2018-09-21 08:17:31.339255", "rc": 0, "start": "2018-09-21 08:17:30.868788", "stderr": "[2018-09-21 08:17:30,896] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/82d05ffe-4599-4556-8a01-1bfd5c53e3cd.json\n[2018-09-21 08:17:30,951] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' 
-z '192.168.24.7 overcloud.ctlplane.localdomain
172.17.3.21 overcloud.storage.localdomain
172.17.4.13 overcloud.storagemgmt.localdomain
172.17.1.15 overcloud.internalapi.localdomain
10.0.0.111 overcloud.localdomain
172.17.1.17 controller-0.localdomain controller-0
172.17.3.16 controller-0.storage.localdomain controller-0.storage
172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt
172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi
172.17.2.22 controller-0.tenant.localdomain controller-0.tenant
10.0.0.117 controller-0.external.localdomain controller-0.external
192.168.24.18 controller-0.management.localdomain controller-0.management
192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane

172.17.1.12 compute-0.localdomain compute-0
172.17.3.10 compute-0.storage.localdomain compute-0.storage
192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt
172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi
172.17.2.21 compute-0.tenant.localdomain compute-0.tenant
192.168.24.8 compute-0.external.localdomain compute-0.external
192.168.24.8 compute-0.management.localdomain compute-0.management
192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane

172.17.3.11 ceph-0.localdomain ceph-0
172.17.3.11 ceph-0.storage.localdomain ceph-0.storage
172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt
192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi
192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant
192.168.24.6 ceph-0.external.localdomain ceph-0.external
192.168.24.6 ceph-0.management.localdomain ceph-0.management
192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'
+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'
+ write_entries /etc/cloud/templates/hosts.debian.tmpl '<same hosts entries as above>'
+ local file=/etc/cloud/templates/hosts.debian.tmpl
+ local 'entries=<same hosts entries as above>'
+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'
+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl
++ hostname -s
+ sed -i /controller-0/d /etc/cloud/templates/hosts.debian.tmpl
+ echo -ne '\n# HEAT_HOSTS_START - Do not edit manually within this section!\n'
+ echo '<same hosts entries as above>'
+ echo -ne '# HEAT_HOSTS_END\n\n'
(the same write_entries trace repeats verbatim for /etc/cloud/templates/hosts.freebsd.tmpl, /etc/cloud/templates/hosts.redhat.tmpl, /etc/cloud/templates/hosts.suse.tmpl and /etc/hosts)
", "deploy_status_code": 0}
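The xtrace above comes from a small shell helper that rewrites a marker-delimited block in each cloud-init hosts template and in /etc/hosts itself. A minimal reconstruction of write_entries, inferred from the trace (an approximation, not the verbatim TripleO script: the file redirections are assumed, since xtrace does not show them, and the branch for files that already contain the markers is never taken in this run, so its body is only a placeholder):

    write_entries() {
        local file="$1"
        local entries="$2"
        # Skip hosts templates that do not exist on this image
        if [ ! -f "$file" ]; then
            return
        fi
        if grep -q '^# HEAT_HOSTS_START' "$file"; then
            # Markers already present: the managed block would be rewritten
            # in place (this branch is not exercised in the trace above).
            :
        else
            # Drop any stale lines mentioning this node (hostname -s), then
            # append the managed block between START/END markers.
            sed -i "/$(hostname -s)/d" "$file"
            echo -ne '\n# HEAT_HOSTS_START - Do not edit manually within this section!\n' >> "$file"
            echo "$entries" >> "$file"
            echo -ne '# HEAT_HOSTS_END\n\n' >> "$file"
        fi
    }

The sed-before-append step explains the `sed -i /controller-0/d` lines in the trace: each node deletes references to itself before the full entry list is re-added, so repeated runs do not accumulate duplicates.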
[2018-09-21 08:17:30,951] (heat-config) [DEBUG] [2018-09-21 08:17:30,918] (heat-config) [INFO] hosts=<same hosts entries as above>
[2018-09-21 08:17:30,918] (heat-config) [INFO] deploy_server_id=10bf651c-fd66-4074-9929-ddfdd495b40b
[2018-09-21 08:17:30,918] (heat-config) [INFO] deploy_action=CREATE
[2018-09-21 08:17:30,918] (heat-config) [INFO] deploy_stack_id=overcloud-ControllerHostsDeployment-iz7hxw3irwoi-0-gwfule6v2gdl/637dfda7-fe2f-4260-838f-5bd9cc2b3be3
[2018-09-21 08:17:30,918] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment
[2018-09-21 08:17:30,918] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL
[2018-09-21 08:17:30,918] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/82d05ffe-4599-4556-8a01-1bfd5c53e3cd
[2018-09-21 08:17:30,946] (heat-config) [INFO] 
[2018-09-21 08:17:30,947] (heat-config) [DEBUG] + set -o pipefail
+ '[' '!' -z '<same hosts entries as above>' ']'
+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'
+ write_entries /etc/cloud/templates/hosts.debian.tmpl '<same hosts entries as above>'
+ local file=/etc/cloud/templates/hosts.debian.tmpl
+ local 'entries=<same hosts entries as above>'
+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'
+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl
++ hostname -s
+ sed -i /controller-0/d /etc/cloud/templates/hosts.debian.tmpl
+ echo -ne '\n# HEAT_HOSTS_START - Do not edit manually within this section!\n'
+ echo '<same hosts entries as above>'
+ echo -ne '# HEAT_HOSTS_END\n\n'
(the same write_entries trace repeats verbatim for /etc/cloud/templates/hosts.freebsd.tmpl, /etc/cloud/templates/hosts.redhat.tmpl, /etc/cloud/templates/hosts.suse.tmpl and /etc/hosts)

[2018-09-21 08:17:30,947] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/82d05ffe-4599-4556-8a01-1bfd5c53e3cd
[2018-09-21 08:17:30,951] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script
[2018-09-21 08:17:30,952] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/82d05ffe-4599-4556-8a01-1bfd5c53e3cd.json < /var/lib/heat-config/deployed/82d05ffe-4599-4556-8a01-1bfd5c53e3cd.notify.json
[2018-09-21 08:17:31,332] (heat-config) [INFO] 
[2018-09-21 08:17:31,332] (heat-config) [DEBUG] 
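The heat-config lines above show the full software-deployment cycle on the node: the script hook consumes the stored deployment JSON, and heat-config-notify signals the result back to Heat. A sketch of reproducing this step by hand on the affected node, using only the two invocations recorded in the log (the deployment id 82d05ffe-4599-4556-8a01-1bfd5c53e3cd is the one from this run); the sed/getent checks at the end are an added, hypothetical way to confirm the managed block landed in /etc/hosts:

    # Re-run the hosts deployment exactly as os-collect-config did
    /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/82d05ffe-4599-4556-8a01-1bfd5c53e3cd.json

    # Signal the (re)computed result back to Heat
    heat-config-notify /var/lib/heat-config/deployed/82d05ffe-4599-4556-8a01-1bfd5c53e3cd.json \
        < /var/lib/heat-config/deployed/82d05ffe-4599-4556-8a01-1bfd5c53e3cd.notify.json

    # Print the marker-delimited block that write_entries manages
    sed -n '/^# HEAT_HOSTS_START/,/^# HEAT_HOSTS_END/p' /etc/hosts

    # Check that overcloud node names now resolve via /etc/hosts
    getent hosts controller-0.internalapi.localdomain ceph-0.storage.localdomain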
-z '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain 
ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain 
ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 
ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 0}", "[2018-09-21 08:17:30,951] (heat-config) [DEBUG] [2018-09-21 08:17:30,918] (heat-config) [INFO] hosts=192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.13 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.111 overcloud.localdomain", "172.17.1.17 controller-0.localdomain controller-0", "172.17.3.16 controller-0.storage.localdomain controller-0.storage", "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.117 controller-0.external.localdomain controller-0.external", "192.168.24.18 controller-0.management.localdomain controller-0.management", "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.11 ceph-0.localdomain ceph-0", "172.17.3.11 ceph-0.storage.localdomain 
ceph-0.storage", "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane", "[2018-09-21 08:17:30,918] (heat-config) [INFO] deploy_server_id=10bf651c-fd66-4074-9929-ddfdd495b40b", "[2018-09-21 08:17:30,918] (heat-config) [INFO] deploy_action=CREATE", "[2018-09-21 08:17:30,918] (heat-config) [INFO] deploy_stack_id=overcloud-ControllerHostsDeployment-iz7hxw3irwoi-0-gwfule6v2gdl/637dfda7-fe2f-4260-838f-5bd9cc2b3be3", "[2018-09-21 08:17:30,918] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-09-21 08:17:30,918] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-09-21 08:17:30,918] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/82d05ffe-4599-4556-8a01-1bfd5c53e3cd", "[2018-09-21 08:17:30,946] (heat-config) [INFO] ", "[2018-09-21 08:17:30,947] (heat-config) [DEBUG] + set -o pipefail", "+ '[' '!' -z '192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.13 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.111 overcloud.localdomain", "172.17.1.17 controller-0.localdomain controller-0", "172.17.3.16 controller-0.storage.localdomain controller-0.storage", "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.117 controller-0.external.localdomain controller-0.external", "192.168.24.18 controller-0.management.localdomain controller-0.management", "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.11 ceph-0.localdomain ceph-0", "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.13 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.111 overcloud.localdomain", "172.17.1.17 controller-0.localdomain controller-0", "172.17.3.16 controller-0.storage.localdomain controller-0.storage", "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.17 
controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.117 controller-0.external.localdomain controller-0.external", "192.168.24.18 controller-0.management.localdomain controller-0.management", "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.11 ceph-0.localdomain ceph-0", "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.debian.tmpl", "+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.13 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.111 overcloud.localdomain", "172.17.1.17 controller-0.localdomain controller-0", "172.17.3.16 controller-0.storage.localdomain controller-0.storage", "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.117 controller-0.external.localdomain controller-0.external", "192.168.24.18 controller-0.management.localdomain controller-0.management", "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.11 ceph-0.localdomain ceph-0", "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' 
-f /etc/cloud/templates/hosts.debian.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl", "++ hostname -s", "+ sed -i /controller-0/d /etc/cloud/templates/hosts.debian.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.13 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.111 overcloud.localdomain", "172.17.1.17 controller-0.localdomain controller-0", "172.17.3.16 controller-0.storage.localdomain controller-0.storage", "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.117 controller-0.external.localdomain controller-0.external", "192.168.24.18 controller-0.management.localdomain controller-0.management", "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.11 ceph-0.localdomain ceph-0", "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.13 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.111 overcloud.localdomain", "172.17.1.17 controller-0.localdomain controller-0", "172.17.3.16 controller-0.storage.localdomain controller-0.storage", "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.117 controller-0.external.localdomain controller-0.external", "192.168.24.18 controller-0.management.localdomain controller-0.management", "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", 
"", "", "172.17.3.11 ceph-0.localdomain ceph-0", "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.freebsd.tmpl", "+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.13 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.111 overcloud.localdomain", "172.17.1.17 controller-0.localdomain controller-0", "172.17.3.16 controller-0.storage.localdomain controller-0.storage", "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.117 controller-0.external.localdomain controller-0.external", "192.168.24.18 controller-0.management.localdomain controller-0.management", "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.11 ceph-0.localdomain ceph-0", "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl", "++ hostname -s", "+ sed -i /controller-0/d /etc/cloud/templates/hosts.freebsd.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.13 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.111 overcloud.localdomain", "172.17.1.17 controller-0.localdomain controller-0", "172.17.3.16 controller-0.storage.localdomain controller-0.storage", "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.117 controller-0.external.localdomain controller-0.external", "192.168.24.18 controller-0.management.localdomain controller-0.management", "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.11 ceph-0.localdomain ceph-0", "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.13 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.111 overcloud.localdomain", "172.17.1.17 controller-0.localdomain controller-0", "172.17.3.16 controller-0.storage.localdomain controller-0.storage", "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.117 controller-0.external.localdomain controller-0.external", "192.168.24.18 controller-0.management.localdomain controller-0.management", "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", 
"", "", "172.17.3.11 ceph-0.localdomain ceph-0", "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.redhat.tmpl", "+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.13 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.111 overcloud.localdomain", "172.17.1.17 controller-0.localdomain controller-0", "172.17.3.16 controller-0.storage.localdomain controller-0.storage", "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.117 controller-0.external.localdomain controller-0.external", "192.168.24.18 controller-0.management.localdomain controller-0.management", "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.11 ceph-0.localdomain ceph-0", "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl", "++ hostname -s", "+ sed -i /controller-0/d /etc/cloud/templates/hosts.redhat.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.13 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.111 overcloud.localdomain", "172.17.1.17 controller-0.localdomain controller-0", "172.17.3.16 controller-0.storage.localdomain controller-0.storage", "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.117 controller-0.external.localdomain controller-0.external", "192.168.24.18 controller-0.management.localdomain controller-0.management", "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.11 ceph-0.localdomain ceph-0", "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.13 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.111 overcloud.localdomain", "172.17.1.17 controller-0.localdomain controller-0", "172.17.3.16 controller-0.storage.localdomain controller-0.storage", "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.117 controller-0.external.localdomain controller-0.external", "192.168.24.18 controller-0.management.localdomain controller-0.management", "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", 
"", "172.17.3.11 ceph-0.localdomain ceph-0", "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.suse.tmpl", "+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.13 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.111 overcloud.localdomain", "172.17.1.17 controller-0.localdomain controller-0", "172.17.3.16 controller-0.storage.localdomain controller-0.storage", "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.117 controller-0.external.localdomain controller-0.external", "192.168.24.18 controller-0.management.localdomain controller-0.management", "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.11 ceph-0.localdomain ceph-0", "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl", "++ hostname -s", "+ sed -i /controller-0/d /etc/cloud/templates/hosts.suse.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.13 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.111 overcloud.localdomain", "172.17.1.17 controller-0.localdomain controller-0", "172.17.3.16 controller-0.storage.localdomain controller-0.storage", "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.117 controller-0.external.localdomain controller-0.external", "192.168.24.18 controller-0.management.localdomain controller-0.management", "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.11 ceph-0.localdomain ceph-0", "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ write_entries /etc/hosts '192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.13 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.111 overcloud.localdomain", "172.17.1.17 controller-0.localdomain controller-0", "172.17.3.16 controller-0.storage.localdomain controller-0.storage", "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.117 controller-0.external.localdomain controller-0.external", "192.168.24.18 controller-0.management.localdomain controller-0.management", "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.11 ceph-0.localdomain ceph-0", "172.17.3.11 ceph-0.storage.localdomain 
ceph-0.storage", "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/hosts", "+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.13 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.111 overcloud.localdomain", "172.17.1.17 controller-0.localdomain controller-0", "172.17.3.16 controller-0.storage.localdomain controller-0.storage", "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.117 controller-0.external.localdomain controller-0.external", "192.168.24.18 controller-0.management.localdomain controller-0.management", "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.11 ceph-0.localdomain ceph-0", "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' 
-f /etc/hosts ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/hosts", "++ hostname -s", "+ sed -i /controller-0/d /etc/hosts", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.13 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.111 overcloud.localdomain", "172.17.1.17 controller-0.localdomain controller-0", "172.17.3.16 controller-0.storage.localdomain controller-0.storage", "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.117 controller-0.external.localdomain controller-0.external", "192.168.24.18 controller-0.management.localdomain controller-0.management", "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.11 ceph-0.localdomain ceph-0", "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "", "[2018-09-21 08:17:30,947] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/82d05ffe-4599-4556-8a01-1bfd5c53e3cd", "", "[2018-09-21 08:17:30,951] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-09-21 08:17:30,952] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/82d05ffe-4599-4556-8a01-1bfd5c53e3cd.json < /var/lib/heat-config/deployed/82d05ffe-4599-4556-8a01-1bfd5c53e3cd.notify.json", "[2018-09-21 08:17:31,332] (heat-config) [INFO] ", "[2018-09-21 08:17:31,332] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} > >TASK [Output for ControllerHostsDeployment] ************************************ >Friday 21 September 2018 08:17:31 -0400 (0:00:00.752) 0:00:53.889 ****** >ok: [controller-0] => { > "msg": [ > { > "stderr": [ > "[2018-09-21 08:17:30,896] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/82d05ffe-4599-4556-8a01-1bfd5c53e3cd.json", > "[2018-09-21 08:17:30,951] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' 
-z '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain 
ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain 
ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 
ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 0}", > "[2018-09-21 08:17:30,951] (heat-config) [DEBUG] [2018-09-21 08:17:30,918] (heat-config) [INFO] hosts=192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.13 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.111 overcloud.localdomain", > "172.17.1.17 controller-0.localdomain controller-0", > "172.17.3.16 controller-0.storage.localdomain controller-0.storage", > "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.117 controller-0.external.localdomain controller-0.external", > "192.168.24.18 controller-0.management.localdomain controller-0.management", > "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.11 ceph-0.localdomain 
ceph-0", > "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane", > "[2018-09-21 08:17:30,918] (heat-config) [INFO] deploy_server_id=10bf651c-fd66-4074-9929-ddfdd495b40b", > "[2018-09-21 08:17:30,918] (heat-config) [INFO] deploy_action=CREATE", > "[2018-09-21 08:17:30,918] (heat-config) [INFO] deploy_stack_id=overcloud-ControllerHostsDeployment-iz7hxw3irwoi-0-gwfule6v2gdl/637dfda7-fe2f-4260-838f-5bd9cc2b3be3", > "[2018-09-21 08:17:30,918] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-09-21 08:17:30,918] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-09-21 08:17:30,918] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/82d05ffe-4599-4556-8a01-1bfd5c53e3cd", > "[2018-09-21 08:17:30,946] (heat-config) [INFO] ", > "[2018-09-21 08:17:30,947] (heat-config) [DEBUG] + set -o pipefail", > "+ '[' '!' -z '192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.13 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.111 overcloud.localdomain", > "172.17.1.17 controller-0.localdomain controller-0", > "172.17.3.16 controller-0.storage.localdomain controller-0.storage", > "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.117 controller-0.external.localdomain controller-0.external", > "192.168.24.18 controller-0.management.localdomain controller-0.management", > "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.11 ceph-0.localdomain ceph-0", > "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.13 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.111 overcloud.localdomain", > "172.17.1.17 controller-0.localdomain controller-0", > "172.17.3.16 
controller-0.storage.localdomain controller-0.storage", > "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.117 controller-0.external.localdomain controller-0.external", > "192.168.24.18 controller-0.management.localdomain controller-0.management", > "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.11 ceph-0.localdomain ceph-0", > "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.debian.tmpl", > "+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.13 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.111 overcloud.localdomain", > "172.17.1.17 controller-0.localdomain controller-0", > "172.17.3.16 controller-0.storage.localdomain controller-0.storage", > "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.117 controller-0.external.localdomain controller-0.external", > "192.168.24.18 controller-0.management.localdomain controller-0.management", > "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.11 ceph-0.localdomain ceph-0", > "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' 
-f /etc/cloud/templates/hosts.debian.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl", > "++ hostname -s", > "+ sed -i /controller-0/d /etc/cloud/templates/hosts.debian.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.13 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.111 overcloud.localdomain", > "172.17.1.17 controller-0.localdomain controller-0", > "172.17.3.16 controller-0.storage.localdomain controller-0.storage", > "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.117 controller-0.external.localdomain controller-0.external", > "192.168.24.18 controller-0.management.localdomain controller-0.management", > "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.11 ceph-0.localdomain ceph-0", > "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.13 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.111 overcloud.localdomain", > "172.17.1.17 controller-0.localdomain controller-0", > "172.17.3.16 controller-0.storage.localdomain controller-0.storage", > "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.117 controller-0.external.localdomain controller-0.external", > "192.168.24.18 controller-0.management.localdomain controller-0.management", > "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 
compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.11 ceph-0.localdomain ceph-0", > "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.freebsd.tmpl", > "+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.13 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.111 overcloud.localdomain", > "172.17.1.17 controller-0.localdomain controller-0", > "172.17.3.16 controller-0.storage.localdomain controller-0.storage", > "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.117 controller-0.external.localdomain controller-0.external", > "192.168.24.18 controller-0.management.localdomain controller-0.management", > "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.11 ceph-0.localdomain ceph-0", > "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl", > "++ hostname -s", > "+ sed -i /controller-0/d /etc/cloud/templates/hosts.freebsd.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.13 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.111 overcloud.localdomain", > "172.17.1.17 controller-0.localdomain controller-0", > "172.17.3.16 controller-0.storage.localdomain controller-0.storage", > "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.117 controller-0.external.localdomain controller-0.external", > "192.168.24.18 controller-0.management.localdomain controller-0.management", > "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.11 ceph-0.localdomain ceph-0", > "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.13 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.111 overcloud.localdomain", > "172.17.1.17 controller-0.localdomain controller-0", > "172.17.3.16 controller-0.storage.localdomain controller-0.storage", > "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.117 controller-0.external.localdomain controller-0.external", > "192.168.24.18 controller-0.management.localdomain controller-0.management", > "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 
compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.11 ceph-0.localdomain ceph-0", > "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.redhat.tmpl", > "+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.13 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.111 overcloud.localdomain", > "172.17.1.17 controller-0.localdomain controller-0", > "172.17.3.16 controller-0.storage.localdomain controller-0.storage", > "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.117 controller-0.external.localdomain controller-0.external", > "192.168.24.18 controller-0.management.localdomain controller-0.management", > "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.11 ceph-0.localdomain ceph-0", > "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl", > "++ hostname -s", > "+ sed -i /controller-0/d /etc/cloud/templates/hosts.redhat.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.13 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.111 overcloud.localdomain", > "172.17.1.17 controller-0.localdomain controller-0", > "172.17.3.16 controller-0.storage.localdomain controller-0.storage", > "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.117 controller-0.external.localdomain controller-0.external", > "192.168.24.18 controller-0.management.localdomain controller-0.management", > "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.11 ceph-0.localdomain ceph-0", > "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.13 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.111 overcloud.localdomain", > "172.17.1.17 controller-0.localdomain controller-0", > "172.17.3.16 controller-0.storage.localdomain controller-0.storage", > "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.117 controller-0.external.localdomain controller-0.external", > "192.168.24.18 controller-0.management.localdomain controller-0.management", > "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 
compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.11 ceph-0.localdomain ceph-0", > "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.suse.tmpl", > "+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.13 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.111 overcloud.localdomain", > "172.17.1.17 controller-0.localdomain controller-0", > "172.17.3.16 controller-0.storage.localdomain controller-0.storage", > "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.117 controller-0.external.localdomain controller-0.external", > "192.168.24.18 controller-0.management.localdomain controller-0.management", > "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.11 ceph-0.localdomain ceph-0", > "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl", > "++ hostname -s", > "+ sed -i /controller-0/d /etc/cloud/templates/hosts.suse.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.13 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.111 overcloud.localdomain", > "172.17.1.17 controller-0.localdomain controller-0", > "172.17.3.16 controller-0.storage.localdomain controller-0.storage", > "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.117 controller-0.external.localdomain controller-0.external", > "192.168.24.18 controller-0.management.localdomain controller-0.management", > "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.11 ceph-0.localdomain ceph-0", > "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ write_entries /etc/hosts '192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.13 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.111 overcloud.localdomain", > "172.17.1.17 controller-0.localdomain controller-0", > "172.17.3.16 controller-0.storage.localdomain controller-0.storage", > "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.117 controller-0.external.localdomain controller-0.external", > "192.168.24.18 controller-0.management.localdomain controller-0.management", > "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 
compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.11 ceph-0.localdomain ceph-0", > "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/hosts", > "+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.13 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.111 overcloud.localdomain", > "172.17.1.17 controller-0.localdomain controller-0", > "172.17.3.16 controller-0.storage.localdomain controller-0.storage", > "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.117 controller-0.external.localdomain controller-0.external", > "192.168.24.18 controller-0.management.localdomain controller-0.management", > "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.11 ceph-0.localdomain ceph-0", > "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' 
-f /etc/hosts ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/hosts", > "++ hostname -s", > "+ sed -i /controller-0/d /etc/hosts", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.13 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.111 overcloud.localdomain", > "172.17.1.17 controller-0.localdomain controller-0", > "172.17.3.16 controller-0.storage.localdomain controller-0.storage", > "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.117 controller-0.external.localdomain controller-0.external", > "192.168.24.18 controller-0.management.localdomain controller-0.management", > "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.11 ceph-0.localdomain ceph-0", > "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "", > "[2018-09-21 08:17:30,947] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/82d05ffe-4599-4556-8a01-1bfd5c53e3cd", > "", > "[2018-09-21 08:17:30,951] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-09-21 08:17:30,952] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/82d05ffe-4599-4556-8a01-1bfd5c53e3cd.json < /var/lib/heat-config/deployed/82d05ffe-4599-4556-8a01-1bfd5c53e3cd.notify.json", > "[2018-09-21 08:17:31,332] (heat-config) [INFO] ", > "[2018-09-21 08:17:31,332] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} > >TASK [Check-mode for Run deployment ControllerHostsDeployment] ***************** >Friday 21 September 2018 08:17:31 -0400 (0:00:00.116) 0:00:54.005 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Lookup deployment UUID] ************************************************** >Friday 21 September 2018 08:17:31 -0400 (0:00:00.037) 0:00:54.043 ****** >ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "2587a15a-c828-4c55-bf20-d418410e7721"}, "changed": false} > >TASK [Render deployment file for ControllerAllNodesDeployment] ***************** >Friday 21 September 2018 08:17:31 -0400 (0:00:00.177) 0:00:54.221 ****** >changed: [controller-0] => {"changed": true, "checksum": "cb595a3054ca1f842f4bf6791a98acfdc6339e29", 
"dest": "/var/lib/heat-config/tripleo-config-download/ControllerAllNodesDeployment-2587a15a-c828-4c55-bf20-d418410e7721", "gid": 0, "group": "root", "md5sum": "9144f768df4d4ad871d32178787aa5fc", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 19544, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532251.91-62163830009014/source", "state": "file", "uid": 0} > >TASK [Check if deployed file exists for ControllerAllNodesDeployment] ********** >Friday 21 September 2018 08:17:32 -0400 (0:00:00.709) 0:00:54.930 ****** >ok: [controller-0] => {"changed": false, "stat": {"exists": false}} > >TASK [Check previous deployment rc for ControllerAllNodesDeployment] *********** >Friday 21 September 2018 08:17:32 -0400 (0:00:00.228) 0:00:55.159 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Remove deployed file for ControllerAllNodesDeployment when previous deployment failed] *** >Friday 21 September 2018 08:17:32 -0400 (0:00:00.051) 0:00:55.211 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Force remove deployed file for ControllerAllNodesDeployment] ************* >Friday 21 September 2018 08:17:32 -0400 (0:00:00.047) 0:00:55.258 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Run deployment ControllerAllNodesDeployment] ***************************** >Friday 21 September 2018 08:17:32 -0400 (0:00:00.044) 0:00:55.302 ****** >changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/2587a15a-c828-4c55-bf20-d418410e7721.notify.json)", "delta": "0:00:00.573291", "end": "2018-09-21 08:17:33.604129", "rc": 0, "start": "2018-09-21 08:17:33.030838", "stderr": "[2018-09-21 08:17:33,060] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/2587a15a-c828-4c55-bf20-d418410e7721.json\n[2018-09-21 08:17:33,192] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-09-21 08:17:33,192] (heat-config) [DEBUG] \n[2018-09-21 08:17:33,193] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera\n[2018-09-21 08:17:33,193] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/2587a15a-c828-4c55-bf20-d418410e7721.json < /var/lib/heat-config/deployed/2587a15a-c828-4c55-bf20-d418410e7721.notify.json\n[2018-09-21 08:17:33,597] (heat-config) [INFO] \n[2018-09-21 08:17:33,597] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-09-21 08:17:33,060] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/2587a15a-c828-4c55-bf20-d418410e7721.json", "[2018-09-21 08:17:33,192] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-09-21 08:17:33,192] (heat-config) [DEBUG] ", "[2018-09-21 08:17:33,193] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", "[2018-09-21 08:17:33,193] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/2587a15a-c828-4c55-bf20-d418410e7721.json < /var/lib/heat-config/deployed/2587a15a-c828-4c55-bf20-d418410e7721.notify.json", "[2018-09-21 08:17:33,597] (heat-config) [INFO] ", "[2018-09-21 08:17:33,597] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} > >TASK [Output for 
ControllerAllNodesDeployment] ********************************* >Friday 21 September 2018 08:17:33 -0400 (0:00:00.804) 0:00:56.107 ****** >ok: [controller-0] => { > "msg": [ > { > "stderr": [ > "[2018-09-21 08:17:33,060] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/2587a15a-c828-4c55-bf20-d418410e7721.json", > "[2018-09-21 08:17:33,192] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-09-21 08:17:33,192] (heat-config) [DEBUG] ", > "[2018-09-21 08:17:33,193] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", > "[2018-09-21 08:17:33,193] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/2587a15a-c828-4c55-bf20-d418410e7721.json < /var/lib/heat-config/deployed/2587a15a-c828-4c55-bf20-d418410e7721.notify.json", > "[2018-09-21 08:17:33,597] (heat-config) [INFO] ", > "[2018-09-21 08:17:33,597] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} > >TASK [Check-mode for Run deployment ControllerAllNodesDeployment] ************** >Friday 21 September 2018 08:17:33 -0400 (0:00:00.082) 0:00:56.190 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Lookup deployment UUID] ************************************************** >Friday 21 September 2018 08:17:33 -0400 (0:00:00.038) 0:00:56.228 ****** >ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "f417ec73-d0c6-4033-a55d-ff5e489d7c22"}, "changed": false} > >TASK [Render deployment file for ControllerAllNodesValidationDeployment] ******* >Friday 21 September 2018 08:17:33 -0400 (0:00:00.086) 0:00:56.315 ****** >changed: [controller-0] => {"changed": true, "checksum": "40ba1679d6ca0d52dca11360bf92320dd6886fa5", "dest": "/var/lib/heat-config/tripleo-config-download/ControllerAllNodesValidationDeployment-f417ec73-d0c6-4033-a55d-ff5e489d7c22", "gid": 0, "group": "root", "md5sum": "ea0cf05fda7bad2084f7922cfca31e5f", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 4941, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532253.89-162851279274219/source", "state": "file", "uid": 0} > >TASK [Check if deployed file exists for ControllerAllNodesValidationDeployment] *** >Friday 21 September 2018 08:17:34 -0400 (0:00:00.565) 0:00:56.881 ****** >ok: [controller-0] => {"changed": false, "stat": {"exists": false}} > >TASK [Check previous deployment rc for ControllerAllNodesValidationDeployment] *** >Friday 21 September 2018 08:17:34 -0400 (0:00:00.230) 0:00:57.111 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Remove deployed file for ControllerAllNodesValidationDeployment when previous deployment failed] *** >Friday 21 September 2018 08:17:34 -0400 (0:00:00.048) 0:00:57.159 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Force remove deployed file for ControllerAllNodesValidationDeployment] *** >Friday 21 September 2018 08:17:34 -0400 (0:00:00.051) 0:00:57.210 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Run deployment ControllerAllNodesValidationDeployment] ******************* >Friday 21 September 2018 08:17:34 -0400 (0:00:00.045) 0:00:57.256 ****** >changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq 
.deploy_status_code /var/lib/heat-config/deployed/f417ec73-d0c6-4033-a55d-ff5e489d7c22.notify.json)", "delta": "0:00:01.238716", "end": "2018-09-21 08:17:36.224395", "rc": 0, "start": "2018-09-21 08:17:34.985679", "stderr": "[2018-09-21 08:17:35,013] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/f417ec73-d0c6-4033-a55d-ff5e489d7c22.json\n[2018-09-21 08:17:35,791] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 10.0.0.117 for local network 10.0.0.0/24.\\nPing to 10.0.0.117 succeeded.\\nSUCCESS\\nTrying to ping 172.17.1.17 for local network 172.17.1.0/24.\\nPing to 172.17.1.17 succeeded.\\nSUCCESS\\nTrying to ping 172.17.2.22 for local network 172.17.2.0/24.\\nPing to 172.17.2.22 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.16 for local network 172.17.3.0/24.\\nPing to 172.17.3.16 succeeded.\\nSUCCESS\\nTrying to ping 172.17.4.14 for local network 172.17.4.0/24.\\nPing to 172.17.4.14 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.18 for local network 192.168.24.0/24.\\nPing to 192.168.24.18 succeeded.\\nSUCCESS\\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-09-21 08:17:35,792] (heat-config) [DEBUG] [2018-09-21 08:17:35,038] (heat-config) [INFO] ping_test_ips=172.17.3.16 172.17.4.14 172.17.1.17 172.17.2.22 10.0.0.117 192.168.24.18\n[2018-09-21 08:17:35,039] (heat-config) [INFO] validate_fqdn=False\n[2018-09-21 08:17:35,039] (heat-config) [INFO] validate_ntp=True\n[2018-09-21 08:17:35,039] (heat-config) [INFO] deploy_server_id=10bf651c-fd66-4074-9929-ddfdd495b40b\n[2018-09-21 08:17:35,039] (heat-config) [INFO] deploy_action=CREATE\n[2018-09-21 08:17:35,039] (heat-config) [INFO] deploy_stack_id=overcloud-ControllerAllNodesValidationDeployment-bmn32qp3esdo-0-42kc4bman7fh/e508c96e-0184-41a1-b1b1-44bd909bd7a4\n[2018-09-21 08:17:35,039] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-09-21 08:17:35,039] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-09-21 08:17:35,039] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/f417ec73-d0c6-4033-a55d-ff5e489d7c22\n[2018-09-21 08:17:35,787] (heat-config) [INFO] Trying to ping 10.0.0.117 for local network 10.0.0.0/24.\nPing to 10.0.0.117 succeeded.\nSUCCESS\nTrying to ping 172.17.1.17 for local network 172.17.1.0/24.\nPing to 172.17.1.17 succeeded.\nSUCCESS\nTrying to ping 172.17.2.22 for local network 172.17.2.0/24.\nPing to 172.17.2.22 succeeded.\nSUCCESS\nTrying to ping 172.17.3.16 for local network 172.17.3.0/24.\nPing to 172.17.3.16 succeeded.\nSUCCESS\nTrying to ping 172.17.4.14 for local network 172.17.4.0/24.\nPing to 172.17.4.14 succeeded.\nSUCCESS\nTrying to ping 192.168.24.18 for local network 192.168.24.0/24.\nPing to 192.168.24.18 succeeded.\nSUCCESS\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\nSUCCESS\n\n[2018-09-21 08:17:35,787] (heat-config) [DEBUG] \n[2018-09-21 08:17:35,787] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/f417ec73-d0c6-4033-a55d-ff5e489d7c22\n\n[2018-09-21 08:17:35,792] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-09-21 08:17:35,792] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/f417ec73-d0c6-4033-a55d-ff5e489d7c22.json < /var/lib/heat-config/deployed/f417ec73-d0c6-4033-a55d-ff5e489d7c22.notify.json\n[2018-09-21 08:17:36,218] (heat-config) [INFO] \n[2018-09-21 08:17:36,218] (heat-config) 
[DEBUG] ", "stderr_lines": ["[2018-09-21 08:17:35,013] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/f417ec73-d0c6-4033-a55d-ff5e489d7c22.json", "[2018-09-21 08:17:35,791] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 10.0.0.117 for local network 10.0.0.0/24.\\nPing to 10.0.0.117 succeeded.\\nSUCCESS\\nTrying to ping 172.17.1.17 for local network 172.17.1.0/24.\\nPing to 172.17.1.17 succeeded.\\nSUCCESS\\nTrying to ping 172.17.2.22 for local network 172.17.2.0/24.\\nPing to 172.17.2.22 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.16 for local network 172.17.3.0/24.\\nPing to 172.17.3.16 succeeded.\\nSUCCESS\\nTrying to ping 172.17.4.14 for local network 172.17.4.0/24.\\nPing to 172.17.4.14 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.18 for local network 192.168.24.0/24.\\nPing to 192.168.24.18 succeeded.\\nSUCCESS\\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-09-21 08:17:35,792] (heat-config) [DEBUG] [2018-09-21 08:17:35,038] (heat-config) [INFO] ping_test_ips=172.17.3.16 172.17.4.14 172.17.1.17 172.17.2.22 10.0.0.117 192.168.24.18", "[2018-09-21 08:17:35,039] (heat-config) [INFO] validate_fqdn=False", "[2018-09-21 08:17:35,039] (heat-config) [INFO] validate_ntp=True", "[2018-09-21 08:17:35,039] (heat-config) [INFO] deploy_server_id=10bf651c-fd66-4074-9929-ddfdd495b40b", "[2018-09-21 08:17:35,039] (heat-config) [INFO] deploy_action=CREATE", "[2018-09-21 08:17:35,039] (heat-config) [INFO] deploy_stack_id=overcloud-ControllerAllNodesValidationDeployment-bmn32qp3esdo-0-42kc4bman7fh/e508c96e-0184-41a1-b1b1-44bd909bd7a4", "[2018-09-21 08:17:35,039] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-09-21 08:17:35,039] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-09-21 08:17:35,039] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/f417ec73-d0c6-4033-a55d-ff5e489d7c22", "[2018-09-21 08:17:35,787] (heat-config) [INFO] Trying to ping 10.0.0.117 for local network 10.0.0.0/24.", "Ping to 10.0.0.117 succeeded.", "SUCCESS", "Trying to ping 172.17.1.17 for local network 172.17.1.0/24.", "Ping to 172.17.1.17 succeeded.", "SUCCESS", "Trying to ping 172.17.2.22 for local network 172.17.2.0/24.", "Ping to 172.17.2.22 succeeded.", "SUCCESS", "Trying to ping 172.17.3.16 for local network 172.17.3.0/24.", "Ping to 172.17.3.16 succeeded.", "SUCCESS", "Trying to ping 172.17.4.14 for local network 172.17.4.0/24.", "Ping to 172.17.4.14 succeeded.", "SUCCESS", "Trying to ping 192.168.24.18 for local network 192.168.24.0/24.", "Ping to 192.168.24.18 succeeded.", "SUCCESS", "Trying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.", "SUCCESS", "", "[2018-09-21 08:17:35,787] (heat-config) [DEBUG] ", "[2018-09-21 08:17:35,787] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/f417ec73-d0c6-4033-a55d-ff5e489d7c22", "", "[2018-09-21 08:17:35,792] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-09-21 08:17:35,792] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/f417ec73-d0c6-4033-a55d-ff5e489d7c22.json < /var/lib/heat-config/deployed/f417ec73-d0c6-4033-a55d-ff5e489d7c22.notify.json", "[2018-09-21 08:17:36,218] (heat-config) [INFO] ", "[2018-09-21 08:17:36,218] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} > >TASK [Output for ControllerAllNodesValidationDeployment] 
*********************** >Friday 21 September 2018 08:17:36 -0400 (0:00:01.471) 0:00:58.727 ****** >ok: [controller-0] => { > "msg": [ > { > "stderr": [ > "[2018-09-21 08:17:35,013] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/f417ec73-d0c6-4033-a55d-ff5e489d7c22.json", > "[2018-09-21 08:17:35,791] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 10.0.0.117 for local network 10.0.0.0/24.\\nPing to 10.0.0.117 succeeded.\\nSUCCESS\\nTrying to ping 172.17.1.17 for local network 172.17.1.0/24.\\nPing to 172.17.1.17 succeeded.\\nSUCCESS\\nTrying to ping 172.17.2.22 for local network 172.17.2.0/24.\\nPing to 172.17.2.22 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.16 for local network 172.17.3.0/24.\\nPing to 172.17.3.16 succeeded.\\nSUCCESS\\nTrying to ping 172.17.4.14 for local network 172.17.4.0/24.\\nPing to 172.17.4.14 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.18 for local network 192.168.24.0/24.\\nPing to 192.168.24.18 succeeded.\\nSUCCESS\\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-09-21 08:17:35,792] (heat-config) [DEBUG] [2018-09-21 08:17:35,038] (heat-config) [INFO] ping_test_ips=172.17.3.16 172.17.4.14 172.17.1.17 172.17.2.22 10.0.0.117 192.168.24.18", > "[2018-09-21 08:17:35,039] (heat-config) [INFO] validate_fqdn=False", > "[2018-09-21 08:17:35,039] (heat-config) [INFO] validate_ntp=True", > "[2018-09-21 08:17:35,039] (heat-config) [INFO] deploy_server_id=10bf651c-fd66-4074-9929-ddfdd495b40b", > "[2018-09-21 08:17:35,039] (heat-config) [INFO] deploy_action=CREATE", > "[2018-09-21 08:17:35,039] (heat-config) [INFO] deploy_stack_id=overcloud-ControllerAllNodesValidationDeployment-bmn32qp3esdo-0-42kc4bman7fh/e508c96e-0184-41a1-b1b1-44bd909bd7a4", > "[2018-09-21 08:17:35,039] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-09-21 08:17:35,039] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-09-21 08:17:35,039] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/f417ec73-d0c6-4033-a55d-ff5e489d7c22", > "[2018-09-21 08:17:35,787] (heat-config) [INFO] Trying to ping 10.0.0.117 for local network 10.0.0.0/24.", > "Ping to 10.0.0.117 succeeded.", > "SUCCESS", > "Trying to ping 172.17.1.17 for local network 172.17.1.0/24.", > "Ping to 172.17.1.17 succeeded.", > "SUCCESS", > "Trying to ping 172.17.2.22 for local network 172.17.2.0/24.", > "Ping to 172.17.2.22 succeeded.", > "SUCCESS", > "Trying to ping 172.17.3.16 for local network 172.17.3.0/24.", > "Ping to 172.17.3.16 succeeded.", > "SUCCESS", > "Trying to ping 172.17.4.14 for local network 172.17.4.0/24.", > "Ping to 172.17.4.14 succeeded.", > "SUCCESS", > "Trying to ping 192.168.24.18 for local network 192.168.24.0/24.", > "Ping to 192.168.24.18 succeeded.", > "SUCCESS", > "Trying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.", > "SUCCESS", > "", > "[2018-09-21 08:17:35,787] (heat-config) [DEBUG] ", > "[2018-09-21 08:17:35,787] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/f417ec73-d0c6-4033-a55d-ff5e489d7c22", > "", > "[2018-09-21 08:17:35,792] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-09-21 08:17:35,792] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/f417ec73-d0c6-4033-a55d-ff5e489d7c22.json < /var/lib/heat-config/deployed/f417ec73-d0c6-4033-a55d-ff5e489d7c22.notify.json", > 
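
The AllNodesValidation output above pings one peer address on each network local to the node, then the default gateway. A rough stand-in for that check, assuming ping_test_ips is injected by the deployment (values copied from the log) and simplifying away the local-subnet matching the real validation script performs:

    #!/bin/bash
    # Values taken from the ping_test_ips line in the log above.
    ping_test_ips="172.17.3.16 172.17.4.14 172.17.1.17 172.17.2.22 10.0.0.117 192.168.24.18"
    rc=0
    for ip in $ping_test_ips; do
        echo "Trying to ping $ip."
        if ping -c 1 -w 10 "$ip" >/dev/null 2>&1; then
            echo "Ping to $ip succeeded."
            echo "SUCCESS"
        else
            echo "Ping to $ip failed."
            rc=1
        fi
    done
    # Mirrors the 'Trying to ping default gateway ...' line in the output.
    gw=$(ip -4 route show default | awk '{print $3; exit}')
    if [ -n "$gw" ]; then
        echo -n "Trying to ping default gateway $gw..."
        if ping -c 1 -w 10 "$gw" >/dev/null 2>&1; then
            echo "Ping to $gw succeeded."
            echo "SUCCESS"
        else
            rc=1
        fi
    fi
    exit $rc

The validate_fqdn=False and validate_ntp=True settings in the same output control the optional FQDN-resolution and NTP checks alongside the ping tests.
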
"[2018-09-21 08:17:36,218] (heat-config) [INFO] ", > "[2018-09-21 08:17:36,218] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} > >TASK [Check-mode for Run deployment ControllerAllNodesValidationDeployment] **** >Friday 21 September 2018 08:17:36 -0400 (0:00:00.090) 0:00:58.817 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Lookup deployment UUID] ************************************************** >Friday 21 September 2018 08:17:36 -0400 (0:00:00.042) 0:00:58.860 ****** >ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "24e8e89e-421b-4ea8-baa2-243fd4773a32"}, "changed": false} > >TASK [Render deployment file for ControllerArtifactsDeploy] ******************** >Friday 21 September 2018 08:17:36 -0400 (0:00:00.084) 0:00:58.945 ****** >changed: [controller-0] => {"changed": true, "checksum": "ecad37e9a93d24dc3ec671b7affba8f349f92361", "dest": "/var/lib/heat-config/tripleo-config-download/ControllerArtifactsDeploy-24e8e89e-421b-4ea8-baa2-243fd4773a32", "gid": 0, "group": "root", "md5sum": "3d0db96f815bda3dc3728a4e8e0f62f2", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2021, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532256.52-105274500522498/source", "state": "file", "uid": 0} > >TASK [Check if deployed file exists for ControllerArtifactsDeploy] ************* >Friday 21 September 2018 08:17:37 -0400 (0:00:00.547) 0:00:59.492 ****** >ok: [controller-0] => {"changed": false, "stat": {"exists": false}} > >TASK [Check previous deployment rc for ControllerArtifactsDeploy] ************** >Friday 21 September 2018 08:17:37 -0400 (0:00:00.227) 0:00:59.719 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Remove deployed file for ControllerArtifactsDeploy when previous deployment failed] *** >Friday 21 September 2018 08:17:37 -0400 (0:00:00.040) 0:00:59.760 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Force remove deployed file for ControllerArtifactsDeploy] **************** >Friday 21 September 2018 08:17:37 -0400 (0:00:00.045) 0:00:59.806 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Run deployment ControllerArtifactsDeploy] ******************************** >Friday 21 September 2018 08:17:37 -0400 (0:00:00.044) 0:00:59.851 ****** >changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/24e8e89e-421b-4ea8-baa2-243fd4773a32.notify.json)", "delta": "0:00:00.425372", "end": "2018-09-21 08:17:37.998104", "rc": 0, "start": "2018-09-21 08:17:37.572732", "stderr": "[2018-09-21 08:17:37,598] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/24e8e89e-421b-4ea8-baa2-243fd4773a32.json\n[2018-09-21 08:17:37,629] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. 
Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-09-21 08:17:37,629] (heat-config) [DEBUG] [2018-09-21 08:17:37,620] (heat-config) [INFO] artifact_urls=\n[2018-09-21 08:17:37,620] (heat-config) [INFO] deploy_server_id=10bf651c-fd66-4074-9929-ddfdd495b40b\n[2018-09-21 08:17:37,620] (heat-config) [INFO] deploy_action=CREATE\n[2018-09-21 08:17:37,620] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-htpxhfrkye7c-ControllerArtifactsDeploy-fqmchhii7vda-0-uyh563jod5un/7fa4d7bc-9e02-401a-a445-56f847d973e6\n[2018-09-21 08:17:37,620] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-09-21 08:17:37,620] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-09-21 08:17:37,620] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/24e8e89e-421b-4ea8-baa2-243fd4773a32\n[2018-09-21 08:17:37,625] (heat-config) [INFO] No artifact_urls was set. Skipping...\n\n[2018-09-21 08:17:37,626] (heat-config) [DEBUG] \n[2018-09-21 08:17:37,626] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/24e8e89e-421b-4ea8-baa2-243fd4773a32\n\n[2018-09-21 08:17:37,629] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-09-21 08:17:37,629] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/24e8e89e-421b-4ea8-baa2-243fd4773a32.json < /var/lib/heat-config/deployed/24e8e89e-421b-4ea8-baa2-243fd4773a32.notify.json\n[2018-09-21 08:17:37,990] (heat-config) [INFO] \n[2018-09-21 08:17:37,991] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-09-21 08:17:37,598] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/24e8e89e-421b-4ea8-baa2-243fd4773a32.json", "[2018-09-21 08:17:37,629] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-09-21 08:17:37,629] (heat-config) [DEBUG] [2018-09-21 08:17:37,620] (heat-config) [INFO] artifact_urls=", "[2018-09-21 08:17:37,620] (heat-config) [INFO] deploy_server_id=10bf651c-fd66-4074-9929-ddfdd495b40b", "[2018-09-21 08:17:37,620] (heat-config) [INFO] deploy_action=CREATE", "[2018-09-21 08:17:37,620] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-htpxhfrkye7c-ControllerArtifactsDeploy-fqmchhii7vda-0-uyh563jod5un/7fa4d7bc-9e02-401a-a445-56f847d973e6", "[2018-09-21 08:17:37,620] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-09-21 08:17:37,620] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-09-21 08:17:37,620] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/24e8e89e-421b-4ea8-baa2-243fd4773a32", "[2018-09-21 08:17:37,625] (heat-config) [INFO] No artifact_urls was set. 
Skipping...", "", "[2018-09-21 08:17:37,626] (heat-config) [DEBUG] ", "[2018-09-21 08:17:37,626] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/24e8e89e-421b-4ea8-baa2-243fd4773a32", "", "[2018-09-21 08:17:37,629] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-09-21 08:17:37,629] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/24e8e89e-421b-4ea8-baa2-243fd4773a32.json < /var/lib/heat-config/deployed/24e8e89e-421b-4ea8-baa2-243fd4773a32.notify.json", "[2018-09-21 08:17:37,990] (heat-config) [INFO] ", "[2018-09-21 08:17:37,991] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} > >TASK [Output for ControllerArtifactsDeploy] ************************************ >Friday 21 September 2018 08:17:38 -0400 (0:00:00.648) 0:01:00.499 ****** >ok: [controller-0] => { > "msg": [ > { > "stderr": [ > "[2018-09-21 08:17:37,598] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/24e8e89e-421b-4ea8-baa2-243fd4773a32.json", > "[2018-09-21 08:17:37,629] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-09-21 08:17:37,629] (heat-config) [DEBUG] [2018-09-21 08:17:37,620] (heat-config) [INFO] artifact_urls=", > "[2018-09-21 08:17:37,620] (heat-config) [INFO] deploy_server_id=10bf651c-fd66-4074-9929-ddfdd495b40b", > "[2018-09-21 08:17:37,620] (heat-config) [INFO] deploy_action=CREATE", > "[2018-09-21 08:17:37,620] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-htpxhfrkye7c-ControllerArtifactsDeploy-fqmchhii7vda-0-uyh563jod5un/7fa4d7bc-9e02-401a-a445-56f847d973e6", > "[2018-09-21 08:17:37,620] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-09-21 08:17:37,620] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-09-21 08:17:37,620] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/24e8e89e-421b-4ea8-baa2-243fd4773a32", > "[2018-09-21 08:17:37,625] (heat-config) [INFO] No artifact_urls was set. 
Skipping...", > "", > "[2018-09-21 08:17:37,626] (heat-config) [DEBUG] ", > "[2018-09-21 08:17:37,626] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/24e8e89e-421b-4ea8-baa2-243fd4773a32", > "", > "[2018-09-21 08:17:37,629] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-09-21 08:17:37,629] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/24e8e89e-421b-4ea8-baa2-243fd4773a32.json < /var/lib/heat-config/deployed/24e8e89e-421b-4ea8-baa2-243fd4773a32.notify.json", > "[2018-09-21 08:17:37,990] (heat-config) [INFO] ", > "[2018-09-21 08:17:37,991] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} > >TASK [Check-mode for Run deployment ControllerArtifactsDeploy] ***************** >Friday 21 September 2018 08:17:38 -0400 (0:00:00.085) 0:01:00.585 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Lookup deployment UUID] ************************************************** >Friday 21 September 2018 08:17:38 -0400 (0:00:00.042) 0:01:00.627 ****** >ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "43f2827d-285d-4710-be29-33de4e80ea0d"}, "changed": false} > >TASK [Render deployment file for ControllerHostPrepDeployment] ***************** >Friday 21 September 2018 08:17:38 -0400 (0:00:00.113) 0:01:00.741 ****** >changed: [controller-0] => {"changed": true, "checksum": "a5e0ffb81fc0074be60ef87ca1e67a47777aef73", "dest": "/var/lib/heat-config/tripleo-config-download/ControllerHostPrepDeployment-43f2827d-285d-4710-be29-33de4e80ea0d", "gid": 0, "group": "root", "md5sum": "f20ebcfefbc7d176c7369d3d43c01d03", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 20800, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532258.34-189764150270374/source", "state": "file", "uid": 0} > >TASK [Check if deployed file exists for ControllerHostPrepDeployment] ********** >Friday 21 September 2018 08:17:38 -0400 (0:00:00.628) 0:01:01.369 ****** >ok: [controller-0] => {"changed": false, "stat": {"exists": false}} > >TASK [Check previous deployment rc for ControllerHostPrepDeployment] *********** >Friday 21 September 2018 08:17:39 -0400 (0:00:00.220) 0:01:01.589 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Remove deployed file for ControllerHostPrepDeployment when previous deployment failed] *** >Friday 21 September 2018 08:17:39 -0400 (0:00:00.047) 0:01:01.637 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Force remove deployed file for ControllerHostPrepDeployment] ************* >Friday 21 September 2018 08:17:39 -0400 (0:00:00.047) 0:01:01.684 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Run deployment ControllerHostPrepDeployment] ***************************** >Friday 21 September 2018 08:17:39 -0400 (0:00:00.047) 0:01:01.732 ****** >changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/43f2827d-285d-4710-be29-33de4e80ea0d.notify.json)", "delta": "0:00:06.532000", "end": "2018-09-21 08:17:45.991764", "rc": 0, "start": "2018-09-21 08:17:39.459764", "stderr": "[2018-09-21 08:17:39,487] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < 
/var/lib/heat-config/deployed/43f2827d-285d-4710-be29-33de4e80ea0d.json\n[2018-09-21 08:17:45,598] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=3 changed=2 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-09-21 08:17:45,598] (heat-config) [DEBUG] [2018-09-21 08:17:39,511] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/43f2827d-285d-4710-be29-33de4e80ea0d_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/43f2827d-285d-4710-be29-33de4e80ea0d_variables.json\n[2018-09-21 08:17:45,594] (heat-config) [INFO] Return code 0\n[2018-09-21 08:17:45,594] (heat-config) [INFO] \nPLAY [localhost] ***************************************************************\n\nTASK [Gathering Facts] *********************************************************\nok: [localhost]\n\nTASK [Create /var/lib/docker-puppet] *******************************************\nchanged: [localhost]\n\nTASK [Write docker-puppet.py] **************************************************\nchanged: [localhost]\n\nPLAY RECAP *********************************************************************\nlocalhost : ok=3 changed=2 unreachable=0 failed=0 \n\n\n[2018-09-21 08:17:45,595] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/43f2827d-285d-4710-be29-33de4e80ea0d_playbook.yaml\n\n[2018-09-21 08:17:45,599] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible\n[2018-09-21 08:17:45,599] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/43f2827d-285d-4710-be29-33de4e80ea0d.json < /var/lib/heat-config/deployed/43f2827d-285d-4710-be29-33de4e80ea0d.notify.json\n[2018-09-21 08:17:45,985] (heat-config) [INFO] \n[2018-09-21 08:17:45,985] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-09-21 08:17:39,487] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/43f2827d-285d-4710-be29-33de4e80ea0d.json", "[2018-09-21 08:17:45,598] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=3 changed=2 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-09-21 08:17:45,598] (heat-config) [DEBUG] [2018-09-21 08:17:39,511] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/43f2827d-285d-4710-be29-33de4e80ea0d_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/43f2827d-285d-4710-be29-33de4e80ea0d_variables.json", "[2018-09-21 08:17:45,594] (heat-config) 
[INFO] Return code 0", "[2018-09-21 08:17:45,594] (heat-config) [INFO] ", "PLAY [localhost] ***************************************************************", "", "TASK [Gathering Facts] *********************************************************", "ok: [localhost]", "", "TASK [Create /var/lib/docker-puppet] *******************************************", "changed: [localhost]", "", "TASK [Write docker-puppet.py] **************************************************", "changed: [localhost]", "", "PLAY RECAP *********************************************************************", "localhost : ok=3 changed=2 unreachable=0 failed=0 ", "", "", "[2018-09-21 08:17:45,595] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/43f2827d-285d-4710-be29-33de4e80ea0d_playbook.yaml", "", "[2018-09-21 08:17:45,599] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible", "[2018-09-21 08:17:45,599] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/43f2827d-285d-4710-be29-33de4e80ea0d.json < /var/lib/heat-config/deployed/43f2827d-285d-4710-be29-33de4e80ea0d.notify.json", "[2018-09-21 08:17:45,985] (heat-config) [INFO] ", "[2018-09-21 08:17:45,985] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} > > >TASK [Output for ControllerHostPrepDeployment] ********************************* >Friday 21 September 2018 08:17:46 -0400 (0:00:06.758) 0:01:08.491 ****** >ok: [controller-0] => { > "msg": [ > { > "stderr": [ > "[2018-09-21 08:17:39,487] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/43f2827d-285d-4710-be29-33de4e80ea0d.json", > "[2018-09-21 08:17:45,598] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=3 changed=2 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-09-21 08:17:45,598] (heat-config) [DEBUG] [2018-09-21 08:17:39,511] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/43f2827d-285d-4710-be29-33de4e80ea0d_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/43f2827d-285d-4710-be29-33de4e80ea0d_variables.json", > "[2018-09-21 08:17:45,594] (heat-config) [INFO] Return code 0", > "[2018-09-21 08:17:45,594] (heat-config) [INFO] ", > "PLAY [localhost] ***************************************************************", > "", > "TASK [Gathering Facts] *********************************************************", > "ok: [localhost]", > "", > "TASK [Create /var/lib/docker-puppet] *******************************************", > "changed: [localhost]", > "", > "TASK [Write docker-puppet.py] **************************************************", > "changed: [localhost]", > "", > "PLAY RECAP *********************************************************************", > "localhost : ok=3 changed=2 unreachable=0 failed=0 ", > "", > "", > "[2018-09-21 08:17:45,595] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/43f2827d-285d-4710-be29-33de4e80ea0d_playbook.yaml", > "", > 
"[2018-09-21 08:17:45,599] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible", > "[2018-09-21 08:17:45,599] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/43f2827d-285d-4710-be29-33de4e80ea0d.json < /var/lib/heat-config/deployed/43f2827d-285d-4710-be29-33de4e80ea0d.notify.json", > "[2018-09-21 08:17:45,985] (heat-config) [INFO] ", > "[2018-09-21 08:17:45,985] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} > >TASK [Check-mode for Run deployment ControllerHostPrepDeployment] ************** >Friday 21 September 2018 08:17:46 -0400 (0:00:00.087) 0:01:08.578 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Lookup deployment UUID] ************************************************** >Friday 21 September 2018 08:17:46 -0400 (0:00:00.038) 0:01:08.616 ****** >ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "4f92eb73-4428-4b53-85ba-03af652776a5"}, "changed": false} > >TASK [Render deployment file for NovaComputeUpgradeInitDeployment] ************* >Friday 21 September 2018 08:17:46 -0400 (0:00:00.099) 0:01:08.716 ****** >changed: [compute-0] => {"changed": true, "checksum": "ce927fd489cc74fcf3a427651a0034db359c258a", "dest": "/var/lib/heat-config/tripleo-config-download/NovaComputeUpgradeInitDeployment-4f92eb73-4428-4b53-85ba-03af652776a5", "gid": 0, "group": "root", "md5sum": "767d4a6996863fad364dc75d6da5cff7", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1182, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532266.29-95222529972064/source", "state": "file", "uid": 0} > >TASK [Check if deployed file exists for NovaComputeUpgradeInitDeployment] ****** >Friday 21 September 2018 08:17:46 -0400 (0:00:00.566) 0:01:09.283 ****** >ok: [compute-0] => {"changed": false, "stat": {"exists": false}} > >TASK [Check previous deployment rc for NovaComputeUpgradeInitDeployment] ******* >Friday 21 September 2018 08:17:47 -0400 (0:00:00.224) 0:01:09.508 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Remove deployed file for NovaComputeUpgradeInitDeployment when previous deployment failed] *** >Friday 21 September 2018 08:17:47 -0400 (0:00:00.041) 0:01:09.550 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Force remove deployed file for NovaComputeUpgradeInitDeployment] ********* >Friday 21 September 2018 08:17:47 -0400 (0:00:00.043) 0:01:09.593 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Run deployment NovaComputeUpgradeInitDeployment] ************************* >Friday 21 September 2018 08:17:47 -0400 (0:00:00.040) 0:01:09.633 ****** >changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/4f92eb73-4428-4b53-85ba-03af652776a5.notify.json)", "delta": "0:00:00.441669", "end": "2018-09-21 08:17:47.776639", "rc": 0, "start": "2018-09-21 08:17:47.334970", "stderr": "[2018-09-21 08:17:47,361] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/4f92eb73-4428-4b53-85ba-03af652776a5.json\n[2018-09-21 08:17:47,389] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-09-21 08:17:47,389] (heat-config) [DEBUG] [2018-09-21 08:17:47,381] 
(heat-config) [INFO] deploy_server_id=41df08ab-b98d-4a4e-b91d-2da74cba2af6\n[2018-09-21 08:17:47,382] (heat-config) [INFO] deploy_action=CREATE\n[2018-09-21 08:17:47,382] (heat-config) [INFO] deploy_stack_id=overcloud-Compute-vwvs6vnygabp-0-3khh4rv6h5h3-NovaComputeUpgradeInitDeployment-2a3wco6i5q2l/18a3eac3-e170-4126-be92-f3d067864378\n[2018-09-21 08:17:47,382] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-09-21 08:17:47,382] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-09-21 08:17:47,382] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/4f92eb73-4428-4b53-85ba-03af652776a5\n[2018-09-21 08:17:47,385] (heat-config) [INFO] \n[2018-09-21 08:17:47,386] (heat-config) [DEBUG] \n[2018-09-21 08:17:47,386] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/4f92eb73-4428-4b53-85ba-03af652776a5\n\n[2018-09-21 08:17:47,389] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-09-21 08:17:47,389] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/4f92eb73-4428-4b53-85ba-03af652776a5.json < /var/lib/heat-config/deployed/4f92eb73-4428-4b53-85ba-03af652776a5.notify.json\n[2018-09-21 08:17:47,769] (heat-config) [INFO] \n[2018-09-21 08:17:47,769] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-09-21 08:17:47,361] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/4f92eb73-4428-4b53-85ba-03af652776a5.json", "[2018-09-21 08:17:47,389] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-09-21 08:17:47,389] (heat-config) [DEBUG] [2018-09-21 08:17:47,381] (heat-config) [INFO] deploy_server_id=41df08ab-b98d-4a4e-b91d-2da74cba2af6", "[2018-09-21 08:17:47,382] (heat-config) [INFO] deploy_action=CREATE", "[2018-09-21 08:17:47,382] (heat-config) [INFO] deploy_stack_id=overcloud-Compute-vwvs6vnygabp-0-3khh4rv6h5h3-NovaComputeUpgradeInitDeployment-2a3wco6i5q2l/18a3eac3-e170-4126-be92-f3d067864378", "[2018-09-21 08:17:47,382] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-09-21 08:17:47,382] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-09-21 08:17:47,382] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/4f92eb73-4428-4b53-85ba-03af652776a5", "[2018-09-21 08:17:47,385] (heat-config) [INFO] ", "[2018-09-21 08:17:47,386] (heat-config) [DEBUG] ", "[2018-09-21 08:17:47,386] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/4f92eb73-4428-4b53-85ba-03af652776a5", "", "[2018-09-21 08:17:47,389] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-09-21 08:17:47,389] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/4f92eb73-4428-4b53-85ba-03af652776a5.json < /var/lib/heat-config/deployed/4f92eb73-4428-4b53-85ba-03af652776a5.notify.json", "[2018-09-21 08:17:47,769] (heat-config) [INFO] ", "[2018-09-21 08:17:47,769] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} > >TASK [Output for NovaComputeUpgradeInitDeployment] ***************************** >Friday 21 September 2018 08:17:47 -0400 (0:00:00.654) 0:01:10.288 ****** >ok: [compute-0] => { > "msg": [ > { > "stderr": [ > "[2018-09-21 08:17:47,361] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/4f92eb73-4428-4b53-85ba-03af652776a5.json", > "[2018-09-21 08:17:47,389] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", 
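
As the script-hook trace for NovaComputeUpgradeInitDeployment shows, every /usr/libexec/heat-config/hooks/* helper follows one contract: read the deployment JSON on stdin, run the payload, and print a single JSON object carrying deploy_stdout, deploy_stderr, and deploy_status_code. A toy hook illustrating only that contract (hypothetical, not shipped with heat-config; the .id field is an assumption about the input document):

    #!/bin/bash
    # Read the deployment JSON that heat-config pipes in on stdin.
    input=$(cat)
    dep_id=$(echo "$input" | jq -r '.id')
    # Report results in the shape heat-config and heat-config-notify expect.
    jq -n --arg so "handled deployment $dep_id" \
          '{deploy_stdout: $so, deploy_stderr: "", deploy_status_code: 0}'

heat-config captures that object (the '[INFO] {"deploy_stdout": ...' lines above) and heat-config-notify relays it onward, here to the config-download workflow.
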
\"deploy_status_code\": 0}", > "[2018-09-21 08:17:47,389] (heat-config) [DEBUG] [2018-09-21 08:17:47,381] (heat-config) [INFO] deploy_server_id=41df08ab-b98d-4a4e-b91d-2da74cba2af6", > "[2018-09-21 08:17:47,382] (heat-config) [INFO] deploy_action=CREATE", > "[2018-09-21 08:17:47,382] (heat-config) [INFO] deploy_stack_id=overcloud-Compute-vwvs6vnygabp-0-3khh4rv6h5h3-NovaComputeUpgradeInitDeployment-2a3wco6i5q2l/18a3eac3-e170-4126-be92-f3d067864378", > "[2018-09-21 08:17:47,382] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-09-21 08:17:47,382] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-09-21 08:17:47,382] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/4f92eb73-4428-4b53-85ba-03af652776a5", > "[2018-09-21 08:17:47,385] (heat-config) [INFO] ", > "[2018-09-21 08:17:47,386] (heat-config) [DEBUG] ", > "[2018-09-21 08:17:47,386] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/4f92eb73-4428-4b53-85ba-03af652776a5", > "", > "[2018-09-21 08:17:47,389] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-09-21 08:17:47,389] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/4f92eb73-4428-4b53-85ba-03af652776a5.json < /var/lib/heat-config/deployed/4f92eb73-4428-4b53-85ba-03af652776a5.notify.json", > "[2018-09-21 08:17:47,769] (heat-config) [INFO] ", > "[2018-09-21 08:17:47,769] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} > >TASK [Check-mode for Run deployment NovaComputeUpgradeInitDeployment] ********** >Friday 21 September 2018 08:17:47 -0400 (0:00:00.079) 0:01:10.367 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Lookup deployment UUID] ************************************************** >Friday 21 September 2018 08:17:47 -0400 (0:00:00.035) 0:01:10.403 ****** >ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "fec5b064-2850-42d0-9f3f-d3a25fd604bb"}, "changed": false} > >TASK [Render deployment file for NovaComputeDeployment] ************************ >Friday 21 September 2018 08:17:48 -0400 (0:00:00.177) 0:01:10.581 ****** >changed: [compute-0] => {"changed": true, "checksum": "99b1827f12aa098073084ca28033670c430fe2f0", "dest": "/var/lib/heat-config/tripleo-config-download/NovaComputeDeployment-fec5b064-2850-42d0-9f3f-d3a25fd604bb", "gid": 0, "group": "root", "md5sum": "e498aa052f200ba3937746ff12d8c2c8", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 22240, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532268.26-63634966315563/source", "state": "file", "uid": 0} > >TASK [Check if deployed file exists for NovaComputeDeployment] ***************** >Friday 21 September 2018 08:17:48 -0400 (0:00:00.622) 0:01:11.203 ****** >ok: [compute-0] => {"changed": false, "stat": {"exists": false}} > >TASK [Check previous deployment rc for NovaComputeDeployment] ****************** >Friday 21 September 2018 08:17:48 -0400 (0:00:00.203) 0:01:11.407 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Remove deployed file for NovaComputeDeployment when previous deployment failed] *** >Friday 21 September 2018 08:17:48 -0400 (0:00:00.042) 0:01:11.450 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Force remove deployed file for NovaComputeDeployment] ******************** >Friday 21 September 2018 08:17:49 -0400 
(0:00:00.039) 0:01:11.490 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Run deployment NovaComputeDeployment] ************************************ >Friday 21 September 2018 08:17:49 -0400 (0:00:00.039) 0:01:11.529 ****** >changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/fec5b064-2850-42d0-9f3f-d3a25fd604bb.notify.json)", "delta": "0:00:00.544355", "end": "2018-09-21 08:17:49.768497", "rc": 0, "start": "2018-09-21 08:17:49.224142", "stderr": "[2018-09-21 08:17:49,251] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/fec5b064-2850-42d0-9f3f-d3a25fd604bb.json\n[2018-09-21 08:17:49,376] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-09-21 08:17:49,376] (heat-config) [DEBUG] \n[2018-09-21 08:17:49,376] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera\n[2018-09-21 08:17:49,376] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/fec5b064-2850-42d0-9f3f-d3a25fd604bb.json < /var/lib/heat-config/deployed/fec5b064-2850-42d0-9f3f-d3a25fd604bb.notify.json\n[2018-09-21 08:17:49,762] (heat-config) [INFO] \n[2018-09-21 08:17:49,762] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-09-21 08:17:49,251] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/fec5b064-2850-42d0-9f3f-d3a25fd604bb.json", "[2018-09-21 08:17:49,376] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-09-21 08:17:49,376] (heat-config) [DEBUG] ", "[2018-09-21 08:17:49,376] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", "[2018-09-21 08:17:49,376] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/fec5b064-2850-42d0-9f3f-d3a25fd604bb.json < /var/lib/heat-config/deployed/fec5b064-2850-42d0-9f3f-d3a25fd604bb.notify.json", "[2018-09-21 08:17:49,762] (heat-config) [INFO] ", "[2018-09-21 08:17:49,762] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} > >TASK [Output for NovaComputeDeployment] **************************************** >Friday 21 September 2018 08:17:49 -0400 (0:00:00.748) 0:01:12.278 ****** >ok: [compute-0] => { > "msg": [ > { > "stderr": [ > "[2018-09-21 08:17:49,251] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/fec5b064-2850-42d0-9f3f-d3a25fd604bb.json", > "[2018-09-21 08:17:49,376] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-09-21 08:17:49,376] (heat-config) [DEBUG] ", > "[2018-09-21 08:17:49,376] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", > "[2018-09-21 08:17:49,376] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/fec5b064-2850-42d0-9f3f-d3a25fd604bb.json < /var/lib/heat-config/deployed/fec5b064-2850-42d0-9f3f-d3a25fd604bb.notify.json", > "[2018-09-21 08:17:49,762] (heat-config) [INFO] ", > "[2018-09-21 08:17:49,762] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} > >TASK [Check-mode for Run deployment NovaComputeDeployment] ********************* >Friday 21 September 2018 08:17:49 -0400 (0:00:00.072) 0:01:12.350 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Lookup deployment 
UUID] ************************************************** >Friday 21 September 2018 08:17:49 -0400 (0:00:00.033) 0:01:12.383 ****** >ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "1008fb43-c144-45e7-9aef-74c0aec9fcdc"}, "changed": false} > >TASK [Render deployment file for ComputeHostsDeployment] *********************** >Friday 21 September 2018 08:17:49 -0400 (0:00:00.086) 0:01:12.469 ****** >changed: [compute-0] => {"changed": true, "checksum": "8761f24f6a938782b4f0e5e5805623b8b0ebf577", "dest": "/var/lib/heat-config/tripleo-config-download/ComputeHostsDeployment-1008fb43-c144-45e7-9aef-74c0aec9fcdc", "gid": 0, "group": "root", "md5sum": "54b4178b7a0f8fab100b49d18dc341a9", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 4419, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532270.04-29871920637895/source", "state": "file", "uid": 0} > >TASK [Check if deployed file exists for ComputeHostsDeployment] **************** >Friday 21 September 2018 08:17:50 -0400 (0:00:00.505) 0:01:12.975 ****** >ok: [compute-0] => {"changed": false, "stat": {"exists": false}} > >TASK [Check previous deployment rc for ComputeHostsDeployment] ***************** >Friday 21 September 2018 08:17:50 -0400 (0:00:00.219) 0:01:13.195 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Remove deployed file for ComputeHostsDeployment when previous deployment failed] *** >Friday 21 September 2018 08:17:50 -0400 (0:00:00.041) 0:01:13.237 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Force remove deployed file for ComputeHostsDeployment] ******************* >Friday 21 September 2018 08:17:50 -0400 (0:00:00.040) 0:01:13.277 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Run deployment ComputeHostsDeployment] *********************************** >Friday 21 September 2018 08:17:50 -0400 (0:00:00.040) 0:01:13.317 ****** >changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/1008fb43-c144-45e7-9aef-74c0aec9fcdc.notify.json)", "delta": "0:00:00.468400", "end": "2018-09-21 08:17:51.489776", "rc": 0, "start": "2018-09-21 08:17:51.021376", "stderr": "[2018-09-21 08:17:51,045] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/1008fb43-c144-45e7-9aef-74c0aec9fcdc.json\n[2018-09-21 08:17:51,095] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' 
-z '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 
ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain 
ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 
ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain 
ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 0}\n[2018-09-21 08:17:51,095] (heat-config) [DEBUG] [2018-09-21 08:17:51,066] (heat-config) [INFO] hosts=192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.13 overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.111 overcloud.localdomain\n172.17.1.17 controller-0.localdomain controller-0\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.117 controller-0.external.localdomain controller-0.external\n192.168.24.18 controller-0.management.localdomain controller-0.management\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.11 ceph-0.localdomain ceph-0\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.15 ceph-0.storagemgmt.localdomain 
ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane\n[2018-09-21 08:17:51,067] (heat-config) [INFO] deploy_server_id=41df08ab-b98d-4a4e-b91d-2da74cba2af6\n[2018-09-21 08:17:51,067] (heat-config) [INFO] deploy_action=CREATE\n[2018-09-21 08:17:51,067] (heat-config) [INFO] deploy_stack_id=overcloud-ComputeHostsDeployment-yna7l6c4rmvh-0-2e5fnlfsg55x/790dcfdb-af9d-4098-b6bf-9044a6221bb8\n[2018-09-21 08:17:51,067] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-09-21 08:17:51,067] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-09-21 08:17:51,067] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/1008fb43-c144-45e7-9aef-74c0aec9fcdc\n[2018-09-21 08:17:51,091] (heat-config) [INFO] \n[2018-09-21 08:17:51,091] (heat-config) [DEBUG] + set -o pipefail\n+ '[' '!' -z '192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.13 overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.111 overcloud.localdomain\n172.17.1.17 controller-0.localdomain controller-0\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.117 controller-0.external.localdomain controller-0.external\n192.168.24.18 controller-0.management.localdomain controller-0.management\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.11 ceph-0.localdomain ceph-0\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.13 overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.111 overcloud.localdomain\n172.17.1.17 controller-0.localdomain controller-0\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.117 controller-0.external.localdomain 
controller-0.external\n192.168.24.18 controller-0.management.localdomain controller-0.management\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.11 ceph-0.localdomain ceph-0\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.13 overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.111 overcloud.localdomain\n172.17.1.17 controller-0.localdomain controller-0\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.117 controller-0.external.localdomain controller-0.external\n192.168.24.18 controller-0.management.localdomain controller-0.management\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.11 ceph-0.localdomain ceph-0\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.debian.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\n++ hostname -s\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.debian.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.13 overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.111 overcloud.localdomain\n172.17.1.17 controller-0.localdomain controller-0\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.117 controller-0.external.localdomain controller-0.external\n192.168.24.18 controller-0.management.localdomain controller-0.management\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.11 ceph-0.localdomain ceph-0\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.13 overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.111 overcloud.localdomain\n172.17.1.17 controller-0.localdomain controller-0\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.117 controller-0.external.localdomain controller-0.external\n192.168.24.18 controller-0.management.localdomain controller-0.management\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.11 ceph-0.localdomain ceph-0\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.15 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.13 overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.111 overcloud.localdomain\n172.17.1.17 controller-0.localdomain controller-0\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.117 controller-0.external.localdomain controller-0.external\n192.168.24.18 controller-0.management.localdomain controller-0.management\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.11 ceph-0.localdomain ceph-0\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\n++ hostname -s\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.freebsd.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.13 overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.111 overcloud.localdomain\n172.17.1.17 controller-0.localdomain controller-0\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.117 controller-0.external.localdomain controller-0.external\n192.168.24.18 controller-0.management.localdomain controller-0.management\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.11 ceph-0.localdomain ceph-0\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.13 overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.111 overcloud.localdomain\n172.17.1.17 controller-0.localdomain controller-0\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.117 controller-0.external.localdomain controller-0.external\n192.168.24.18 controller-0.management.localdomain controller-0.management\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.11 ceph-0.localdomain ceph-0\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.15 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.13 overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.111 overcloud.localdomain\n172.17.1.17 controller-0.localdomain controller-0\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.117 controller-0.external.localdomain controller-0.external\n192.168.24.18 controller-0.management.localdomain controller-0.management\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.11 ceph-0.localdomain ceph-0\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\n++ hostname -s\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.redhat.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.13 overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.111 overcloud.localdomain\n172.17.1.17 controller-0.localdomain controller-0\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.117 controller-0.external.localdomain controller-0.external\n192.168.24.18 controller-0.management.localdomain controller-0.management\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.11 ceph-0.localdomain ceph-0\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.13 overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.111 overcloud.localdomain\n172.17.1.17 controller-0.localdomain controller-0\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.117 controller-0.external.localdomain controller-0.external\n192.168.24.18 controller-0.management.localdomain controller-0.management\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.11 ceph-0.localdomain ceph-0\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.15 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.13 overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.111 overcloud.localdomain\n172.17.1.17 controller-0.localdomain controller-0\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.117 controller-0.external.localdomain controller-0.external\n192.168.24.18 controller-0.management.localdomain controller-0.management\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.11 ceph-0.localdomain ceph-0\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\n++ hostname -s\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.suse.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.13 overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.111 overcloud.localdomain\n172.17.1.17 controller-0.localdomain controller-0\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.117 controller-0.external.localdomain controller-0.external\n192.168.24.18 controller-0.management.localdomain controller-0.management\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.11 ceph-0.localdomain ceph-0\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ write_entries /etc/hosts '192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.13 overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.111 overcloud.localdomain\n172.17.1.17 controller-0.localdomain controller-0\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.117 controller-0.external.localdomain controller-0.external\n192.168.24.18 controller-0.management.localdomain controller-0.management\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.11 ceph-0.localdomain ceph-0\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain 
ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/hosts\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.13 overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.111 overcloud.localdomain\n172.17.1.17 controller-0.localdomain controller-0\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.117 controller-0.external.localdomain controller-0.external\n192.168.24.18 controller-0.management.localdomain controller-0.management\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.11 ceph-0.localdomain ceph-0\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/hosts ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\n++ hostname -s\n+ sed -i /compute-0/d /etc/hosts\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.13 overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.111 overcloud.localdomain\n172.17.1.17 controller-0.localdomain controller-0\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.117 controller-0.external.localdomain controller-0.external\n192.168.24.18 controller-0.management.localdomain controller-0.management\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.11 ceph-0.localdomain ceph-0\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n\n[2018-09-21 08:17:51,091] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/1008fb43-c144-45e7-9aef-74c0aec9fcdc\n\n[2018-09-21 08:17:51,095] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-09-21 08:17:51,096] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/1008fb43-c144-45e7-9aef-74c0aec9fcdc.json < /var/lib/heat-config/deployed/1008fb43-c144-45e7-9aef-74c0aec9fcdc.notify.json\n[2018-09-21 08:17:51,483] (heat-config) [INFO] \n[2018-09-21 08:17:51,484] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-09-21 08:17:51,045] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/1008fb43-c144-45e7-9aef-74c0aec9fcdc.json", "[2018-09-21 08:17:51,095] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' 
-z '192.168.24.7 overcloud.ctlplane.localdomain ... <same host-entry list and write_entries trace as in the deploy_stderr above, repeated verbatim for hosts.debian.tmpl, hosts.freebsd.tmpl, hosts.redhat.tmpl, hosts.suse.tmpl and /etc/hosts> ...\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 0}", "[2018-09-21 08:17:51,095] (heat-config) [DEBUG] [2018-09-21 08:17:51,066] (heat-config) [INFO] hosts=<same host-entry list as above>", "[2018-09-21 08:17:51,067] (heat-config) [INFO] deploy_server_id=41df08ab-b98d-4a4e-b91d-2da74cba2af6", "[2018-09-21 08:17:51,067] (heat-config) [INFO] deploy_action=CREATE", "[2018-09-21 08:17:51,067] (heat-config) [INFO] deploy_stack_id=overcloud-ComputeHostsDeployment-yna7l6c4rmvh-0-2e5fnlfsg55x/790dcfdb-af9d-4098-b6bf-9044a6221bb8", "[2018-09-21 08:17:51,067] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-09-21 08:17:51,067] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-09-21 08:17:51,067] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/1008fb43-c144-45e7-9aef-74c0aec9fcdc", "[2018-09-21 08:17:51,091] (heat-config) [INFO] ", "[2018-09-21 08:17:51,091] (heat-config) [DEBUG] + set -o pipefail", "<the remaining stderr_lines entries repeat the same shell trace, line by line>"]
"172.17.3.11 ceph-0.localdomain ceph-0", "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.suse.tmpl", "+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.13 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.111 overcloud.localdomain", "172.17.1.17 controller-0.localdomain controller-0", "172.17.3.16 controller-0.storage.localdomain controller-0.storage", "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.117 controller-0.external.localdomain controller-0.external", "192.168.24.18 controller-0.management.localdomain controller-0.management", "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.11 ceph-0.localdomain ceph-0", "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl", "++ hostname -s", "+ sed -i /compute-0/d /etc/cloud/templates/hosts.suse.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.13 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.111 overcloud.localdomain", "172.17.1.17 controller-0.localdomain controller-0", "172.17.3.16 controller-0.storage.localdomain controller-0.storage", "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.117 controller-0.external.localdomain controller-0.external", "192.168.24.18 controller-0.management.localdomain controller-0.management", "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.11 ceph-0.localdomain ceph-0", "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ write_entries /etc/hosts '192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.13 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.111 overcloud.localdomain", "172.17.1.17 controller-0.localdomain controller-0", "172.17.3.16 controller-0.storage.localdomain controller-0.storage", "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.117 controller-0.external.localdomain controller-0.external", "192.168.24.18 controller-0.management.localdomain controller-0.management", "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.11 ceph-0.localdomain ceph-0", "172.17.3.11 ceph-0.storage.localdomain 
ceph-0.storage", "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/hosts", "+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.13 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.111 overcloud.localdomain", "172.17.1.17 controller-0.localdomain controller-0", "172.17.3.16 controller-0.storage.localdomain controller-0.storage", "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.117 controller-0.external.localdomain controller-0.external", "192.168.24.18 controller-0.management.localdomain controller-0.management", "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.11 ceph-0.localdomain ceph-0", "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' 
-f /etc/hosts ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/hosts", "++ hostname -s", "+ sed -i /compute-0/d /etc/hosts", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.13 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.111 overcloud.localdomain", "172.17.1.17 controller-0.localdomain controller-0", "172.17.3.16 controller-0.storage.localdomain controller-0.storage", "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.117 controller-0.external.localdomain controller-0.external", "192.168.24.18 controller-0.management.localdomain controller-0.management", "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.11 ceph-0.localdomain ceph-0", "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "", "[2018-09-21 08:17:51,091] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/1008fb43-c144-45e7-9aef-74c0aec9fcdc", "", "[2018-09-21 08:17:51,095] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-09-21 08:17:51,096] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/1008fb43-c144-45e7-9aef-74c0aec9fcdc.json < /var/lib/heat-config/deployed/1008fb43-c144-45e7-9aef-74c0aec9fcdc.notify.json", "[2018-09-21 08:17:51,483] (heat-config) [INFO] ", "[2018-09-21 08:17:51,484] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} > >TASK [Output for ComputeHostsDeployment] *************************************** >Friday 21 September 2018 08:17:51 -0400 (0:00:00.724) 0:01:14.042 ****** >ok: [compute-0] => { > "msg": [ > { > "stderr": [ > "[2018-09-21 08:17:51,045] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/1008fb43-c144-45e7-9aef-74c0aec9fcdc.json", > "[2018-09-21 08:17:51,095] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' 
-z '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 
ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain 
ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 
ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain 
ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 0}", > "[2018-09-21 08:17:51,095] (heat-config) [DEBUG] [2018-09-21 08:17:51,066] (heat-config) [INFO] hosts=192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.13 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.111 overcloud.localdomain", > "172.17.1.17 controller-0.localdomain controller-0", > "172.17.3.16 controller-0.storage.localdomain controller-0.storage", > "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.117 controller-0.external.localdomain controller-0.external", > "192.168.24.18 controller-0.management.localdomain controller-0.management", > "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.11 ceph-0.localdomain 
ceph-0", > "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane", > "[2018-09-21 08:17:51,067] (heat-config) [INFO] deploy_server_id=41df08ab-b98d-4a4e-b91d-2da74cba2af6", > "[2018-09-21 08:17:51,067] (heat-config) [INFO] deploy_action=CREATE", > "[2018-09-21 08:17:51,067] (heat-config) [INFO] deploy_stack_id=overcloud-ComputeHostsDeployment-yna7l6c4rmvh-0-2e5fnlfsg55x/790dcfdb-af9d-4098-b6bf-9044a6221bb8", > "[2018-09-21 08:17:51,067] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-09-21 08:17:51,067] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-09-21 08:17:51,067] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/1008fb43-c144-45e7-9aef-74c0aec9fcdc", > "[2018-09-21 08:17:51,091] (heat-config) [INFO] ", > "[2018-09-21 08:17:51,091] (heat-config) [DEBUG] + set -o pipefail", > "+ '[' '!' -z '192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.13 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.111 overcloud.localdomain", > "172.17.1.17 controller-0.localdomain controller-0", > "172.17.3.16 controller-0.storage.localdomain controller-0.storage", > "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.117 controller-0.external.localdomain controller-0.external", > "192.168.24.18 controller-0.management.localdomain controller-0.management", > "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.11 ceph-0.localdomain ceph-0", > "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.13 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.111 overcloud.localdomain", > "172.17.1.17 controller-0.localdomain controller-0", > "172.17.3.16 
controller-0.storage.localdomain controller-0.storage",
> "[... remainder of the HEAT_HOSTS entries block elided; the full block is shown once below ...]",
> "+ local file=/etc/cloud/templates/hosts.debian.tmpl",
> "+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain",
> "172.17.3.21 overcloud.storage.localdomain",
> "172.17.4.13 overcloud.storagemgmt.localdomain",
> "172.17.1.15 overcloud.internalapi.localdomain",
> "10.0.0.111 overcloud.localdomain",
> "172.17.1.17 controller-0.localdomain controller-0",
> "172.17.3.16 controller-0.storage.localdomain controller-0.storage",
> "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt",
> "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi",
> "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant",
> "10.0.0.117 controller-0.external.localdomain controller-0.external",
> "192.168.24.18 controller-0.management.localdomain controller-0.management",
> "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane",
> "",
> "172.17.1.12 compute-0.localdomain compute-0",
> "172.17.3.10 compute-0.storage.localdomain compute-0.storage",
> "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt",
> "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi",
> "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant",
> "192.168.24.8 compute-0.external.localdomain compute-0.external",
> "192.168.24.8 compute-0.management.localdomain compute-0.management",
> "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane",
> "",
> "",
> "",
> "172.17.3.11 ceph-0.localdomain ceph-0",
> "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage",
> "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt",
> "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi",
> "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant",
> "192.168.24.6 ceph-0.external.localdomain ceph-0.external",
> "192.168.24.6 ceph-0.management.localdomain ceph-0.management",
> "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'",
> "+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'",
> "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl",
> "++ hostname -s",
> "+ sed -i /compute-0/d /etc/cloud/templates/hosts.debian.tmpl",
> "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'",
> "+ echo '[... same HEAT_HOSTS entries block as above, elided ...]'",
> "+ echo -ne '# HEAT_HOSTS_END\\n\\n'",
> "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'",
> "+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '[... same HEAT_HOSTS entries block as above, elided ...]'",
> "+ local file=/etc/cloud/templates/hosts.freebsd.tmpl",
> "+ local 'entries=[... same HEAT_HOSTS entries block as above, elided ...]'",
> "+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'",
> "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl",
> "++ hostname -s",
> "+ sed -i /compute-0/d /etc/cloud/templates/hosts.freebsd.tmpl",
> "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'",
> "+ echo '[... same HEAT_HOSTS entries block as above, elided ...]'",
> "+ echo -ne '# HEAT_HOSTS_END\\n\\n'",
> "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'",
> "+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '[... same HEAT_HOSTS entries block as above, elided ...]'",
> "+ local file=/etc/cloud/templates/hosts.redhat.tmpl",
> "+ local 'entries=[... same HEAT_HOSTS entries block as above, elided ...]'",
> "+ '[' '!' -f /etc/cloud/templates/hosts.redhat.tmpl ']'",
> "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl",
> "++ hostname -s",
> "+ sed -i /compute-0/d /etc/cloud/templates/hosts.redhat.tmpl",
> "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'",
> "+ echo '[... same HEAT_HOSTS entries block as above, elided ...]'",
> "+ echo -ne '# HEAT_HOSTS_END\\n\\n'",
> "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'",
> "+ write_entries /etc/cloud/templates/hosts.suse.tmpl '[... same HEAT_HOSTS entries block as above, elided ...]'",
> "+ local file=/etc/cloud/templates/hosts.suse.tmpl",
> "+ local 'entries=[... same HEAT_HOSTS entries block as above, elided ...]'",
> "+ '[' '!' -f /etc/cloud/templates/hosts.suse.tmpl ']'",
> "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl",
> "++ hostname -s",
> "+ sed -i /compute-0/d /etc/cloud/templates/hosts.suse.tmpl",
> "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'",
> "+ echo '[... same HEAT_HOSTS entries block as above, elided ...]'",
> "+ echo -ne '# HEAT_HOSTS_END\\n\\n'",
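The set -x trace above repeats one identical block of HEAT_HOSTS entries for each cloud-init hosts template (debian, freebsd, redhat, suse); the repeats are elided and the block is shown once, at its first occurrence. The same helper is then called one final time for /etc/hosts itself, below. Reconstructed from the trace alone, the helper looks roughly like the following sketch; the exact function body and the output redirection are assumptions, since set -x does not display redirections or control flow:

    write_entries() {
        local file="$1"
        local entries="$2"
        # Skip hosts templates that do not exist on this image
        if [ ! -f "$file" ]; then
            return 0
        fi
        # Only append a managed block if one is not already present
        if ! grep -q '^# HEAT_HOSTS_START' "$file"; then
            # Drop any stale lines for this node (here: compute-0) before appending
            sed -i "/$(hostname -s)/d" "$file"
            echo -ne "\n# HEAT_HOSTS_START - Do not edit manually within this section!\n" >> "$file"
            echo "$entries" >> "$file"
            echo -ne "# HEAT_HOSTS_END\n\n" >> "$file"
        fi
    }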
-f /etc/hosts ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/hosts", > "++ hostname -s", > "+ sed -i /compute-0/d /etc/hosts", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.13 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.111 overcloud.localdomain", > "172.17.1.17 controller-0.localdomain controller-0", > "172.17.3.16 controller-0.storage.localdomain controller-0.storage", > "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.117 controller-0.external.localdomain controller-0.external", > "192.168.24.18 controller-0.management.localdomain controller-0.management", > "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.11 ceph-0.localdomain ceph-0", > "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "", > "[2018-09-21 08:17:51,091] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/1008fb43-c144-45e7-9aef-74c0aec9fcdc", > "", > "[2018-09-21 08:17:51,095] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-09-21 08:17:51,096] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/1008fb43-c144-45e7-9aef-74c0aec9fcdc.json < /var/lib/heat-config/deployed/1008fb43-c144-45e7-9aef-74c0aec9fcdc.notify.json", > "[2018-09-21 08:17:51,483] (heat-config) [INFO] ", > "[2018-09-21 08:17:51,484] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} > >TASK [Check-mode for Run deployment ComputeHostsDeployment] ******************** >Friday 21 September 2018 08:17:51 -0400 (0:00:00.117) 0:01:14.159 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Lookup deployment UUID] ************************************************** >Friday 21 September 2018 08:17:51 -0400 (0:00:00.034) 0:01:14.194 ****** >ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "20d5b24f-63bf-4d03-a2a4-2249714c2e3c"}, "changed": false} > >TASK [Render deployment file for ComputeAllNodesDeployment] ******************** >Friday 21 September 2018 08:17:51 -0400 (0:00:00.173) 0:01:14.368 ****** >changed: [compute-0] => {"changed": true, "checksum": "9d21e011a9d0d6a7d333e6601dc246f44d571c78", "dest": 
"/var/lib/heat-config/tripleo-config-download/ComputeAllNodesDeployment-20d5b24f-63bf-4d03-a2a4-2249714c2e3c", "gid": 0, "group": "root", "md5sum": "562ec06c66da2bcf9a31bdfdd3935de9", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 19530, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532272.05-26047740846388/source", "state": "file", "uid": 0} > >TASK [Check if deployed file exists for ComputeAllNodesDeployment] ************* >Friday 21 September 2018 08:17:52 -0400 (0:00:00.631) 0:01:15.000 ****** >ok: [compute-0] => {"changed": false, "stat": {"exists": false}} > >TASK [Check previous deployment rc for ComputeAllNodesDeployment] ************** >Friday 21 September 2018 08:17:52 -0400 (0:00:00.205) 0:01:15.205 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Remove deployed file for ComputeAllNodesDeployment when previous deployment failed] *** >Friday 21 September 2018 08:17:52 -0400 (0:00:00.038) 0:01:15.243 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Force remove deployed file for ComputeAllNodesDeployment] **************** >Friday 21 September 2018 08:17:52 -0400 (0:00:00.038) 0:01:15.281 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Run deployment ComputeAllNodesDeployment] ******************************** >Friday 21 September 2018 08:17:52 -0400 (0:00:00.037) 0:01:15.319 ****** >changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/20d5b24f-63bf-4d03-a2a4-2249714c2e3c.notify.json)", "delta": "0:00:00.560533", "end": "2018-09-21 08:17:53.577856", "rc": 0, "start": "2018-09-21 08:17:53.017323", "stderr": "[2018-09-21 08:17:53,045] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/20d5b24f-63bf-4d03-a2a4-2249714c2e3c.json\n[2018-09-21 08:17:53,173] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-09-21 08:17:53,173] (heat-config) [DEBUG] \n[2018-09-21 08:17:53,173] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera\n[2018-09-21 08:17:53,174] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/20d5b24f-63bf-4d03-a2a4-2249714c2e3c.json < /var/lib/heat-config/deployed/20d5b24f-63bf-4d03-a2a4-2249714c2e3c.notify.json\n[2018-09-21 08:17:53,571] (heat-config) [INFO] \n[2018-09-21 08:17:53,571] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-09-21 08:17:53,045] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/20d5b24f-63bf-4d03-a2a4-2249714c2e3c.json", "[2018-09-21 08:17:53,173] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-09-21 08:17:53,173] (heat-config) [DEBUG] ", "[2018-09-21 08:17:53,173] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", "[2018-09-21 08:17:53,174] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/20d5b24f-63bf-4d03-a2a4-2249714c2e3c.json < /var/lib/heat-config/deployed/20d5b24f-63bf-4d03-a2a4-2249714c2e3c.notify.json", "[2018-09-21 08:17:53,571] (heat-config) [INFO] ", "[2018-09-21 08:17:53,571] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} > >TASK [Output for ComputeAllNodesDeployment] 
************************************ >Friday 21 September 2018 08:17:53 -0400 (0:00:00.770) 0:01:16.090 ****** >ok: [compute-0] => { > "msg": [ > { > "stderr": [ > "[2018-09-21 08:17:53,045] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/20d5b24f-63bf-4d03-a2a4-2249714c2e3c.json", > "[2018-09-21 08:17:53,173] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-09-21 08:17:53,173] (heat-config) [DEBUG] ", > "[2018-09-21 08:17:53,173] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", > "[2018-09-21 08:17:53,174] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/20d5b24f-63bf-4d03-a2a4-2249714c2e3c.json < /var/lib/heat-config/deployed/20d5b24f-63bf-4d03-a2a4-2249714c2e3c.notify.json", > "[2018-09-21 08:17:53,571] (heat-config) [INFO] ", > "[2018-09-21 08:17:53,571] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} > >TASK [Check-mode for Run deployment ComputeAllNodesDeployment] ***************** >Friday 21 September 2018 08:17:53 -0400 (0:00:00.077) 0:01:16.168 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Lookup deployment UUID] ************************************************** >Friday 21 September 2018 08:17:53 -0400 (0:00:00.036) 0:01:16.204 ****** >ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "351b5a3a-b587-4d7c-b79f-46c3f98c60f3"}, "changed": false} > >TASK [Render deployment file for ComputeAllNodesValidationDeployment] ********** >Friday 21 September 2018 08:17:53 -0400 (0:00:00.082) 0:01:16.287 ****** >changed: [compute-0] => {"changed": true, "checksum": "346726481a2a90d25618953e99e22929ccdfbd71", "dest": "/var/lib/heat-config/tripleo-config-download/ComputeAllNodesValidationDeployment-351b5a3a-b587-4d7c-b79f-46c3f98c60f3", "gid": 0, "group": "root", "md5sum": "e74e328492482046b8e68a0e43800d4c", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 4935, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532273.86-9375674760100/source", "state": "file", "uid": 0} > >TASK [Check if deployed file exists for ComputeAllNodesValidationDeployment] *** >Friday 21 September 2018 08:17:54 -0400 (0:00:00.547) 0:01:16.834 ****** >ok: [compute-0] => {"changed": false, "stat": {"exists": false}} > >TASK [Check previous deployment rc for ComputeAllNodesValidationDeployment] **** >Friday 21 September 2018 08:17:54 -0400 (0:00:00.239) 0:01:17.074 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Remove deployed file for ComputeAllNodesValidationDeployment when previous deployment failed] *** >Friday 21 September 2018 08:17:54 -0400 (0:00:00.044) 0:01:17.118 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Force remove deployed file for ComputeAllNodesValidationDeployment] ****** >Friday 21 September 2018 08:17:54 -0400 (0:00:00.046) 0:01:17.165 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Run deployment ComputeAllNodesValidationDeployment] ********************** >Friday 21 September 2018 08:17:54 -0400 (0:00:00.044) 0:01:17.209 ****** >changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code 
/var/lib/heat-config/deployed/351b5a3a-b587-4d7c-b79f-46c3f98c60f3.notify.json)", "delta": "0:00:00.952117", "end": "2018-09-21 08:17:55.867724", "rc": 0, "start": "2018-09-21 08:17:54.915607", "stderr": "[2018-09-21 08:17:54,940] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/351b5a3a-b587-4d7c-b79f-46c3f98c60f3.json\n[2018-09-21 08:17:55,469] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 172.17.1.17 for local network 172.17.1.0/24.\\nPing to 172.17.1.17 succeeded.\\nSUCCESS\\nTrying to ping 172.17.2.22 for local network 172.17.2.0/24.\\nPing to 172.17.2.22 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.16 for local network 172.17.3.0/24.\\nPing to 172.17.3.16 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.18 for local network 192.168.24.0/24.\\nPing to 192.168.24.18 succeeded.\\nSUCCESS\\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-09-21 08:17:55,469] (heat-config) [DEBUG] [2018-09-21 08:17:54,959] (heat-config) [INFO] ping_test_ips=172.17.3.16 172.17.4.14 172.17.1.17 172.17.2.22 10.0.0.117 192.168.24.18\n[2018-09-21 08:17:54,960] (heat-config) [INFO] validate_fqdn=False\n[2018-09-21 08:17:54,960] (heat-config) [INFO] validate_ntp=True\n[2018-09-21 08:17:54,960] (heat-config) [INFO] deploy_server_id=41df08ab-b98d-4a4e-b91d-2da74cba2af6\n[2018-09-21 08:17:54,960] (heat-config) [INFO] deploy_action=CREATE\n[2018-09-21 08:17:54,960] (heat-config) [INFO] deploy_stack_id=overcloud-ComputeAllNodesValidationDeployment-22j5eaf56asv-0-zp2w5mk7hosm/be95f989-e444-4249-b133-cf869ec35b9a\n[2018-09-21 08:17:54,960] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-09-21 08:17:54,960] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-09-21 08:17:54,960] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/351b5a3a-b587-4d7c-b79f-46c3f98c60f3\n[2018-09-21 08:17:55,464] (heat-config) [INFO] Trying to ping 172.17.1.17 for local network 172.17.1.0/24.\nPing to 172.17.1.17 succeeded.\nSUCCESS\nTrying to ping 172.17.2.22 for local network 172.17.2.0/24.\nPing to 172.17.2.22 succeeded.\nSUCCESS\nTrying to ping 172.17.3.16 for local network 172.17.3.0/24.\nPing to 172.17.3.16 succeeded.\nSUCCESS\nTrying to ping 192.168.24.18 for local network 192.168.24.0/24.\nPing to 192.168.24.18 succeeded.\nSUCCESS\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\nSUCCESS\n\n[2018-09-21 08:17:55,464] (heat-config) [DEBUG] \n[2018-09-21 08:17:55,464] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/351b5a3a-b587-4d7c-b79f-46c3f98c60f3\n\n[2018-09-21 08:17:55,469] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-09-21 08:17:55,469] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/351b5a3a-b587-4d7c-b79f-46c3f98c60f3.json < /var/lib/heat-config/deployed/351b5a3a-b587-4d7c-b79f-46c3f98c60f3.notify.json\n[2018-09-21 08:17:55,861] (heat-config) [INFO] \n[2018-09-21 08:17:55,861] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-09-21 08:17:54,940] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/351b5a3a-b587-4d7c-b79f-46c3f98c60f3.json", "[2018-09-21 08:17:55,469] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 172.17.1.17 for local network 172.17.1.0/24.\\nPing to 172.17.1.17 succeeded.\\nSUCCESS\\nTrying to ping 172.17.2.22 for local 
network 172.17.2.0/24.\\nPing to 172.17.2.22 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.16 for local network 172.17.3.0/24.\\nPing to 172.17.3.16 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.18 for local network 192.168.24.0/24.\\nPing to 192.168.24.18 succeeded.\\nSUCCESS\\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-09-21 08:17:55,469] (heat-config) [DEBUG] [2018-09-21 08:17:54,959] (heat-config) [INFO] ping_test_ips=172.17.3.16 172.17.4.14 172.17.1.17 172.17.2.22 10.0.0.117 192.168.24.18", "[2018-09-21 08:17:54,960] (heat-config) [INFO] validate_fqdn=False", "[2018-09-21 08:17:54,960] (heat-config) [INFO] validate_ntp=True", "[2018-09-21 08:17:54,960] (heat-config) [INFO] deploy_server_id=41df08ab-b98d-4a4e-b91d-2da74cba2af6", "[2018-09-21 08:17:54,960] (heat-config) [INFO] deploy_action=CREATE", "[2018-09-21 08:17:54,960] (heat-config) [INFO] deploy_stack_id=overcloud-ComputeAllNodesValidationDeployment-22j5eaf56asv-0-zp2w5mk7hosm/be95f989-e444-4249-b133-cf869ec35b9a", "[2018-09-21 08:17:54,960] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-09-21 08:17:54,960] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-09-21 08:17:54,960] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/351b5a3a-b587-4d7c-b79f-46c3f98c60f3", "[2018-09-21 08:17:55,464] (heat-config) [INFO] Trying to ping 172.17.1.17 for local network 172.17.1.0/24.", "Ping to 172.17.1.17 succeeded.", "SUCCESS", "Trying to ping 172.17.2.22 for local network 172.17.2.0/24.", "Ping to 172.17.2.22 succeeded.", "SUCCESS", "Trying to ping 172.17.3.16 for local network 172.17.3.0/24.", "Ping to 172.17.3.16 succeeded.", "SUCCESS", "Trying to ping 192.168.24.18 for local network 192.168.24.0/24.", "Ping to 192.168.24.18 succeeded.", "SUCCESS", "Trying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.", "SUCCESS", "", "[2018-09-21 08:17:55,464] (heat-config) [DEBUG] ", "[2018-09-21 08:17:55,464] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/351b5a3a-b587-4d7c-b79f-46c3f98c60f3", "", "[2018-09-21 08:17:55,469] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-09-21 08:17:55,469] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/351b5a3a-b587-4d7c-b79f-46c3f98c60f3.json < /var/lib/heat-config/deployed/351b5a3a-b587-4d7c-b79f-46c3f98c60f3.notify.json", "[2018-09-21 08:17:55,861] (heat-config) [INFO] ", "[2018-09-21 08:17:55,861] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} > >TASK [Output for ComputeAllNodesValidationDeployment] ************************** >Friday 21 September 2018 08:17:55 -0400 (0:00:01.175) 0:01:18.384 ****** >ok: [compute-0] => { > "msg": [ > { > "stderr": [ > "[2018-09-21 08:17:54,940] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/351b5a3a-b587-4d7c-b79f-46c3f98c60f3.json", > "[2018-09-21 08:17:55,469] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 172.17.1.17 for local network 172.17.1.0/24.\\nPing to 172.17.1.17 succeeded.\\nSUCCESS\\nTrying to ping 172.17.2.22 for local network 172.17.2.0/24.\\nPing to 172.17.2.22 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.16 for local network 172.17.3.0/24.\\nPing to 172.17.3.16 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.18 for local network 192.168.24.0/24.\\nPing to 192.168.24.18 succeeded.\\nSUCCESS\\nTrying to 
ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-09-21 08:17:55,469] (heat-config) [DEBUG] [2018-09-21 08:17:54,959] (heat-config) [INFO] ping_test_ips=172.17.3.16 172.17.4.14 172.17.1.17 172.17.2.22 10.0.0.117 192.168.24.18", > "[2018-09-21 08:17:54,960] (heat-config) [INFO] validate_fqdn=False", > "[2018-09-21 08:17:54,960] (heat-config) [INFO] validate_ntp=True", > "[2018-09-21 08:17:54,960] (heat-config) [INFO] deploy_server_id=41df08ab-b98d-4a4e-b91d-2da74cba2af6", > "[2018-09-21 08:17:54,960] (heat-config) [INFO] deploy_action=CREATE", > "[2018-09-21 08:17:54,960] (heat-config) [INFO] deploy_stack_id=overcloud-ComputeAllNodesValidationDeployment-22j5eaf56asv-0-zp2w5mk7hosm/be95f989-e444-4249-b133-cf869ec35b9a", > "[2018-09-21 08:17:54,960] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-09-21 08:17:54,960] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-09-21 08:17:54,960] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/351b5a3a-b587-4d7c-b79f-46c3f98c60f3", > "[2018-09-21 08:17:55,464] (heat-config) [INFO] Trying to ping 172.17.1.17 for local network 172.17.1.0/24.", > "Ping to 172.17.1.17 succeeded.", > "SUCCESS", > "Trying to ping 172.17.2.22 for local network 172.17.2.0/24.", > "Ping to 172.17.2.22 succeeded.", > "SUCCESS", > "Trying to ping 172.17.3.16 for local network 172.17.3.0/24.", > "Ping to 172.17.3.16 succeeded.", > "SUCCESS", > "Trying to ping 192.168.24.18 for local network 192.168.24.0/24.", > "Ping to 192.168.24.18 succeeded.", > "SUCCESS", > "Trying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.", > "SUCCESS", > "", > "[2018-09-21 08:17:55,464] (heat-config) [DEBUG] ", > "[2018-09-21 08:17:55,464] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/351b5a3a-b587-4d7c-b79f-46c3f98c60f3", > "", > "[2018-09-21 08:17:55,469] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-09-21 08:17:55,469] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/351b5a3a-b587-4d7c-b79f-46c3f98c60f3.json < /var/lib/heat-config/deployed/351b5a3a-b587-4d7c-b79f-46c3f98c60f3.notify.json", > "[2018-09-21 08:17:55,861] (heat-config) [INFO] ", > "[2018-09-21 08:17:55,861] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} > >TASK [Check-mode for Run deployment ComputeAllNodesValidationDeployment] ******* >Friday 21 September 2018 08:17:56 -0400 (0:00:00.143) 0:01:18.528 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Lookup deployment UUID] ************************************************** >Friday 21 September 2018 08:17:56 -0400 (0:00:00.033) 0:01:18.561 ****** >ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "cdeb36e2-025f-4b68-9fa2-8d17649246c6"}, "changed": false} > >TASK [Render deployment file for ComputeHostPrepDeployment] ******************** >Friday 21 September 2018 08:17:56 -0400 (0:00:00.152) 0:01:18.714 ****** >changed: [compute-0] => {"changed": true, "checksum": "be80eed770dc09eb22837eff7841f4dab4100212", "dest": "/var/lib/heat-config/tripleo-config-download/ComputeHostPrepDeployment-cdeb36e2-025f-4b68-9fa2-8d17649246c6", "gid": 0, "group": "root", "md5sum": "265ad77ad15108d3ecc63dc08490cb31", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 20794, "src": 
"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532276.36-56268267027927/source", "state": "file", "uid": 0} > >TASK [Check if deployed file exists for ComputeHostPrepDeployment] ************* >Friday 21 September 2018 08:17:56 -0400 (0:00:00.643) 0:01:19.358 ****** >ok: [compute-0] => {"changed": false, "stat": {"exists": false}} > >TASK [Check previous deployment rc for ComputeHostPrepDeployment] ************** >Friday 21 September 2018 08:17:57 -0400 (0:00:00.286) 0:01:19.645 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Remove deployed file for ComputeHostPrepDeployment when previous deployment failed] *** >Friday 21 September 2018 08:17:57 -0400 (0:00:00.047) 0:01:19.692 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Force remove deployed file for ComputeHostPrepDeployment] **************** >Friday 21 September 2018 08:17:57 -0400 (0:00:00.086) 0:01:19.778 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Run deployment ComputeHostPrepDeployment] ******************************** >Friday 21 September 2018 08:17:57 -0400 (0:00:00.040) 0:01:19.819 ****** >changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/cdeb36e2-025f-4b68-9fa2-8d17649246c6.notify.json)", "delta": "0:00:05.961950", "end": "2018-09-21 08:18:03.486613", "rc": 0, "start": "2018-09-21 08:17:57.524663", "stderr": "[2018-09-21 08:17:57,549] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/cdeb36e2-025f-4b68-9fa2-8d17649246c6.json\n[2018-09-21 08:18:03,082] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=3 changed=2 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-09-21 08:18:03,082] (heat-config) [DEBUG] [2018-09-21 08:17:57,573] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/cdeb36e2-025f-4b68-9fa2-8d17649246c6_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/cdeb36e2-025f-4b68-9fa2-8d17649246c6_variables.json\n[2018-09-21 08:18:03,078] (heat-config) [INFO] Return code 0\n[2018-09-21 08:18:03,078] (heat-config) [INFO] \nPLAY [localhost] ***************************************************************\n\nTASK [Gathering Facts] *********************************************************\nok: [localhost]\n\nTASK [Create /var/lib/docker-puppet] *******************************************\nchanged: [localhost]\n\nTASK [Write docker-puppet.py] **************************************************\nchanged: [localhost]\n\nPLAY RECAP *********************************************************************\nlocalhost : ok=3 changed=2 unreachable=0 failed=0 \n\n\n[2018-09-21 08:18:03,078] (heat-config) [INFO] Completed 
/var/lib/heat-config/heat-config-ansible/cdeb36e2-025f-4b68-9fa2-8d17649246c6_playbook.yaml\n\n[2018-09-21 08:18:03,082] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible\n[2018-09-21 08:18:03,082] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/cdeb36e2-025f-4b68-9fa2-8d17649246c6.json < /var/lib/heat-config/deployed/cdeb36e2-025f-4b68-9fa2-8d17649246c6.notify.json\n[2018-09-21 08:18:03,480] (heat-config) [INFO] \n[2018-09-21 08:18:03,480] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-09-21 08:17:57,549] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/cdeb36e2-025f-4b68-9fa2-8d17649246c6.json", "[2018-09-21 08:18:03,082] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=3 changed=2 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-09-21 08:18:03,082] (heat-config) [DEBUG] [2018-09-21 08:17:57,573] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/cdeb36e2-025f-4b68-9fa2-8d17649246c6_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/cdeb36e2-025f-4b68-9fa2-8d17649246c6_variables.json", "[2018-09-21 08:18:03,078] (heat-config) [INFO] Return code 0", "[2018-09-21 08:18:03,078] (heat-config) [INFO] ", "PLAY [localhost] ***************************************************************", "", "TASK [Gathering Facts] *********************************************************", "ok: [localhost]", "", "TASK [Create /var/lib/docker-puppet] *******************************************", "changed: [localhost]", "", "TASK [Write docker-puppet.py] **************************************************", "changed: [localhost]", "", "PLAY RECAP *********************************************************************", "localhost : ok=3 changed=2 unreachable=0 failed=0 ", "", "", "[2018-09-21 08:18:03,078] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/cdeb36e2-025f-4b68-9fa2-8d17649246c6_playbook.yaml", "", "[2018-09-21 08:18:03,082] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible", "[2018-09-21 08:18:03,082] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/cdeb36e2-025f-4b68-9fa2-8d17649246c6.json < /var/lib/heat-config/deployed/cdeb36e2-025f-4b68-9fa2-8d17649246c6.notify.json", "[2018-09-21 08:18:03,480] (heat-config) [INFO] ", "[2018-09-21 08:18:03,480] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} > >TASK [Output for ComputeHostPrepDeployment] ************************************ >Friday 21 September 2018 08:18:03 -0400 (0:00:06.183) 0:01:26.002 ****** >ok: [compute-0] => { > "msg": [ > { > "stderr": [ > "[2018-09-21 08:17:57,549] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/cdeb36e2-025f-4b68-9fa2-8d17649246c6.json", > "[2018-09-21 08:18:03,082] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] 
***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=3 changed=2 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-09-21 08:18:03,082] (heat-config) [DEBUG] [2018-09-21 08:17:57,573] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/cdeb36e2-025f-4b68-9fa2-8d17649246c6_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/cdeb36e2-025f-4b68-9fa2-8d17649246c6_variables.json", > "[2018-09-21 08:18:03,078] (heat-config) [INFO] Return code 0", > "[2018-09-21 08:18:03,078] (heat-config) [INFO] ", > "PLAY [localhost] ***************************************************************", > "", > "TASK [Gathering Facts] *********************************************************", > "ok: [localhost]", > "", > "TASK [Create /var/lib/docker-puppet] *******************************************", > "changed: [localhost]", > "", > "TASK [Write docker-puppet.py] **************************************************", > "changed: [localhost]", > "", > "PLAY RECAP *********************************************************************", > "localhost : ok=3 changed=2 unreachable=0 failed=0 ", > "", > "", > "[2018-09-21 08:18:03,078] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/cdeb36e2-025f-4b68-9fa2-8d17649246c6_playbook.yaml", > "", > "[2018-09-21 08:18:03,082] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible", > "[2018-09-21 08:18:03,082] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/cdeb36e2-025f-4b68-9fa2-8d17649246c6.json < /var/lib/heat-config/deployed/cdeb36e2-025f-4b68-9fa2-8d17649246c6.notify.json", > "[2018-09-21 08:18:03,480] (heat-config) [INFO] ", > "[2018-09-21 08:18:03,480] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} > >TASK [Check-mode for Run deployment ComputeHostPrepDeployment] ***************** >Friday 21 September 2018 08:18:03 -0400 (0:00:00.083) 0:01:26.086 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Lookup deployment UUID] ************************************************** >Friday 21 September 2018 08:18:03 -0400 (0:00:00.040) 0:01:26.127 ****** >ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "f9c74fc9-d904-47c7-916b-40d995cb0032"}, "changed": false} > >TASK [Render deployment file for ComputeArtifactsDeploy] *********************** >Friday 21 September 2018 08:18:03 -0400 (0:00:00.088) 0:01:26.216 ****** >changed: [compute-0] => {"changed": true, "checksum": "f2c0fbdc07c081ae7f0dc3994abbe1458f805adc", "dest": "/var/lib/heat-config/tripleo-config-download/ComputeArtifactsDeploy-f9c74fc9-d904-47c7-916b-40d995cb0032", "gid": 0, "group": "root", "md5sum": "f62df66ba8eb9aa41771bf545f7b7448", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2015, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532283.8-272612946196062/source", "state": "file", "uid": 0} > >TASK [Check if deployed file exists for ComputeArtifactsDeploy] **************** 
>Friday 21 September 2018 08:18:04 -0400 (0:00:00.532) 0:01:26.748 ****** >ok: [compute-0] => {"changed": false, "stat": {"exists": false}} > >TASK [Check previous deployment rc for ComputeArtifactsDeploy] ***************** >Friday 21 September 2018 08:18:04 -0400 (0:00:00.220) 0:01:26.969 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Remove deployed file for ComputeArtifactsDeploy when previous deployment failed] *** >Friday 21 September 2018 08:18:04 -0400 (0:00:00.051) 0:01:27.020 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Force remove deployed file for ComputeArtifactsDeploy] ******************* >Friday 21 September 2018 08:18:04 -0400 (0:00:00.051) 0:01:27.072 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Run deployment ComputeArtifactsDeploy] *********************************** >Friday 21 September 2018 08:18:04 -0400 (0:00:00.053) 0:01:27.126 ****** >changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/f9c74fc9-d904-47c7-916b-40d995cb0032.notify.json)", "delta": "0:00:00.467796", "end": "2018-09-21 08:18:05.329507", "rc": 0, "start": "2018-09-21 08:18:04.861711", "stderr": "[2018-09-21 08:18:04,887] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/f9c74fc9-d904-47c7-916b-40d995cb0032.json\n[2018-09-21 08:18:04,919] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-09-21 08:18:04,919] (heat-config) [DEBUG] [2018-09-21 08:18:04,909] (heat-config) [INFO] artifact_urls=\n[2018-09-21 08:18:04,910] (heat-config) [INFO] deploy_server_id=41df08ab-b98d-4a4e-b91d-2da74cba2af6\n[2018-09-21 08:18:04,910] (heat-config) [INFO] deploy_action=CREATE\n[2018-09-21 08:18:04,910] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-htpxhfrkye7c-ComputeArtifactsDeploy-uy3xoa22xtxu-0-eyrzm2kmjhtx/f29dfd49-9a83-478f-9dad-230b59f0c048\n[2018-09-21 08:18:04,910] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-09-21 08:18:04,910] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-09-21 08:18:04,910] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/f9c74fc9-d904-47c7-916b-40d995cb0032\n[2018-09-21 08:18:04,915] (heat-config) [INFO] No artifact_urls was set. Skipping...\n\n[2018-09-21 08:18:04,915] (heat-config) [DEBUG] \n[2018-09-21 08:18:04,915] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/f9c74fc9-d904-47c7-916b-40d995cb0032\n\n[2018-09-21 08:18:04,919] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-09-21 08:18:04,919] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/f9c74fc9-d904-47c7-916b-40d995cb0032.json < /var/lib/heat-config/deployed/f9c74fc9-d904-47c7-916b-40d995cb0032.notify.json\n[2018-09-21 08:18:05,323] (heat-config) [INFO] \n[2018-09-21 08:18:05,323] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-09-21 08:18:04,887] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/f9c74fc9-d904-47c7-916b-40d995cb0032.json", "[2018-09-21 08:18:04,919] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. 
Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-09-21 08:18:04,919] (heat-config) [DEBUG] [2018-09-21 08:18:04,909] (heat-config) [INFO] artifact_urls=", "[2018-09-21 08:18:04,910] (heat-config) [INFO] deploy_server_id=41df08ab-b98d-4a4e-b91d-2da74cba2af6", "[2018-09-21 08:18:04,910] (heat-config) [INFO] deploy_action=CREATE", "[2018-09-21 08:18:04,910] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-htpxhfrkye7c-ComputeArtifactsDeploy-uy3xoa22xtxu-0-eyrzm2kmjhtx/f29dfd49-9a83-478f-9dad-230b59f0c048", "[2018-09-21 08:18:04,910] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-09-21 08:18:04,910] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-09-21 08:18:04,910] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/f9c74fc9-d904-47c7-916b-40d995cb0032", "[2018-09-21 08:18:04,915] (heat-config) [INFO] No artifact_urls was set. Skipping...", "", "[2018-09-21 08:18:04,915] (heat-config) [DEBUG] ", "[2018-09-21 08:18:04,915] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/f9c74fc9-d904-47c7-916b-40d995cb0032", "", "[2018-09-21 08:18:04,919] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-09-21 08:18:04,919] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/f9c74fc9-d904-47c7-916b-40d995cb0032.json < /var/lib/heat-config/deployed/f9c74fc9-d904-47c7-916b-40d995cb0032.notify.json", "[2018-09-21 08:18:05,323] (heat-config) [INFO] ", "[2018-09-21 08:18:05,323] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} > >TASK [Output for ComputeArtifactsDeploy] *************************************** >Friday 21 September 2018 08:18:05 -0400 (0:00:00.714) 0:01:27.841 ****** >ok: [compute-0] => { > "msg": [ > { > "stderr": [ > "[2018-09-21 08:18:04,887] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/f9c74fc9-d904-47c7-916b-40d995cb0032.json", > "[2018-09-21 08:18:04,919] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-09-21 08:18:04,919] (heat-config) [DEBUG] [2018-09-21 08:18:04,909] (heat-config) [INFO] artifact_urls=", > "[2018-09-21 08:18:04,910] (heat-config) [INFO] deploy_server_id=41df08ab-b98d-4a4e-b91d-2da74cba2af6", > "[2018-09-21 08:18:04,910] (heat-config) [INFO] deploy_action=CREATE", > "[2018-09-21 08:18:04,910] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-htpxhfrkye7c-ComputeArtifactsDeploy-uy3xoa22xtxu-0-eyrzm2kmjhtx/f29dfd49-9a83-478f-9dad-230b59f0c048", > "[2018-09-21 08:18:04,910] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-09-21 08:18:04,910] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-09-21 08:18:04,910] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/f9c74fc9-d904-47c7-916b-40d995cb0032", > "[2018-09-21 08:18:04,915] (heat-config) [INFO] No artifact_urls was set. 
Skipping...", > "", > "[2018-09-21 08:18:04,915] (heat-config) [DEBUG] ", > "[2018-09-21 08:18:04,915] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/f9c74fc9-d904-47c7-916b-40d995cb0032", > "", > "[2018-09-21 08:18:04,919] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-09-21 08:18:04,919] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/f9c74fc9-d904-47c7-916b-40d995cb0032.json < /var/lib/heat-config/deployed/f9c74fc9-d904-47c7-916b-40d995cb0032.notify.json", > "[2018-09-21 08:18:05,323] (heat-config) [INFO] ", > "[2018-09-21 08:18:05,323] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} > >TASK [Check-mode for Run deployment ComputeArtifactsDeploy] ******************** >Friday 21 September 2018 08:18:05 -0400 (0:00:00.074) 0:01:27.916 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Lookup deployment UUID] ************************************************** >Friday 21 September 2018 08:18:05 -0400 (0:00:00.040) 0:01:27.956 ****** >ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "9330514b-92be-462a-90a5-a3a213aaa5f8"}, "changed": false} > >TASK [Render deployment file for CephStorageUpgradeInitDeployment] ************* >Friday 21 September 2018 08:18:05 -0400 (0:00:00.091) 0:01:28.048 ****** >changed: [ceph-0] => {"changed": true, "checksum": "6129e9fe3ea0f5dcb1843374cef56d205c99b8fa", "dest": "/var/lib/heat-config/tripleo-config-download/CephStorageUpgradeInitDeployment-9330514b-92be-462a-90a5-a3a213aaa5f8", "gid": 0, "group": "root", "md5sum": "e5d21f8e31f8a76bdd42e0c9e7bea295", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1186, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532285.62-232002579113813/source", "state": "file", "uid": 0} > >TASK [Check if deployed file exists for CephStorageUpgradeInitDeployment] ****** >Friday 21 September 2018 08:18:06 -0400 (0:00:00.523) 0:01:28.571 ****** >ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} > >TASK [Check previous deployment rc for CephStorageUpgradeInitDeployment] ******* >Friday 21 September 2018 08:18:06 -0400 (0:00:00.227) 0:01:28.799 ****** >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Remove deployed file for CephStorageUpgradeInitDeployment when previous deployment failed] *** >Friday 21 September 2018 08:18:06 -0400 (0:00:00.042) 0:01:28.842 ****** >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Force remove deployed file for CephStorageUpgradeInitDeployment] ********* >Friday 21 September 2018 08:18:06 -0400 (0:00:00.042) 0:01:28.885 ****** >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Run deployment CephStorageUpgradeInitDeployment] ************************* >Friday 21 September 2018 08:18:06 -0400 (0:00:00.051) 0:01:28.936 ****** >changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/9330514b-92be-462a-90a5-a3a213aaa5f8.notify.json)", "delta": "0:00:00.485454", "end": "2018-09-21 08:18:06.067815", "rc": 0, "start": "2018-09-21 08:18:05.582361", "stderr": "[2018-09-21 08:18:05,604] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/9330514b-92be-462a-90a5-a3a213aaa5f8.json\n[2018-09-21 
08:18:05,634] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-09-21 08:18:05,635] (heat-config) [DEBUG] [2018-09-21 08:18:05,627] (heat-config) [INFO] deploy_server_id=f4c5361d-7430-47e6-b3a8-908850a79a79\n[2018-09-21 08:18:05,628] (heat-config) [INFO] deploy_action=CREATE\n[2018-09-21 08:18:05,628] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorage-bre5qklthc6f-0-gxpifqih23ze-CephStorageUpgradeInitDeployment-gnb6uzc464qv/e2130fe2-b503-40ac-b8b0-f526ab3770e4\n[2018-09-21 08:18:05,628] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-09-21 08:18:05,628] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-09-21 08:18:05,628] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/9330514b-92be-462a-90a5-a3a213aaa5f8\n[2018-09-21 08:18:05,631] (heat-config) [INFO] \n[2018-09-21 08:18:05,631] (heat-config) [DEBUG] \n[2018-09-21 08:18:05,631] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/9330514b-92be-462a-90a5-a3a213aaa5f8\n\n[2018-09-21 08:18:05,635] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-09-21 08:18:05,635] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/9330514b-92be-462a-90a5-a3a213aaa5f8.json < /var/lib/heat-config/deployed/9330514b-92be-462a-90a5-a3a213aaa5f8.notify.json\n[2018-09-21 08:18:06,061] (heat-config) [INFO] \n[2018-09-21 08:18:06,061] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-09-21 08:18:05,604] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/9330514b-92be-462a-90a5-a3a213aaa5f8.json", "[2018-09-21 08:18:05,634] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-09-21 08:18:05,635] (heat-config) [DEBUG] [2018-09-21 08:18:05,627] (heat-config) [INFO] deploy_server_id=f4c5361d-7430-47e6-b3a8-908850a79a79", "[2018-09-21 08:18:05,628] (heat-config) [INFO] deploy_action=CREATE", "[2018-09-21 08:18:05,628] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorage-bre5qklthc6f-0-gxpifqih23ze-CephStorageUpgradeInitDeployment-gnb6uzc464qv/e2130fe2-b503-40ac-b8b0-f526ab3770e4", "[2018-09-21 08:18:05,628] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-09-21 08:18:05,628] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-09-21 08:18:05,628] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/9330514b-92be-462a-90a5-a3a213aaa5f8", "[2018-09-21 08:18:05,631] (heat-config) [INFO] ", "[2018-09-21 08:18:05,631] (heat-config) [DEBUG] ", "[2018-09-21 08:18:05,631] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/9330514b-92be-462a-90a5-a3a213aaa5f8", "", "[2018-09-21 08:18:05,635] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-09-21 08:18:05,635] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/9330514b-92be-462a-90a5-a3a213aaa5f8.json < /var/lib/heat-config/deployed/9330514b-92be-462a-90a5-a3a213aaa5f8.notify.json", "[2018-09-21 08:18:06,061] (heat-config) [INFO] ", "[2018-09-21 08:18:06,061] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} > >TASK [Output for CephStorageUpgradeInitDeployment] ***************************** >Friday 21 September 2018 08:18:07 -0400 (0:00:00.702) 0:01:29.639 ****** >ok: [ceph-0] => { > "msg": [ > { > "stderr": [ > "[2018-09-21 08:18:05,604] (heat-config) [DEBUG] Running 
/usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/9330514b-92be-462a-90a5-a3a213aaa5f8.json", > "[2018-09-21 08:18:05,634] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-09-21 08:18:05,635] (heat-config) [DEBUG] [2018-09-21 08:18:05,627] (heat-config) [INFO] deploy_server_id=f4c5361d-7430-47e6-b3a8-908850a79a79", > "[2018-09-21 08:18:05,628] (heat-config) [INFO] deploy_action=CREATE", > "[2018-09-21 08:18:05,628] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorage-bre5qklthc6f-0-gxpifqih23ze-CephStorageUpgradeInitDeployment-gnb6uzc464qv/e2130fe2-b503-40ac-b8b0-f526ab3770e4", > "[2018-09-21 08:18:05,628] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-09-21 08:18:05,628] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-09-21 08:18:05,628] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/9330514b-92be-462a-90a5-a3a213aaa5f8", > "[2018-09-21 08:18:05,631] (heat-config) [INFO] ", > "[2018-09-21 08:18:05,631] (heat-config) [DEBUG] ", > "[2018-09-21 08:18:05,631] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/9330514b-92be-462a-90a5-a3a213aaa5f8", > "", > "[2018-09-21 08:18:05,635] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-09-21 08:18:05,635] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/9330514b-92be-462a-90a5-a3a213aaa5f8.json < /var/lib/heat-config/deployed/9330514b-92be-462a-90a5-a3a213aaa5f8.notify.json", > "[2018-09-21 08:18:06,061] (heat-config) [INFO] ", > "[2018-09-21 08:18:06,061] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} > >TASK [Check-mode for Run deployment CephStorageUpgradeInitDeployment] ********** >Friday 21 September 2018 08:18:07 -0400 (0:00:00.078) 0:01:29.717 ****** >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Lookup deployment UUID] ************************************************** >Friday 21 September 2018 08:18:07 -0400 (0:00:00.037) 0:01:29.755 ****** >ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "3c98b5df-069e-475e-9880-691d9cdb6305"}, "changed": false} > >TASK [Render deployment file for CephStorageDeployment] ************************ >Friday 21 September 2018 08:18:07 -0400 (0:00:00.124) 0:01:29.879 ****** >changed: [ceph-0] => {"changed": true, "checksum": "15d9292bdd0a2fde1b5bfd399151083ba33eab79", "dest": "/var/lib/heat-config/tripleo-config-download/CephStorageDeployment-3c98b5df-069e-475e-9880-691d9cdb6305", "gid": 0, "group": "root", "md5sum": "8cf653b5e731aa3e92f2dca6a8a50e9c", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 9076, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532287.5-7944798840538/source", "state": "file", "uid": 0} > >TASK [Check if deployed file exists for CephStorageDeployment] ***************** >Friday 21 September 2018 08:18:07 -0400 (0:00:00.602) 0:01:30.481 ****** >ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} > >TASK [Check previous deployment rc for CephStorageDeployment] ****************** >Friday 21 September 2018 08:18:08 -0400 (0:00:00.230) 0:01:30.712 ****** >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Remove deployed file for CephStorageDeployment when previous deployment failed] *** >Friday 21 September 2018 08:18:08 -0400 (0:00:00.042) 0:01:30.754 ****** >skipping: 
[ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Force remove deployed file for CephStorageDeployment] ******************** >Friday 21 September 2018 08:18:08 -0400 (0:00:00.043) 0:01:30.798 ****** >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Run deployment CephStorageDeployment] ************************************ >Friday 21 September 2018 08:18:08 -0400 (0:00:00.046) 0:01:30.845 ****** >changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/3c98b5df-069e-475e-9880-691d9cdb6305.notify.json)", "delta": "0:00:00.597671", "end": "2018-09-21 08:18:08.096163", "rc": 0, "start": "2018-09-21 08:18:07.498492", "stderr": "[2018-09-21 08:18:07,525] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/3c98b5df-069e-475e-9880-691d9cdb6305.json\n[2018-09-21 08:18:07,665] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-09-21 08:18:07,666] (heat-config) [DEBUG] \n[2018-09-21 08:18:07,666] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera\n[2018-09-21 08:18:07,666] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/3c98b5df-069e-475e-9880-691d9cdb6305.json < /var/lib/heat-config/deployed/3c98b5df-069e-475e-9880-691d9cdb6305.notify.json\n[2018-09-21 08:18:08,089] (heat-config) [INFO] \n[2018-09-21 08:18:08,090] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-09-21 08:18:07,525] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/3c98b5df-069e-475e-9880-691d9cdb6305.json", "[2018-09-21 08:18:07,665] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-09-21 08:18:07,666] (heat-config) [DEBUG] ", "[2018-09-21 08:18:07,666] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", "[2018-09-21 08:18:07,666] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/3c98b5df-069e-475e-9880-691d9cdb6305.json < /var/lib/heat-config/deployed/3c98b5df-069e-475e-9880-691d9cdb6305.notify.json", "[2018-09-21 08:18:08,089] (heat-config) [INFO] ", "[2018-09-21 08:18:08,090] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} > >TASK [Output for CephStorageDeployment] **************************************** >Friday 21 September 2018 08:18:09 -0400 (0:00:00.821) 0:01:31.666 ****** >ok: [ceph-0] => { > "msg": [ > { > "stderr": [ > "[2018-09-21 08:18:07,525] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/3c98b5df-069e-475e-9880-691d9cdb6305.json", > "[2018-09-21 08:18:07,665] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-09-21 08:18:07,666] (heat-config) [DEBUG] ", > "[2018-09-21 08:18:07,666] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", > "[2018-09-21 08:18:07,666] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/3c98b5df-069e-475e-9880-691d9cdb6305.json < /var/lib/heat-config/deployed/3c98b5df-069e-475e-9880-691d9cdb6305.notify.json", > "[2018-09-21 08:18:08,089] (heat-config) [INFO] ", > "[2018-09-21 08:18:08,090] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} > >TASK [Check-mode for Run deployment CephStorageDeployment] 
********************* >Friday 21 September 2018 08:18:09 -0400 (0:00:00.078) 0:01:31.745 ****** >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Lookup deployment UUID] ************************************************** >Friday 21 September 2018 08:18:09 -0400 (0:00:00.037) 0:01:31.783 ****** >ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "0b0d3a1e-bb12-4c72-b0fa-fc6fb24b52ff"}, "changed": false} > >TASK [Render deployment file for CephStorageHostsDeployment] ******************* >Friday 21 September 2018 08:18:09 -0400 (0:00:00.089) 0:01:31.872 ****** >changed: [ceph-0] => {"changed": true, "checksum": "9ec2ee073efe2a010efc92691d001e5b9b40ca03", "dest": "/var/lib/heat-config/tripleo-config-download/CephStorageHostsDeployment-0b0d3a1e-bb12-4c72-b0fa-fc6fb24b52ff", "gid": 0, "group": "root", "md5sum": "68deb1f2d3ece2c7a860aa30c13d6b10", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 4427, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532289.46-134851825779435/source", "state": "file", "uid": 0} > >TASK [Check if deployed file exists for CephStorageHostsDeployment] ************ >Friday 21 September 2018 08:18:09 -0400 (0:00:00.574) 0:01:32.447 ****** >ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} > >TASK [Check previous deployment rc for CephStorageHostsDeployment] ************* >Friday 21 September 2018 08:18:10 -0400 (0:00:00.222) 0:01:32.669 ****** >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Remove deployed file for CephStorageHostsDeployment when previous deployment failed] *** >Friday 21 September 2018 08:18:10 -0400 (0:00:00.040) 0:01:32.710 ****** >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Force remove deployed file for CephStorageHostsDeployment] *************** >Friday 21 September 2018 08:18:10 -0400 (0:00:00.038) 0:01:32.749 ****** >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Run deployment CephStorageHostsDeployment] ******************************* >Friday 21 September 2018 08:18:10 -0400 (0:00:00.037) 0:01:32.787 ****** >changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/0b0d3a1e-bb12-4c72-b0fa-fc6fb24b52ff.notify.json)", "delta": "0:00:00.500349", "end": "2018-09-21 08:18:09.941215", "rc": 0, "start": "2018-09-21 08:18:09.440866", "stderr": "[2018-09-21 08:18:09,467] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/0b0d3a1e-bb12-4c72-b0fa-fc6fb24b52ff.json\n[2018-09-21 08:18:09,521] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' 
-z '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 
ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 
ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 
ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain 
ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 0}\n[2018-09-21 08:18:09,522] (heat-config) [DEBUG] [2018-09-21 08:18:09,491] (heat-config) [INFO] hosts=192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.13 overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.111 overcloud.localdomain\n172.17.1.17 controller-0.localdomain controller-0\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.117 controller-0.external.localdomain controller-0.external\n192.168.24.18 controller-0.management.localdomain controller-0.management\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.11 ceph-0.localdomain ceph-0\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.15 ceph-0.storagemgmt.localdomain 
ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane\n[2018-09-21 08:18:09,491] (heat-config) [INFO] deploy_server_id=f4c5361d-7430-47e6-b3a8-908850a79a79\n[2018-09-21 08:18:09,491] (heat-config) [INFO] deploy_action=CREATE\n[2018-09-21 08:18:09,491] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorageHostsDeployment-hdyfr25l7hic-0-z3br5ylt2w4p/471be5f5-3218-4357-8374-0acb15dadb68\n[2018-09-21 08:18:09,492] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-09-21 08:18:09,492] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-09-21 08:18:09,492] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/0b0d3a1e-bb12-4c72-b0fa-fc6fb24b52ff\n[2018-09-21 08:18:09,517] (heat-config) [INFO] \n[2018-09-21 08:18:09,517] (heat-config) [DEBUG] + set -o pipefail\n+ '[' '!' -z '192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.13 overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.111 overcloud.localdomain\n172.17.1.17 controller-0.localdomain controller-0\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.117 controller-0.external.localdomain controller-0.external\n192.168.24.18 controller-0.management.localdomain controller-0.management\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.12 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.8 compute-0.external.localdomain compute-0.external\n192.168.24.8 compute-0.management.localdomain compute-0.management\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.11 ceph-0.localdomain ceph-0\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.13 overcloud.storagemgmt.localdomain\n172.17.1.15 overcloud.internalapi.localdomain\n10.0.0.111 overcloud.localdomain\n172.17.1.17 controller-0.localdomain controller-0\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.117 controller-0.external.localdomain 
controller-0.external
192.168.24.18 controller-0.management.localdomain controller-0.management
192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane

172.17.1.12 compute-0.localdomain compute-0
172.17.3.10 compute-0.storage.localdomain compute-0.storage
192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt
172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi
172.17.2.21 compute-0.tenant.localdomain compute-0.tenant
192.168.24.8 compute-0.external.localdomain compute-0.external
192.168.24.8 compute-0.management.localdomain compute-0.management
192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane

172.17.3.11 ceph-0.localdomain ceph-0
172.17.3.11 ceph-0.storage.localdomain ceph-0.storage
172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt
192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi
192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant
192.168.24.6 ceph-0.external.localdomain ceph-0.external
192.168.24.6 ceph-0.management.localdomain ceph-0.management
192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'
+ local file=/etc/cloud/templates/hosts.debian.tmpl
+ local 'entries=[... same host-entry block as above ...]'
+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'
+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl
++ hostname -s
+ sed -i /ceph-0/d /etc/cloud/templates/hosts.debian.tmpl
+ echo -ne '\n# HEAT_HOSTS_START - Do not edit manually within this section!\n'
+ echo '[... same host-entry block as above ...]'
+ echo -ne '# HEAT_HOSTS_END\n\n'
[... the write_entries sequence then repeats verbatim for /etc/cloud/templates/hosts.freebsd.tmpl, hosts.redhat.tmpl and hosts.suse.tmpl, and finally for /etc/hosts, with the same host-entry block each time; duplicates elided ...]
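The xtrace above is the hosts-config hook writing a marker-delimited block of overcloud host entries into every cloud-init hosts template and into /etc/hosts. The sketch below reconstructs that logic from this trace alone: the function name, marker strings and commands appear in the trace, but the branch taken when a HEAT_HOSTS_START marker is already present is never exercised here and is left as a placeholder, and the real script shipped in tripleo-heat-templates may differ in detail.

#!/bin/bash
set -o pipefail

write_entries() {
    local file="$1"
    local entries="$2"
    # Templates that are not present on the image are skipped.
    if [ ! -f "$file" ]; then
        return 0
    fi
    if grep -q "^# HEAT_HOSTS_START" "$file"; then
        # A managed block already exists. The trace never takes this
        # branch, so its behaviour (refreshing the block in place) is
        # assumed, not observed.
        :
    else
        # Drop stale lines for this host, then append a fresh
        # marker-delimited block with all overcloud entries. The >>
        # redirections are inferred; xtrace does not show them.
        sed -i "/$(hostname -s)/d" "$file"
        echo -ne "\n# HEAT_HOSTS_START - Do not edit manually within this section!\n" >> "$file"
        echo "$entries" >> "$file"
        echo -ne "# HEAT_HOSTS_END\n\n" >> "$file"
    fi
}

# Driver, as seen at the top of the trace: $hosts carries the entry
# block (supplied by the deployment input in the real hook) and is
# written to every cloud-init hosts template plus /etc/hosts.
if [ ! -z "$hosts" ]; then
    for tmpl in /etc/cloud/templates/hosts.*.tmpl ; do
        write_entries "$tmpl" "$hosts"
    done
    write_entries "/etc/hosts" "$hosts"
fi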
[2018-09-21 08:18:09,517] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/0b0d3a1e-bb12-4c72-b0fa-fc6fb24b52ff

[2018-09-21 08:18:09,522] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script
[2018-09-21 08:18:09,523] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/0b0d3a1e-bb12-4c72-b0fa-fc6fb24b52ff.json < /var/lib/heat-config/deployed/0b0d3a1e-bb12-4c72-b0fa-fc6fb24b52ff.notify.json
[2018-09-21 08:18:09,934] (heat-config) [INFO]
[2018-09-21 08:18:09,934] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-09-21 08:18:09,467] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/0b0d3a1e-bb12-4c72-b0fa-fc6fb24b52ff.json", "[2018-09-21 08:18:09,521] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"
+ set -o pipefail [... deploy_stderr then repeats the identical write_entries trace for all four hosts.*.tmpl templates and /etc/hosts, escaped; elided as a verbatim duplicate ...]\", \"deploy_status_code\": 0}",
"[2018-09-21 08:18:09,522] (heat-config) [DEBUG] [2018-09-21 08:18:09,491] (heat-config) [INFO] hosts=[... same host-entry block as above, one array item per line ...]",
"[2018-09-21 08:18:09,491] (heat-config) [INFO] deploy_server_id=f4c5361d-7430-47e6-b3a8-908850a79a79",
"[2018-09-21 08:18:09,491] (heat-config) [INFO] deploy_action=CREATE",
"[2018-09-21 08:18:09,491] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorageHostsDeployment-hdyfr25l7hic-0-z3br5ylt2w4p/471be5f5-3218-4357-8374-0acb15dadb68",
"[2018-09-21 08:18:09,492] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment",
"[2018-09-21 08:18:09,492] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL",
"[2018-09-21 08:18:09,492] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/0b0d3a1e-bb12-4c72-b0fa-fc6fb24b52ff",
"[2018-09-21 08:18:09,517] (heat-config) [INFO] ",
"[2018-09-21 08:18:09,517] (heat-config) [DEBUG] + set -o pipefail",
[... the remaining stderr_lines repeat the same trace split item by item; elided through the start of the hosts.freebsd.tmpl iteration ...]
"+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.7 overcloud.ctlplane.localdomain",
[... controller-0 and compute-0 entry items elided ...]
"192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "",
"172.17.3.11 ceph-0.localdomain ceph-0", "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.freebsd.tmpl", "+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.13 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.111 overcloud.localdomain", "172.17.1.17 controller-0.localdomain controller-0", "172.17.3.16 controller-0.storage.localdomain controller-0.storage", "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.117 controller-0.external.localdomain controller-0.external", "192.168.24.18 controller-0.management.localdomain controller-0.management", "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.11 ceph-0.localdomain ceph-0", "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl", "++ hostname -s", "+ sed -i /ceph-0/d /etc/cloud/templates/hosts.freebsd.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.13 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.111 overcloud.localdomain", "172.17.1.17 controller-0.localdomain controller-0", "172.17.3.16 controller-0.storage.localdomain controller-0.storage", "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.117 controller-0.external.localdomain controller-0.external", "192.168.24.18 controller-0.management.localdomain controller-0.management", "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.11 ceph-0.localdomain ceph-0", "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.13 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.111 overcloud.localdomain", "172.17.1.17 controller-0.localdomain controller-0", "172.17.3.16 controller-0.storage.localdomain controller-0.storage", "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.117 controller-0.external.localdomain controller-0.external", "192.168.24.18 controller-0.management.localdomain controller-0.management", "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", 
"", "172.17.3.11 ceph-0.localdomain ceph-0", "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.redhat.tmpl", "+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.13 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.111 overcloud.localdomain", "172.17.1.17 controller-0.localdomain controller-0", "172.17.3.16 controller-0.storage.localdomain controller-0.storage", "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.117 controller-0.external.localdomain controller-0.external", "192.168.24.18 controller-0.management.localdomain controller-0.management", "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.11 ceph-0.localdomain ceph-0", "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl", "++ hostname -s", "+ sed -i /ceph-0/d /etc/cloud/templates/hosts.redhat.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.13 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.111 overcloud.localdomain", "172.17.1.17 controller-0.localdomain controller-0", "172.17.3.16 controller-0.storage.localdomain controller-0.storage", "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.117 controller-0.external.localdomain controller-0.external", "192.168.24.18 controller-0.management.localdomain controller-0.management", "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.11 ceph-0.localdomain ceph-0", "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.13 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.111 overcloud.localdomain", "172.17.1.17 controller-0.localdomain controller-0", "172.17.3.16 controller-0.storage.localdomain controller-0.storage", "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.117 controller-0.external.localdomain controller-0.external", "192.168.24.18 controller-0.management.localdomain controller-0.management", "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", 
"172.17.3.11 ceph-0.localdomain ceph-0", "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.suse.tmpl", "+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.13 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.111 overcloud.localdomain", "172.17.1.17 controller-0.localdomain controller-0", "172.17.3.16 controller-0.storage.localdomain controller-0.storage", "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.117 controller-0.external.localdomain controller-0.external", "192.168.24.18 controller-0.management.localdomain controller-0.management", "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.11 ceph-0.localdomain ceph-0", "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl", "++ hostname -s", "+ sed -i /ceph-0/d /etc/cloud/templates/hosts.suse.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.13 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.111 overcloud.localdomain", "172.17.1.17 controller-0.localdomain controller-0", "172.17.3.16 controller-0.storage.localdomain controller-0.storage", "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.117 controller-0.external.localdomain controller-0.external", "192.168.24.18 controller-0.management.localdomain controller-0.management", "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.11 ceph-0.localdomain ceph-0", "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ write_entries /etc/hosts '192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.13 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.111 overcloud.localdomain", "172.17.1.17 controller-0.localdomain controller-0", "172.17.3.16 controller-0.storage.localdomain controller-0.storage", "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.117 controller-0.external.localdomain controller-0.external", "192.168.24.18 controller-0.management.localdomain controller-0.management", "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.11 ceph-0.localdomain ceph-0", "172.17.3.11 ceph-0.storage.localdomain 
ceph-0.storage", "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/hosts", "+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.13 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.111 overcloud.localdomain", "172.17.1.17 controller-0.localdomain controller-0", "172.17.3.16 controller-0.storage.localdomain controller-0.storage", "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.117 controller-0.external.localdomain controller-0.external", "192.168.24.18 controller-0.management.localdomain controller-0.management", "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.11 ceph-0.localdomain ceph-0", "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' 
-f /etc/hosts ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/hosts", "++ hostname -s", "+ sed -i /ceph-0/d /etc/hosts", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.7 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.13 overcloud.storagemgmt.localdomain", "172.17.1.15 overcloud.internalapi.localdomain", "10.0.0.111 overcloud.localdomain", "172.17.1.17 controller-0.localdomain controller-0", "172.17.3.16 controller-0.storage.localdomain controller-0.storage", "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.117 controller-0.external.localdomain controller-0.external", "192.168.24.18 controller-0.management.localdomain controller-0.management", "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.12 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.8 compute-0.external.localdomain compute-0.external", "192.168.24.8 compute-0.management.localdomain compute-0.management", "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.11 ceph-0.localdomain ceph-0", "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.6 ceph-0.external.localdomain ceph-0.external", "192.168.24.6 ceph-0.management.localdomain ceph-0.management", "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "", "[2018-09-21 08:18:09,517] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/0b0d3a1e-bb12-4c72-b0fa-fc6fb24b52ff", "", "[2018-09-21 08:18:09,522] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-09-21 08:18:09,523] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/0b0d3a1e-bb12-4c72-b0fa-fc6fb24b52ff.json < /var/lib/heat-config/deployed/0b0d3a1e-bb12-4c72-b0fa-fc6fb24b52ff.notify.json", "[2018-09-21 08:18:09,934] (heat-config) [INFO] ", "[2018-09-21 08:18:09,934] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} > >TASK [Output for CephStorageHostsDeployment] *********************************** >Friday 21 September 2018 08:18:11 -0400 (0:00:00.768) 0:01:33.555 ****** >ok: [ceph-0] => { > "msg": [ > { > "stderr": [ > "[2018-09-21 08:18:09,467] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/0b0d3a1e-bb12-4c72-b0fa-fc6fb24b52ff.json", > "[2018-09-21 08:18:09,521] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' 
-z '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 
ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 
ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 
ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain 
ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.7 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.13 overcloud.storagemgmt.localdomain\\n172.17.1.15 overcloud.internalapi.localdomain\\n10.0.0.111 overcloud.localdomain\\n172.17.1.17 controller-0.localdomain controller-0\\n172.17.3.16 controller-0.storage.localdomain controller-0.storage\\n172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.22 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.117 controller-0.external.localdomain controller-0.external\\n192.168.24.18 controller-0.management.localdomain controller-0.management\\n192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.12 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.21 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.8 compute-0.external.localdomain compute-0.external\\n192.168.24.8 compute-0.management.localdomain compute-0.management\\n192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.11 ceph-0.localdomain ceph-0\\n172.17.3.11 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.6 ceph-0.external.localdomain ceph-0.external\\n192.168.24.6 ceph-0.management.localdomain ceph-0.management\\n192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 0}", > "[2018-09-21 08:18:09,522] (heat-config) [DEBUG] [2018-09-21 08:18:09,491] (heat-config) [INFO] hosts=192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.13 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.111 overcloud.localdomain", > "172.17.1.17 controller-0.localdomain controller-0", > "172.17.3.16 controller-0.storage.localdomain controller-0.storage", > "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.117 controller-0.external.localdomain controller-0.external", > "192.168.24.18 controller-0.management.localdomain controller-0.management", > "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.11 ceph-0.localdomain 
ceph-0", > "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane", > "[2018-09-21 08:18:09,491] (heat-config) [INFO] deploy_server_id=f4c5361d-7430-47e6-b3a8-908850a79a79", > "[2018-09-21 08:18:09,491] (heat-config) [INFO] deploy_action=CREATE", > "[2018-09-21 08:18:09,491] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorageHostsDeployment-hdyfr25l7hic-0-z3br5ylt2w4p/471be5f5-3218-4357-8374-0acb15dadb68", > "[2018-09-21 08:18:09,492] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-09-21 08:18:09,492] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-09-21 08:18:09,492] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/0b0d3a1e-bb12-4c72-b0fa-fc6fb24b52ff", > "[2018-09-21 08:18:09,517] (heat-config) [INFO] ", > "[2018-09-21 08:18:09,517] (heat-config) [DEBUG] + set -o pipefail", > "+ '[' '!' -z '192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.13 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.111 overcloud.localdomain", > "172.17.1.17 controller-0.localdomain controller-0", > "172.17.3.16 controller-0.storage.localdomain controller-0.storage", > "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.117 controller-0.external.localdomain controller-0.external", > "192.168.24.18 controller-0.management.localdomain controller-0.management", > "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.11 ceph-0.localdomain ceph-0", > "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.13 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.111 overcloud.localdomain", > "172.17.1.17 controller-0.localdomain controller-0", > "172.17.3.16 
controller-0.storage.localdomain controller-0.storage", > "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.117 controller-0.external.localdomain controller-0.external", > "192.168.24.18 controller-0.management.localdomain controller-0.management", > "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.11 ceph-0.localdomain ceph-0", > "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.debian.tmpl", > "+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.13 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.111 overcloud.localdomain", > "172.17.1.17 controller-0.localdomain controller-0", > "172.17.3.16 controller-0.storage.localdomain controller-0.storage", > "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.117 controller-0.external.localdomain controller-0.external", > "192.168.24.18 controller-0.management.localdomain controller-0.management", > "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.11 ceph-0.localdomain ceph-0", > "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' 
-f /etc/cloud/templates/hosts.debian.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl", > "++ hostname -s", > "+ sed -i /ceph-0/d /etc/cloud/templates/hosts.debian.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.13 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.111 overcloud.localdomain", > "172.17.1.17 controller-0.localdomain controller-0", > "172.17.3.16 controller-0.storage.localdomain controller-0.storage", > "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.117 controller-0.external.localdomain controller-0.external", > "192.168.24.18 controller-0.management.localdomain controller-0.management", > "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.11 ceph-0.localdomain ceph-0", > "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.13 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.111 overcloud.localdomain", > "172.17.1.17 controller-0.localdomain controller-0", > "172.17.3.16 controller-0.storage.localdomain controller-0.storage", > "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.117 controller-0.external.localdomain controller-0.external", > "192.168.24.18 controller-0.management.localdomain controller-0.management", > "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 
compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.11 ceph-0.localdomain ceph-0", > "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.freebsd.tmpl", > "+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.13 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.111 overcloud.localdomain", > "172.17.1.17 controller-0.localdomain controller-0", > "172.17.3.16 controller-0.storage.localdomain controller-0.storage", > "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.117 controller-0.external.localdomain controller-0.external", > "192.168.24.18 controller-0.management.localdomain controller-0.management", > "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.11 ceph-0.localdomain ceph-0", > "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl", > "++ hostname -s", > "+ sed -i /ceph-0/d /etc/cloud/templates/hosts.freebsd.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.13 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.111 overcloud.localdomain", > "172.17.1.17 controller-0.localdomain controller-0", > "172.17.3.16 controller-0.storage.localdomain controller-0.storage", > "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.117 controller-0.external.localdomain controller-0.external", > "192.168.24.18 controller-0.management.localdomain controller-0.management", > "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.11 ceph-0.localdomain ceph-0", > "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.13 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.111 overcloud.localdomain", > "172.17.1.17 controller-0.localdomain controller-0", > "172.17.3.16 controller-0.storage.localdomain controller-0.storage", > "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.117 controller-0.external.localdomain controller-0.external", > "192.168.24.18 controller-0.management.localdomain controller-0.management", > "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 
compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.11 ceph-0.localdomain ceph-0", > "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.redhat.tmpl", > "+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.13 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.111 overcloud.localdomain", > "172.17.1.17 controller-0.localdomain controller-0", > "172.17.3.16 controller-0.storage.localdomain controller-0.storage", > "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.117 controller-0.external.localdomain controller-0.external", > "192.168.24.18 controller-0.management.localdomain controller-0.management", > "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.11 ceph-0.localdomain ceph-0", > "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl", > "++ hostname -s", > "+ sed -i /ceph-0/d /etc/cloud/templates/hosts.redhat.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.13 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.111 overcloud.localdomain", > "172.17.1.17 controller-0.localdomain controller-0", > "172.17.3.16 controller-0.storage.localdomain controller-0.storage", > "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.117 controller-0.external.localdomain controller-0.external", > "192.168.24.18 controller-0.management.localdomain controller-0.management", > "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.11 ceph-0.localdomain ceph-0", > "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.13 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.111 overcloud.localdomain", > "172.17.1.17 controller-0.localdomain controller-0", > "172.17.3.16 controller-0.storage.localdomain controller-0.storage", > "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.117 controller-0.external.localdomain controller-0.external", > "192.168.24.18 controller-0.management.localdomain controller-0.management", > "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 
compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.11 ceph-0.localdomain ceph-0", > "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.suse.tmpl", > "+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.13 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.111 overcloud.localdomain", > "172.17.1.17 controller-0.localdomain controller-0", > "172.17.3.16 controller-0.storage.localdomain controller-0.storage", > "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.117 controller-0.external.localdomain controller-0.external", > "192.168.24.18 controller-0.management.localdomain controller-0.management", > "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.11 ceph-0.localdomain ceph-0", > "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl", > "++ hostname -s", > "+ sed -i /ceph-0/d /etc/cloud/templates/hosts.suse.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.13 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.111 overcloud.localdomain", > "172.17.1.17 controller-0.localdomain controller-0", > "172.17.3.16 controller-0.storage.localdomain controller-0.storage", > "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.117 controller-0.external.localdomain controller-0.external", > "192.168.24.18 controller-0.management.localdomain controller-0.management", > "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.11 ceph-0.localdomain ceph-0", > "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ write_entries /etc/hosts '192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.13 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.111 overcloud.localdomain", > "172.17.1.17 controller-0.localdomain controller-0", > "172.17.3.16 controller-0.storage.localdomain controller-0.storage", > "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.117 controller-0.external.localdomain controller-0.external", > "192.168.24.18 controller-0.management.localdomain controller-0.management", > "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain 
compute-0.ctlplane", > "", > "", > "", > "172.17.3.11 ceph-0.localdomain ceph-0", > "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/hosts", > "+ local 'entries=192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.13 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.111 overcloud.localdomain", > "172.17.1.17 controller-0.localdomain controller-0", > "172.17.3.16 controller-0.storage.localdomain controller-0.storage", > "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.117 controller-0.external.localdomain controller-0.external", > "192.168.24.18 controller-0.management.localdomain controller-0.management", > "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.11 ceph-0.localdomain ceph-0", > "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' 
-f /etc/hosts ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/hosts", > "++ hostname -s", > "+ sed -i /ceph-0/d /etc/hosts", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.7 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.13 overcloud.storagemgmt.localdomain", > "172.17.1.15 overcloud.internalapi.localdomain", > "10.0.0.111 overcloud.localdomain", > "172.17.1.17 controller-0.localdomain controller-0", > "172.17.3.16 controller-0.storage.localdomain controller-0.storage", > "172.17.4.14 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.17 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.22 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.117 controller-0.external.localdomain controller-0.external", > "192.168.24.18 controller-0.management.localdomain controller-0.management", > "192.168.24.18 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.12 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.8 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.12 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.21 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.8 compute-0.external.localdomain compute-0.external", > "192.168.24.8 compute-0.management.localdomain compute-0.management", > "192.168.24.8 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.11 ceph-0.localdomain ceph-0", > "172.17.3.11 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.15 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.6 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.6 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.6 ceph-0.external.localdomain ceph-0.external", > "192.168.24.6 ceph-0.management.localdomain ceph-0.management", > "192.168.24.6 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "", > "[2018-09-21 08:18:09,517] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/0b0d3a1e-bb12-4c72-b0fa-fc6fb24b52ff", > "", > "[2018-09-21 08:18:09,522] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-09-21 08:18:09,523] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/0b0d3a1e-bb12-4c72-b0fa-fc6fb24b52ff.json < /var/lib/heat-config/deployed/0b0d3a1e-bb12-4c72-b0fa-fc6fb24b52ff.notify.json", > "[2018-09-21 08:18:09,934] (heat-config) [INFO] ", > "[2018-09-21 08:18:09,934] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} > >TASK [Check-mode for Run deployment CephStorageHostsDeployment] **************** >Friday 21 September 2018 08:18:11 -0400 (0:00:00.121) 0:01:33.677 ****** >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Lookup deployment UUID] ************************************************** >Friday 21 September 2018 08:18:11 -0400 (0:00:00.037) 0:01:33.714 ****** >ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "3b1b75f1-1b8d-4d2d-ba0a-1e82ab7ee903"}, "changed": false} > >TASK [Render deployment file for CephStorageAllNodesDeployment] **************** >Friday 21 September 2018 08:18:11 -0400 (0:00:00.242) 0:01:33.957 ****** >changed: [ceph-0] => {"changed": true, "checksum": "c24189cec2ffe8e2c765618596b6199b0a6faa90", "dest": 
"/var/lib/heat-config/tripleo-config-download/CephStorageAllNodesDeployment-3b1b75f1-1b8d-4d2d-ba0a-1e82ab7ee903", "gid": 0, "group": "root", "md5sum": "9e8fe551b23a5c211536df597a9ccb8e", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 19532, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532291.7-72454485650880/source", "state": "file", "uid": 0} > >TASK [Check if deployed file exists for CephStorageAllNodesDeployment] ********* >Friday 21 September 2018 08:18:12 -0400 (0:00:00.688) 0:01:34.646 ****** >ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} > >TASK [Check previous deployment rc for CephStorageAllNodesDeployment] ********** >Friday 21 September 2018 08:18:12 -0400 (0:00:00.267) 0:01:34.913 ****** >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Remove deployed file for CephStorageAllNodesDeployment when previous deployment failed] *** >Friday 21 September 2018 08:18:12 -0400 (0:00:00.039) 0:01:34.952 ****** >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Force remove deployed file for CephStorageAllNodesDeployment] ************ >Friday 21 September 2018 08:18:12 -0400 (0:00:00.040) 0:01:34.992 ****** >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Run deployment CephStorageAllNodesDeployment] **************************** >Friday 21 September 2018 08:18:12 -0400 (0:00:00.041) 0:01:35.034 ****** >changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/3b1b75f1-1b8d-4d2d-ba0a-1e82ab7ee903.notify.json)", "delta": "0:00:00.564222", "end": "2018-09-21 08:18:12.293044", "rc": 0, "start": "2018-09-21 08:18:11.728822", "stderr": "[2018-09-21 08:18:11,752] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/3b1b75f1-1b8d-4d2d-ba0a-1e82ab7ee903.json\n[2018-09-21 08:18:11,878] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-09-21 08:18:11,878] (heat-config) [DEBUG] \n[2018-09-21 08:18:11,878] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera\n[2018-09-21 08:18:11,878] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/3b1b75f1-1b8d-4d2d-ba0a-1e82ab7ee903.json < /var/lib/heat-config/deployed/3b1b75f1-1b8d-4d2d-ba0a-1e82ab7ee903.notify.json\n[2018-09-21 08:18:12,286] (heat-config) [INFO] \n[2018-09-21 08:18:12,286] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-09-21 08:18:11,752] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/3b1b75f1-1b8d-4d2d-ba0a-1e82ab7ee903.json", "[2018-09-21 08:18:11,878] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-09-21 08:18:11,878] (heat-config) [DEBUG] ", "[2018-09-21 08:18:11,878] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", "[2018-09-21 08:18:11,878] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/3b1b75f1-1b8d-4d2d-ba0a-1e82ab7ee903.json < /var/lib/heat-config/deployed/3b1b75f1-1b8d-4d2d-ba0a-1e82ab7ee903.notify.json", "[2018-09-21 08:18:12,286] (heat-config) [INFO] ", "[2018-09-21 08:18:12,286] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} > >TASK [Output for CephStorageAllNodesDeployment] 
********************************
>Friday 21 September 2018 08:18:13 -0400 (0:00:00.826) 0:01:35.860 ******
>ok: [ceph-0] => {
> "msg": [
> {
> "stderr": [
[... omitted: a verbatim repeat of the stderr_lines shown in the task result directly above ...]
> ]
> },
> {
> "status_code": "0"
> }
> ]
>}
>
>TASK [Check-mode for Run deployment CephStorageAllNodesDeployment] *************
>Friday 21 September 2018 08:18:13 -0400 (0:00:00.078) 0:01:35.939 ******
>skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [Lookup deployment UUID] **************************************************
>Friday 21 September 2018 08:18:13 -0400 (0:00:00.038) 0:01:35.977 ******
>ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "a763de33-e87c-4ede-8a10-3a53186191a1"}, "changed": false}
>
>TASK [Render deployment file for CephStorageAllNodesValidationDeployment] ******
>Friday 21 September 2018 08:18:13 -0400 (0:00:00.088) 0:01:36.066 ******
>changed: [ceph-0] => {"changed": true, "checksum": "f6af455cda57bed7b777a12c748bc62d9eff11f8", "dest": "/var/lib/heat-config/tripleo-config-download/CephStorageAllNodesValidationDeployment-a763de33-e87c-4ede-8a10-3a53186191a1", "gid": 0, "group": "root", "md5sum": "f35ab47ea3a6445bedb5600c84d595ce", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 4943, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532293.65-22030783485041/source", "state": "file", "uid": 0}
>
>TASK [Check if deployed file exists for CephStorageAllNodesValidationDeployment] ***
>Friday 21 September 2018 08:18:14 -0400 (0:00:00.489) 0:01:36.556 ******
>ok: [ceph-0] => {"changed": false, "stat": {"exists": false}}
>
>TASK [Check previous deployment rc for CephStorageAllNodesValidationDeployment] ***
>Friday 21 September 2018 08:18:14 -0400 (0:00:00.194) 0:01:36.751 ******
>skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [Remove deployed file for CephStorageAllNodesValidationDeployment when previous deployment failed] ***
>Friday 21 September 2018 08:18:14 -0400 (0:00:00.040) 0:01:36.791 ******
>skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [Force remove deployed file for CephStorageAllNodesValidationDeployment] ***
>Friday 21 September 2018 08:18:14 -0400 (0:00:00.044) 0:01:36.836 ******
>skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
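[Annotation: every "Run deployment <name>" task in this play, including the one that follows, wraps the same shell one-liner, visible in the logged "cmd" field: it re-runs the heat-config step of os-refresh-config and then exits with whatever deploy_status_code the hook recorded in that deployment's .notify.json, so the Ansible task's rc mirrors the hook result. Schematically (the UUID varies per deployment):

    /usr/libexec/os-refresh-config/configure.d/55-heat-config
    exit $(jq .deploy_status_code /var/lib/heat-config/deployed/<deployment-uuid>.notify.json)
]
>TASK [Run deployment CephStorageAllNodesValidationDeployment] ******************
>Friday 21 September 2018 08:18:14 -0400 (0:00:00.045) 0:01:36.882 ******
>changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code 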
/var/lib/heat-config/deployed/a763de33-e87c-4ede-8a10-3a53186191a1.notify.json)", "delta": "0:00:00.949974", "end": "2018-09-21 08:18:14.484319", "rc": 0, "start": "2018-09-21 08:18:13.534345", "stderr": "[2018-09-21 08:18:13,559] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/a763de33-e87c-4ede-8a10-3a53186191a1.json\n[2018-09-21 08:18:14,120] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 10.0.0.117 for local network 10.0.0.0/24.\\nPing to 10.0.0.117 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.16 for local network 172.17.3.0/24.\\nPing to 172.17.3.16 succeeded.\\nSUCCESS\\nTrying to ping 172.17.4.14 for local network 172.17.4.0/24.\\nPing to 172.17.4.14 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.18 for local network 192.168.24.0/24.\\nPing to 192.168.24.18 succeeded.\\nSUCCESS\\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-09-21 08:18:14,120] (heat-config) [DEBUG] [2018-09-21 08:18:13,581] (heat-config) [INFO] ping_test_ips=172.17.3.16 172.17.4.14 172.17.1.17 172.17.2.22 10.0.0.117 192.168.24.18\n[2018-09-21 08:18:13,581] (heat-config) [INFO] validate_fqdn=False\n[2018-09-21 08:18:13,581] (heat-config) [INFO] validate_ntp=True\n[2018-09-21 08:18:13,581] (heat-config) [INFO] deploy_server_id=f4c5361d-7430-47e6-b3a8-908850a79a79\n[2018-09-21 08:18:13,581] (heat-config) [INFO] deploy_action=CREATE\n[2018-09-21 08:18:13,581] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorageAllNodesValidationDeployment-zfqiwqxjo2hv-0-zmrbukdniijs/3ddcf7a8-1e93-424d-991e-02435f8c53c7\n[2018-09-21 08:18:13,581] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-09-21 08:18:13,581] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-09-21 08:18:13,582] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/a763de33-e87c-4ede-8a10-3a53186191a1\n[2018-09-21 08:18:14,116] (heat-config) [INFO] Trying to ping 10.0.0.117 for local network 10.0.0.0/24.\nPing to 10.0.0.117 succeeded.\nSUCCESS\nTrying to ping 172.17.3.16 for local network 172.17.3.0/24.\nPing to 172.17.3.16 succeeded.\nSUCCESS\nTrying to ping 172.17.4.14 for local network 172.17.4.0/24.\nPing to 172.17.4.14 succeeded.\nSUCCESS\nTrying to ping 192.168.24.18 for local network 192.168.24.0/24.\nPing to 192.168.24.18 succeeded.\nSUCCESS\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\nSUCCESS\n\n[2018-09-21 08:18:14,117] (heat-config) [DEBUG] \n[2018-09-21 08:18:14,117] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/a763de33-e87c-4ede-8a10-3a53186191a1\n\n[2018-09-21 08:18:14,120] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-09-21 08:18:14,121] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/a763de33-e87c-4ede-8a10-3a53186191a1.json < /var/lib/heat-config/deployed/a763de33-e87c-4ede-8a10-3a53186191a1.notify.json\n[2018-09-21 08:18:14,478] (heat-config) [INFO] \n[2018-09-21 08:18:14,478] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-09-21 08:18:13,559] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/a763de33-e87c-4ede-8a10-3a53186191a1.json", "[2018-09-21 08:18:14,120] (heat-config) [INFO] {\"deploy_stdout\": 
\"Trying to ping 10.0.0.117 for local network 10.0.0.0/24.\\nPing to 10.0.0.117 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.16 for local network 172.17.3.0/24.\\nPing to 172.17.3.16 succeeded.\\nSUCCESS\\nTrying to ping 172.17.4.14 for local network 172.17.4.0/24.\\nPing to 172.17.4.14 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.18 for local network 192.168.24.0/24.\\nPing to 192.168.24.18 succeeded.\\nSUCCESS\\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-09-21 08:18:14,120] (heat-config) [DEBUG] [2018-09-21 08:18:13,581] (heat-config) [INFO] ping_test_ips=172.17.3.16 172.17.4.14 172.17.1.17 172.17.2.22 10.0.0.117 192.168.24.18", "[2018-09-21 08:18:13,581] (heat-config) [INFO] validate_fqdn=False", "[2018-09-21 08:18:13,581] (heat-config) [INFO] validate_ntp=True", "[2018-09-21 08:18:13,581] (heat-config) [INFO] deploy_server_id=f4c5361d-7430-47e6-b3a8-908850a79a79", "[2018-09-21 08:18:13,581] (heat-config) [INFO] deploy_action=CREATE", "[2018-09-21 08:18:13,581] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorageAllNodesValidationDeployment-zfqiwqxjo2hv-0-zmrbukdniijs/3ddcf7a8-1e93-424d-991e-02435f8c53c7", "[2018-09-21 08:18:13,581] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-09-21 08:18:13,581] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-09-21 08:18:13,582] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/a763de33-e87c-4ede-8a10-3a53186191a1", "[2018-09-21 08:18:14,116] (heat-config) [INFO] Trying to ping 10.0.0.117 for local network 10.0.0.0/24.", "Ping to 10.0.0.117 succeeded.", "SUCCESS", "Trying to ping 172.17.3.16 for local network 172.17.3.0/24.", "Ping to 172.17.3.16 succeeded.", "SUCCESS", "Trying to ping 172.17.4.14 for local network 172.17.4.0/24.", "Ping to 172.17.4.14 succeeded.", "SUCCESS", "Trying to ping 192.168.24.18 for local network 192.168.24.0/24.", "Ping to 192.168.24.18 succeeded.", "SUCCESS", "Trying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.", "Trying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.", "SUCCESS", "", "[2018-09-21 08:18:14,117] (heat-config) [DEBUG] ", "[2018-09-21 08:18:14,117] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/a763de33-e87c-4ede-8a10-3a53186191a1", "", "[2018-09-21 08:18:14,120] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-09-21 08:18:14,121] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/a763de33-e87c-4ede-8a10-3a53186191a1.json < /var/lib/heat-config/deployed/a763de33-e87c-4ede-8a10-3a53186191a1.notify.json", "[2018-09-21 08:18:14,478] (heat-config) [INFO] ", "[2018-09-21 08:18:14,478] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} > >TASK [Output for CephStorageAllNodesValidationDeployment] ********************** >Friday 21 September 2018 08:18:15 -0400 (0:00:01.172) 0:01:38.054 ****** >ok: [ceph-0] => { > "msg": [ > { > "stderr": [ > "[2018-09-21 08:18:13,559] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/a763de33-e87c-4ede-8a10-3a53186191a1.json", > "[2018-09-21 08:18:14,120] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 10.0.0.117 for local network 10.0.0.0/24.\\nPing to 10.0.0.117 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.16 for local network 
172.17.3.0/24.\\nPing to 172.17.3.16 succeeded.\\nSUCCESS\\nTrying to ping 172.17.4.14 for local network 172.17.4.0/24.\\nPing to 172.17.4.14 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.18 for local network 192.168.24.0/24.\\nPing to 192.168.24.18 succeeded.\\nSUCCESS\\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}",
[... omitted: the remaining entries of this "stderr" array, a verbatim repeat of the stderr_lines shown in the task result directly above ...]
> ]
> },
> {
> "status_code": "0"
> }
> ]
>}
>
>TASK [Check-mode for Run deployment CephStorageAllNodesValidationDeployment] ***
>Friday 21 September 2018 08:18:15 -0400 (0:00:00.083) 0:01:38.138 ******
>skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [Lookup deployment UUID] **************************************************
>Friday 21 September 2018 08:18:15 -0400 (0:00:00.039) 0:01:38.177 ******
>ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "b74e2ea8-29f1-4a54-83e3-c8028e8275d2"}, "changed": false}
>
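[Annotation: the long set -x trace logged for CephStorageHostsDeployment earlier in this file shows a shell helper, write_entries, stamping one block of host entries into every /etc/cloud/templates/hosts.*.tmpl and then into /etc/hosts, between "# HEAT_HOSTS_START" and "# HEAT_HOSTS_END" markers. Below is a minimal sketch of that helper, reconstructed from the trace alone, not the actual tripleo-heat-templates source: redirections are invisible under set -x, and the branch taken when the markers already exist is never exercised in this first-boot log, so both are assumptions.

    write_entries() {
        local file="$1"
        local entries="$2"
        # Only touch files that exist; the trace shows the test '[' '!' -f "$file" ']'.
        [ -f "$file" ] || return 0
        if grep -q "^# HEAT_HOSTS_START" "$file"; then
            : # assumed re-run branch: rewrite the existing marked section in place
        else
            # First boot, as traced: delete any pre-existing line for this host
            # ($(hostname -s) expands to ceph-0 above), then append the managed block.
            sed -i "/$(hostname -s)/d" "$file"
            echo -ne "\n# HEAT_HOSTS_START - Do not edit manually within this section!\n" >> "$file"
            echo "$entries" >> "$file"
            echo -ne "# HEAT_HOSTS_END\n\n" >> "$file"
        fi
    }

    # The trace applies the same entries (the hosts= input logged above) to
    # every cloud-init hosts template, then to /etc/hosts itself:
    for tmpl in /etc/cloud/templates/hosts.*.tmpl; do
        write_entries "$tmpl" "$hosts"
    done
    write_entries /etc/hosts "$hosts"

The repeated "sed -i /ceph-0/d" seen in the trace is the hostname cleanup step, and the HEAT_HOSTS markers presumably let a later stack update replace the managed section without clobbering manual edits elsewhere in the file.]
>TASK [Render deployment file for CephStorageHostPrepDeployment] ****************
>Friday 21 September 2018 08:18:15 -0400 (0:00:00.104) 0:01:38.282 ******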
>changed: [ceph-0] => {"changed": true, "checksum": "64461785cdc8c9c60f5548ea99f7b738b484fff4", "dest": "/var/lib/heat-config/tripleo-config-download/CephStorageHostPrepDeployment-b74e2ea8-29f1-4a54-83e3-c8028e8275d2", "gid": 0, "group": "root", "md5sum": "9f3c454fab96fced5f6a473aa6595046", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 20802, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532295.88-45498643028383/source", "state": "file", "uid": 0} > > >TASK [Check if deployed file exists for CephStorageHostPrepDeployment] ********* >Friday 21 September 2018 08:18:16 -0400 (0:00:00.555) 0:01:38.838 ****** >ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} > >TASK [Check previous deployment rc for CephStorageHostPrepDeployment] ********** >Friday 21 September 2018 08:18:16 -0400 (0:00:00.215) 0:01:39.053 ****** >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Remove deployed file for CephStorageHostPrepDeployment when previous deployment failed] *** >Friday 21 September 2018 08:18:16 -0400 (0:00:00.053) 0:01:39.107 ****** >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Force remove deployed file for CephStorageHostPrepDeployment] ************ >Friday 21 September 2018 08:18:16 -0400 (0:00:00.043) 0:01:39.150 ****** >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Run deployment CephStorageHostPrepDeployment] **************************** >Friday 21 September 2018 08:18:16 -0400 (0:00:00.042) 0:01:39.192 ****** >changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/b74e2ea8-29f1-4a54-83e3-c8028e8275d2.notify.json)", "delta": "0:00:06.251247", "end": "2018-09-21 08:18:22.096386", "rc": 0, "start": "2018-09-21 08:18:15.845139", "stderr": "[2018-09-21 08:18:15,872] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/b74e2ea8-29f1-4a54-83e3-c8028e8275d2.json\n[2018-09-21 08:18:21,693] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=3 changed=2 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-09-21 08:18:21,693] (heat-config) [DEBUG] [2018-09-21 08:18:15,896] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/b74e2ea8-29f1-4a54-83e3-c8028e8275d2_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/b74e2ea8-29f1-4a54-83e3-c8028e8275d2_variables.json\n[2018-09-21 08:18:21,689] (heat-config) [INFO] Return code 0\n[2018-09-21 08:18:21,689] (heat-config) [INFO] \nPLAY [localhost] ***************************************************************\n\nTASK [Gathering Facts] *********************************************************\nok: [localhost]\n\nTASK [Create /var/lib/docker-puppet] *******************************************\nchanged: 
[localhost]\n\nTASK [Write docker-puppet.py] **************************************************\nchanged: [localhost]\n\nPLAY RECAP *********************************************************************\nlocalhost : ok=3 changed=2 unreachable=0 failed=0 \n\n\n[2018-09-21 08:18:21,689] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/b74e2ea8-29f1-4a54-83e3-c8028e8275d2_playbook.yaml\n\n[2018-09-21 08:18:21,693] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible\n[2018-09-21 08:18:21,693] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/b74e2ea8-29f1-4a54-83e3-c8028e8275d2.json < /var/lib/heat-config/deployed/b74e2ea8-29f1-4a54-83e3-c8028e8275d2.notify.json\n[2018-09-21 08:18:22,089] (heat-config) [INFO] \n[2018-09-21 08:18:22,090] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-09-21 08:18:15,872] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/b74e2ea8-29f1-4a54-83e3-c8028e8275d2.json", "[2018-09-21 08:18:21,693] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=3 changed=2 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-09-21 08:18:21,693] (heat-config) [DEBUG] [2018-09-21 08:18:15,896] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/b74e2ea8-29f1-4a54-83e3-c8028e8275d2_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/b74e2ea8-29f1-4a54-83e3-c8028e8275d2_variables.json", "[2018-09-21 08:18:21,689] (heat-config) [INFO] Return code 0", "[2018-09-21 08:18:21,689] (heat-config) [INFO] ", "PLAY [localhost] ***************************************************************", "", "TASK [Gathering Facts] *********************************************************", "ok: [localhost]", "", "TASK [Create /var/lib/docker-puppet] *******************************************", "changed: [localhost]", "", "TASK [Write docker-puppet.py] **************************************************", "changed: [localhost]", "", "PLAY RECAP *********************************************************************", "localhost : ok=3 changed=2 unreachable=0 failed=0 ", "", "", "[2018-09-21 08:18:21,689] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/b74e2ea8-29f1-4a54-83e3-c8028e8275d2_playbook.yaml", "", "[2018-09-21 08:18:21,693] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible", "[2018-09-21 08:18:21,693] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/b74e2ea8-29f1-4a54-83e3-c8028e8275d2.json < /var/lib/heat-config/deployed/b74e2ea8-29f1-4a54-83e3-c8028e8275d2.notify.json", "[2018-09-21 08:18:22,089] (heat-config) [INFO] ", "[2018-09-21 08:18:22,090] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} > >TASK [Output for CephStorageHostPrepDeployment] ******************************** >Friday 21 September 2018 08:18:23 -0400 (0:00:06.479) 0:01:45.671 ****** >ok: [ceph-0] => { > "msg": [ > { > "stderr": [ > 
"[2018-09-21 08:18:15,872] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/b74e2ea8-29f1-4a54-83e3-c8028e8275d2.json", > "[2018-09-21 08:18:21,693] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=3 changed=2 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-09-21 08:18:21,693] (heat-config) [DEBUG] [2018-09-21 08:18:15,896] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/b74e2ea8-29f1-4a54-83e3-c8028e8275d2_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/b74e2ea8-29f1-4a54-83e3-c8028e8275d2_variables.json", > "[2018-09-21 08:18:21,689] (heat-config) [INFO] Return code 0", > "[2018-09-21 08:18:21,689] (heat-config) [INFO] ", > "PLAY [localhost] ***************************************************************", > "", > "TASK [Gathering Facts] *********************************************************", > "ok: [localhost]", > "", > "TASK [Create /var/lib/docker-puppet] *******************************************", > "changed: [localhost]", > "", > "TASK [Write docker-puppet.py] **************************************************", > "changed: [localhost]", > "", > "PLAY RECAP *********************************************************************", > "localhost : ok=3 changed=2 unreachable=0 failed=0 ", > "", > "", > "[2018-09-21 08:18:21,689] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/b74e2ea8-29f1-4a54-83e3-c8028e8275d2_playbook.yaml", > "", > "[2018-09-21 08:18:21,693] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible", > "[2018-09-21 08:18:21,693] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/b74e2ea8-29f1-4a54-83e3-c8028e8275d2.json < /var/lib/heat-config/deployed/b74e2ea8-29f1-4a54-83e3-c8028e8275d2.notify.json", > "[2018-09-21 08:18:22,089] (heat-config) [INFO] ", > "[2018-09-21 08:18:22,090] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} > >TASK [Check-mode for Run deployment CephStorageHostPrepDeployment] ************* >Friday 21 September 2018 08:18:23 -0400 (0:00:00.083) 0:01:45.755 ****** >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Lookup deployment UUID] ************************************************** >Friday 21 September 2018 08:18:23 -0400 (0:00:00.039) 0:01:45.795 ****** >ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "24cc289d-60ba-451a-b0ba-bae4f051aad2"}, "changed": false} > >TASK [Render deployment file for CephStorageArtifactsDeploy] ******************* >Friday 21 September 2018 08:18:23 -0400 (0:00:00.088) 0:01:45.883 ****** >changed: [ceph-0] => {"changed": true, "checksum": "c5e69e96061518b05d3e67f61112da066452f1e3", "dest": "/var/lib/heat-config/tripleo-config-download/CephStorageArtifactsDeploy-24cc289d-60ba-451a-b0ba-bae4f051aad2", "gid": 0, "group": "root", "md5sum": "b28009d1d57cbf822186b082b2af09da", "mode": "0644", "owner": "root", 
"secontext": "system_u:object_r:var_lib_t:s0", "size": 2023, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532303.46-160321832044124/source", "state": "file", "uid": 0} > >TASK [Check if deployed file exists for CephStorageArtifactsDeploy] ************ >Friday 21 September 2018 08:18:23 -0400 (0:00:00.538) 0:01:46.422 ****** >ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} > >TASK [Check previous deployment rc for CephStorageArtifactsDeploy] ************* >Friday 21 September 2018 08:18:24 -0400 (0:00:00.223) 0:01:46.645 ****** >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Remove deployed file for CephStorageArtifactsDeploy when previous deployment failed] *** >Friday 21 September 2018 08:18:24 -0400 (0:00:00.048) 0:01:46.694 ****** >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Force remove deployed file for CephStorageArtifactsDeploy] *************** >Friday 21 September 2018 08:18:24 -0400 (0:00:00.045) 0:01:46.739 ****** >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Run deployment CephStorageArtifactsDeploy] ******************************* >Friday 21 September 2018 08:18:24 -0400 (0:00:00.044) 0:01:46.783 ****** >changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/24cc289d-60ba-451a-b0ba-bae4f051aad2.notify.json)", "delta": "0:00:00.472819", "end": "2018-09-21 08:18:23.912199", "rc": 0, "start": "2018-09-21 08:18:23.439380", "stderr": "[2018-09-21 08:18:23,464] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/24cc289d-60ba-451a-b0ba-bae4f051aad2.json\n[2018-09-21 08:18:23,498] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-09-21 08:18:23,498] (heat-config) [DEBUG] [2018-09-21 08:18:23,488] (heat-config) [INFO] artifact_urls=\n[2018-09-21 08:18:23,488] (heat-config) [INFO] deploy_server_id=f4c5361d-7430-47e6-b3a8-908850a79a79\n[2018-09-21 08:18:23,488] (heat-config) [INFO] deploy_action=CREATE\n[2018-09-21 08:18:23,488] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-htpxhfrkye7c-CephStorageArtifactsDeploy-whprk45h5lg3-0-ogyij4txrqdf/6b306485-6cd5-498f-9773-7599877cf580\n[2018-09-21 08:18:23,488] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-09-21 08:18:23,488] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-09-21 08:18:23,488] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/24cc289d-60ba-451a-b0ba-bae4f051aad2\n[2018-09-21 08:18:23,494] (heat-config) [INFO] No artifact_urls was set. 
Skipping...\n\n[2018-09-21 08:18:23,494] (heat-config) [DEBUG] \n[2018-09-21 08:18:23,494] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/24cc289d-60ba-451a-b0ba-bae4f051aad2\n\n[2018-09-21 08:18:23,498] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-09-21 08:18:23,498] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/24cc289d-60ba-451a-b0ba-bae4f051aad2.json < /var/lib/heat-config/deployed/24cc289d-60ba-451a-b0ba-bae4f051aad2.notify.json\n[2018-09-21 08:18:23,905] (heat-config) [INFO] \n[2018-09-21 08:18:23,905] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-09-21 08:18:23,464] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/24cc289d-60ba-451a-b0ba-bae4f051aad2.json", "[2018-09-21 08:18:23,498] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-09-21 08:18:23,498] (heat-config) [DEBUG] [2018-09-21 08:18:23,488] (heat-config) [INFO] artifact_urls=", "[2018-09-21 08:18:23,488] (heat-config) [INFO] deploy_server_id=f4c5361d-7430-47e6-b3a8-908850a79a79", "[2018-09-21 08:18:23,488] (heat-config) [INFO] deploy_action=CREATE", "[2018-09-21 08:18:23,488] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-htpxhfrkye7c-CephStorageArtifactsDeploy-whprk45h5lg3-0-ogyij4txrqdf/6b306485-6cd5-498f-9773-7599877cf580", "[2018-09-21 08:18:23,488] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-09-21 08:18:23,488] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-09-21 08:18:23,488] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/24cc289d-60ba-451a-b0ba-bae4f051aad2", "[2018-09-21 08:18:23,494] (heat-config) [INFO] No artifact_urls was set. Skipping...", "", "[2018-09-21 08:18:23,494] (heat-config) [DEBUG] ", "[2018-09-21 08:18:23,494] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/24cc289d-60ba-451a-b0ba-bae4f051aad2", "", "[2018-09-21 08:18:23,498] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-09-21 08:18:23,498] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/24cc289d-60ba-451a-b0ba-bae4f051aad2.json < /var/lib/heat-config/deployed/24cc289d-60ba-451a-b0ba-bae4f051aad2.notify.json", "[2018-09-21 08:18:23,905] (heat-config) [INFO] ", "[2018-09-21 08:18:23,905] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} > >TASK [Output for CephStorageArtifactsDeploy] *********************************** >Friday 21 September 2018 08:18:24 -0400 (0:00:00.700) 0:01:47.484 ****** >ok: [ceph-0] => { > "msg": [ > { > "stderr": [ > "[2018-09-21 08:18:23,464] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/24cc289d-60ba-451a-b0ba-bae4f051aad2.json", > "[2018-09-21 08:18:23,498] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. 
Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-09-21 08:18:23,498] (heat-config) [DEBUG] [2018-09-21 08:18:23,488] (heat-config) [INFO] artifact_urls=", > "[2018-09-21 08:18:23,488] (heat-config) [INFO] deploy_server_id=f4c5361d-7430-47e6-b3a8-908850a79a79", > "[2018-09-21 08:18:23,488] (heat-config) [INFO] deploy_action=CREATE", > "[2018-09-21 08:18:23,488] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-htpxhfrkye7c-CephStorageArtifactsDeploy-whprk45h5lg3-0-ogyij4txrqdf/6b306485-6cd5-498f-9773-7599877cf580", > "[2018-09-21 08:18:23,488] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-09-21 08:18:23,488] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-09-21 08:18:23,488] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/24cc289d-60ba-451a-b0ba-bae4f051aad2", > "[2018-09-21 08:18:23,494] (heat-config) [INFO] No artifact_urls was set. Skipping...", > "", > "[2018-09-21 08:18:23,494] (heat-config) [DEBUG] ", > "[2018-09-21 08:18:23,494] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/24cc289d-60ba-451a-b0ba-bae4f051aad2", > "", > "[2018-09-21 08:18:23,498] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-09-21 08:18:23,498] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/24cc289d-60ba-451a-b0ba-bae4f051aad2.json < /var/lib/heat-config/deployed/24cc289d-60ba-451a-b0ba-bae4f051aad2.notify.json", > "[2018-09-21 08:18:23,905] (heat-config) [INFO] ", > "[2018-09-21 08:18:23,905] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} > >TASK [Check-mode for Run deployment CephStorageArtifactsDeploy] **************** >Friday 21 September 2018 08:18:25 -0400 (0:00:00.081) 0:01:47.565 ****** >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >PLAY [Host prep steps] ********************************************************* > >TASK [create persistent logs directory] **************************************** >Friday 21 September 2018 08:18:25 -0400 (0:00:00.070) 0:01:47.636 ****** >skipping: [compute-0] => (item=/var/log/containers/aodh) => {"changed": false, "item": "/var/log/containers/aodh", "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item=/var/log/containers/httpd/aodh-api) => {"changed": false, "item": "/var/log/containers/httpd/aodh-api", "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item=/var/log/containers/aodh) => {"changed": false, "item": "/var/log/containers/aodh", "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item=/var/log/containers/httpd/aodh-api) => {"changed": false, "item": "/var/log/containers/httpd/aodh-api", "skip_reason": "Conditional result was False"} >changed: [controller-0] => (item=/var/log/containers/aodh) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/aodh", "mode": "0755", "owner": "root", "path": "/var/log/containers/aodh", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >changed: [controller-0] => (item=/var/log/containers/httpd/aodh-api) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/aodh-api", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/aodh-api", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} > >TASK [aodh logs readme] 
******************************************************** >Friday 21 September 2018 08:18:25 -0400 (0:00:00.418) 0:01:48.054 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >fatal: [controller-0]: FAILED! => {"changed": false, "checksum": "b6cf6dbe054f430c33d39c1a1a88593536d6e659", "msg": "Destination directory /var/log/aodh does not exist"} >...ignoring > >TASK [create persistent logs directory] **************************************** >Friday 21 September 2018 08:18:26 -0400 (0:00:00.518) 0:01:48.572 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/aodh", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} > >TASK [create persistent logs directory] **************************************** >Friday 21 September 2018 08:18:26 -0400 (0:00:00.238) 0:01:48.811 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/ceilometer", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} > >TASK [ceilometer logs readme] ************************************************** >Friday 21 September 2018 08:18:26 -0400 (0:00:00.232) 0:01:49.044 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >fatal: [controller-0]: FAILED! 
=> {"changed": false, "checksum": "ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3", "msg": "Destination directory /var/log/ceilometer does not exist"} >...ignoring > >TASK [create persistent logs directory] **************************************** >Friday 21 September 2018 08:18:27 -0400 (0:00:00.592) 0:01:49.636 ****** >skipping: [compute-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item=/var/log/containers/httpd/cinder-api) => {"changed": false, "item": "/var/log/containers/httpd/cinder-api", "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item=/var/log/containers/httpd/cinder-api) => {"changed": false, "item": "/var/log/containers/httpd/cinder-api", "skip_reason": "Conditional result was False"} >changed: [controller-0] => (item=/var/log/containers/cinder) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/cinder", "mode": "0755", "owner": "root", "path": "/var/log/containers/cinder", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >changed: [controller-0] => (item=/var/log/containers/httpd/cinder-api) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/cinder-api", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/cinder-api", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} > >TASK [cinder logs readme] ****************************************************** >Friday 21 September 2018 08:18:27 -0400 (0:00:00.460) 0:01:50.096 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >fatal: [controller-0]: FAILED! 
=> {"changed": false, "checksum": "0a3814f5aad089ba842c13ffc2c7bb7a7b3e8292", "msg": "Destination directory /var/log/cinder does not exist"} >...ignoring > >TASK [create persistent directories] ******************************************* >Friday 21 September 2018 08:18:28 -0400 (0:00:00.579) 0:01:50.676 ****** >skipping: [compute-0] => (item=/var/lib/cinder) => {"changed": false, "item": "/var/lib/cinder", "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item=/var/lib/cinder) => {"changed": false, "item": "/var/lib/cinder", "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >changed: [controller-0] => (item=/var/lib/cinder) => {"changed": true, "gid": 0, "group": "root", "item": "/var/lib/cinder", "mode": "0755", "owner": "root", "path": "/var/lib/cinder", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >ok: [controller-0] => (item=/var/log/containers/cinder) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/cinder", "mode": "0755", "owner": "root", "path": "/var/log/containers/cinder", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} > >TASK [ensure ceph configurations exist] **************************************** >Friday 21 September 2018 08:18:28 -0400 (0:00:00.415) 0:01:51.091 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/etc/ceph", "secontext": "unconfined_u:object_r:etc_t:s0", "size": 6, "state": "directory", "uid": 0} > >TASK [create persistent directories] ******************************************* >Friday 21 September 2018 08:18:28 -0400 (0:00:00.229) 0:01:51.321 ****** >skipping: [compute-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >ok: [controller-0] => (item=/var/log/containers/cinder) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/cinder", "mode": "0755", "owner": "root", "path": "/var/log/containers/cinder", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} > >TASK [create persistent directories] ******************************************* >Friday 21 September 2018 08:18:29 -0400 (0:00:00.236) 0:01:51.557 ****** >skipping: [compute-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item=/var/lib/cinder) => {"changed": false, "item": "/var/lib/cinder", "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item=/var/lib/cinder) 
=> {"changed": false, "item": "/var/lib/cinder", "skip_reason": "Conditional result was False"} >ok: [controller-0] => (item=/var/log/containers/cinder) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/cinder", "mode": "0755", "owner": "root", "path": "/var/log/containers/cinder", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >ok: [controller-0] => (item=/var/lib/cinder) => {"changed": false, "gid": 0, "group": "root", "item": "/var/lib/cinder", "mode": "0755", "owner": "root", "path": "/var/lib/cinder", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} > >TASK [cinder_enable_iscsi_backend fact] **************************************** >Friday 21 September 2018 08:18:29 -0400 (0:00:00.398) 0:01:51.956 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >ok: [controller-0] => {"ansible_facts": {"cinder_enable_iscsi_backend": false}, "changed": false} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [cinder create LVM volume group dd] *************************************** >Friday 21 September 2018 08:18:29 -0400 (0:00:00.098) 0:01:52.054 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [cinder create LVM volume group] ****************************************** >Friday 21 September 2018 08:18:29 -0400 (0:00:00.094) 0:01:52.149 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [set_fact] **************************************************************** >Friday 21 September 2018 08:18:29 -0400 (0:00:00.091) 0:01:52.241 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >ok: [controller-0] => {"ansible_facts": {"container_registry_additional_sockets": ["/var/lib/openstack/docker.sock"], "container_registry_debug": true, "container_registry_deployment_user": "", "container_registry_docker_options": "--log-driver=journald --signature-verification=false --iptables=false --live-restore", "container_registry_insecure_registries": ["192.168.24.1:8787"], "container_registry_mirror": "", "container_registry_network_options": "--bip=172.31.0.1/24"}, "changed": false} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [include_role] ************************************************************ >Friday 21 September 2018 08:18:29 -0400 (0:00:00.091) 0:01:52.332 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [container-registry : enable net.ipv4.ip_forward] ************************* >Friday 21 September 2018 08:18:29 -0400 (0:00:00.146) 0:01:52.478 ****** >changed: [controller-0] => {"changed": true} > >TASK [container-registry : ensure docker is installed] ************************* >Friday 21 September 2018 08:18:30 -0400 (0:00:00.361) 0:01:52.840 ****** >ok: [controller-0] => 
{"changed": false, "msg": "", "rc": 0, "results": ["2:docker-1.13.1-74.git6e3bb8e.el7.x86_64 providing docker is already installed"]} > >TASK [container-registry : manage /etc/systemd/system/docker.service.d] ******** >Friday 21 September 2018 08:18:30 -0400 (0:00:00.578) 0:01:53.419 ****** >changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/etc/systemd/system/docker.service.d", "secontext": "unconfined_u:object_r:systemd_unit_file_t:s0", "size": 6, "state": "directory", "uid": 0} > >TASK [container-registry : unset mountflags] *********************************** >Friday 21 September 2018 08:18:31 -0400 (0:00:00.226) 0:01:53.645 ****** >changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0644", "msg": "section and option added", "owner": "root", "path": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "secontext": "unconfined_u:object_r:systemd_unit_file_t:s0", "size": 25, "state": "file", "uid": 0} > >TASK [container-registry : configure OPTIONS in /etc/sysconfig/docker] ********* >Friday 21 September 2018 08:18:31 -0400 (0:00:00.352) 0:01:53.998 ****** >changed: [controller-0] => {"backup": "", "changed": true, "msg": "line replaced"} > >TASK [container-registry : configure INSECURE_REGISTRY in /etc/sysconfig/docker] *** >Friday 21 September 2018 08:18:31 -0400 (0:00:00.245) 0:01:54.244 ****** >changed: [controller-0] => {"backup": "", "changed": true, "msg": "line added"} > >TASK [container-registry : Create additional socket directories] *************** >Friday 21 September 2018 08:18:32 -0400 (0:00:00.245) 0:01:54.490 ****** >changed: [controller-0] => (item=/var/lib/openstack/docker.sock) => {"changed": true, "gid": 0, "group": "root", "item": "/var/lib/openstack/docker.sock", "mode": "0755", "owner": "root", "path": "/var/lib/openstack", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} > >TASK [container-registry : manage /etc/docker/daemon.json] ********************* >Friday 21 September 2018 08:18:32 -0400 (0:00:00.259) 0:01:54.749 ****** >changed: [controller-0] => {"changed": true, "checksum": "d1771eedce1344ec4d3895016dc72907c117e86b", "dest": "/etc/docker/daemon.json", "gid": 0, "group": "root", "md5sum": "ae138a173e2cfb9312379cf88457c29e", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:container_config_t:s0", "size": 20, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532312.31-216631250541067/source", "state": "file", "uid": 0} > >TASK [container-registry : configure DOCKER_STORAGE_OPTIONS in /etc/sysconfig/docker-storage] *** >Friday 21 September 2018 08:18:32 -0400 (0:00:00.576) 0:01:55.326 ****** >changed: [controller-0] => {"backup": "", "changed": true, "msg": "line replaced"} > >TASK [container-registry : configure DOCKER_NETWORK_OPTIONS in /etc/sysconfig/docker-network] *** >Friday 21 September 2018 08:18:33 -0400 (0:00:00.274) 0:01:55.601 ****** >changed: [controller-0] => {"backup": "", "changed": true, "msg": "line replaced"} > >TASK [container-registry : ensure docker group exists] ************************* >Friday 21 September 2018 08:18:33 -0400 (0:00:00.239) 0:01:55.841 ****** >changed: [controller-0] => {"changed": true, "gid": 1003, "name": "docker", "state": "present", "system": false} > >TASK [container-registry : add deployment user to docker group] **************** >Friday 21 September 2018 08:18:33 -0400 (0:00:00.414) 0:01:56.256 ****** >skipping: [controller-0] => {"changed": 
false, "skip_reason": "Conditional result was False"} > >RUNNING HANDLER [container-registry : restart docker] ************************** >Friday 21 September 2018 08:18:33 -0400 (0:00:00.022) 0:01:56.278 ****** >changed: [controller-0] => {"changed": true, "cmd": ["/bin/true"], "delta": "0:00:00.002415", "end": "2018-09-21 08:18:34.018630", "rc": 0, "start": "2018-09-21 08:18:34.016215", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} > >RUNNING HANDLER [container-registry : Docker | reload systemd] ***************** >Friday 21 September 2018 08:18:34 -0400 (0:00:00.240) 0:01:56.518 ****** >ok: [controller-0] => {"changed": false, "name": null, "status": {}} > >RUNNING HANDLER [container-registry : Docker | reload docker] ****************** >Friday 21 September 2018 08:18:34 -0400 (0:00:00.475) 0:01:56.994 ****** >changed: [controller-0] => {"changed": true, "name": "docker", "state": "started", "status": {"ActiveEnterTimestampMonotonic": "0", "ActiveExitTimestampMonotonic": "0", "ActiveState": "inactive", "After": "registries.service rhel-push-plugin.socket docker-storage-setup.service systemd-journald.socket basic.target system.slice network.target", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "no", "AssertTimestampMonotonic": "0", "Before": "shutdown.target paunch-container-shutdown.service", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "no", "ConditionTimestampMonotonic": "0", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Docker Application Container Engine", "DevicePolicy": "auto", "Documentation": "http://docs.docker.com", "DropInPaths": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "Environment": "GOTRACEBACK=crash DOCKER_HTTP_HOST_COMPAT=1 PATH=/usr/libexec/docker:/usr/bin:/usr/sbin", "EnvironmentFile": "/etc/sysconfig/docker-network (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "0", "ExecMainStartTimestampMonotonic": "0", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/bin/dockerd-current ; argv[]=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/docker.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "docker.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestampMonotonic": "0", 
"JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "1048576", "LimitNPROC": "1048576", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "127798", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "0", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "docker.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "main", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RequiredBy": "docker-cleanup.service", "Requires": "docker-cleanup.timer rhel-push-plugin.socket registries.service basic.target", "Restart": "on-abnormal", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "dead", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "0", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "disabled", "Wants": "docker-storage-setup.service system.slice", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0"}} > >RUNNING HANDLER [container-registry : Docker | pause while Docker restarts] **** >Friday 21 September 2018 08:18:36 -0400 (0:00:01.530) 0:01:58.525 ****** >Pausing for 10 seconds >(ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort) >[container-registry : Docker | pause while Docker restarts] >Waiting for docker restart: >ok: [controller-0] => {"changed": false, "delta": 10, "echo": true, "rc": 0, "start": "2018-09-21 08:18:36.109262", "stderr": "", "stdout": "Paused for 10.0 seconds", "stop": "2018-09-21 08:18:46.109436", "user_input": ""} > >RUNNING HANDLER [container-registry : Docker | wait for docker] **************** >Friday 21 September 2018 08:18:46 -0400 (0:00:10.074) 0:02:08.599 ****** >changed: [controller-0] => {"attempts": 1, "changed": true, "cmd": ["/usr/bin/docker", "images"], "delta": "0:00:00.044655", "end": "2018-09-21 08:18:46.410371", "rc": 0, "start": "2018-09-21 08:18:46.365716", "stderr": "", "stderr_lines": [], "stdout": "REPOSITORY TAG IMAGE ID CREATED SIZE", "stdout_lines": ["REPOSITORY TAG IMAGE ID CREATED SIZE"]} > >TASK [container-registry : enable and start docker] 
**************************** >Friday 21 September 2018 08:18:46 -0400 (0:00:00.314) 0:02:08.913 ****** >changed: [controller-0] => {"changed": true, "enabled": true, "name": "docker", "state": "started", "status": {"ActiveEnterTimestamp": "Fri 2018-09-21 08:18:36 EDT", "ActiveEnterTimestampMonotonic": "355978961", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "registries.service rhel-push-plugin.socket docker-storage-setup.service systemd-journald.socket basic.target system.slice network.target", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Fri 2018-09-21 08:18:34 EDT", "AssertTimestampMonotonic": "354799737", "Before": "shutdown.target paunch-container-shutdown.service", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Fri 2018-09-21 08:18:34 EDT", "ConditionTimestampMonotonic": "354799737", "Conflicts": "shutdown.target", "ControlGroup": "/system.slice/docker.service", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Docker Application Container Engine", "DevicePolicy": "auto", "Documentation": "http://docs.docker.com", "DropInPaths": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "Environment": "GOTRACEBACK=crash DOCKER_HTTP_HOST_COMPAT=1 PATH=/usr/libexec/docker:/usr/bin:/usr/sbin", "EnvironmentFile": "/etc/sysconfig/docker-network (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "14979", "ExecMainStartTimestamp": "Fri 2018-09-21 08:18:34 EDT", "ExecMainStartTimestampMonotonic": "354801380", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/bin/dockerd-current ; argv[]=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES ; ignore_errors=no ; start_time=[Fri 2018-09-21 08:18:34 EDT] ; stop_time=[n/a] ; pid=14979 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/docker.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "docker.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Fri 2018-09-21 08:18:34 EDT", "InactiveExitTimestampMonotonic": "354801436", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", 
"LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "1048576", "LimitNPROC": "1048576", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "127798", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "14979", "MemoryAccounting": "no", "MemoryCurrent": "67338240", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "docker.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "main", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RequiredBy": "docker-cleanup.service", "Requires": "docker-cleanup.timer rhel-push-plugin.socket registries.service basic.target", "Restart": "on-abnormal", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "26", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "0", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "disabled", "Wants": "docker-storage-setup.service system.slice", "WatchdogTimestamp": "Fri 2018-09-21 08:18:36 EDT", "WatchdogTimestampMonotonic": "355978904", "WatchdogUSec": "0"}} > > >TASK [create persistent logs directory] **************************************** >Friday 21 September 2018 08:18:46 -0400 (0:00:00.322) 0:02:09.236 ****** >skipping: [compute-0] => (item=/var/log/containers/glance) => {"changed": false, "item": "/var/log/containers/glance", "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item=/var/log/containers/glance) => {"changed": false, "item": "/var/log/containers/glance", "skip_reason": "Conditional result was False"} >changed: [controller-0] => (item=/var/log/containers/glance) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/glance", "mode": "0755", "owner": "root", "path": "/var/log/containers/glance", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} > >TASK [glance logs readme] ****************************************************** >Friday 21 September 2018 08:18:47 -0400 (0:00:00.251) 0:02:09.488 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >fatal: [controller-0]: FAILED! 
=> {"changed": false, "checksum": "e368ae3272baeb19e1113009ea5dae00e797c919", "msg": "Destination directory /var/log/glance does not exist"} >...ignoring > >TASK [set_fact] **************************************************************** >Friday 21 September 2018 08:18:47 -0400 (0:00:00.514) 0:02:10.003 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [file] ******************************************************************** >Friday 21 September 2018 08:18:47 -0400 (0:00:00.093) 0:02:10.097 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [stat] ******************************************************************** >Friday 21 September 2018 08:18:47 -0400 (0:00:00.094) 0:02:10.191 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [copy] ******************************************************************** >Friday 21 September 2018 08:18:47 -0400 (0:00:00.092) 0:02:10.283 ****** >skipping: [controller-0] => (item={u'NETAPP_SHARE': u''}) => {"changed": false, "item": {"NETAPP_SHARE": ""}, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item={u'NETAPP_SHARE': u''}) => {"changed": false, "item": {"NETAPP_SHARE": ""}, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item={u'NETAPP_SHARE': u''}) => {"changed": false, "item": {"NETAPP_SHARE": ""}, "skip_reason": "Conditional result was False"} > >TASK [mount] ******************************************************************* >Friday 21 September 2018 08:18:47 -0400 (0:00:00.102) 0:02:10.386 ****** >skipping: [controller-0] => (item={u'NETAPP_SHARE': u'', u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0'}) => {"changed": false, "item": {"NETAPP_SHARE": "", "NFS_OPTIONS": "_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0"}, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item={u'NETAPP_SHARE': u'', u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0'}) => {"changed": false, "item": {"NETAPP_SHARE": "", "NFS_OPTIONS": "_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0"}, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item={u'NETAPP_SHARE': u'', u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0'}) => {"changed": false, "item": {"NETAPP_SHARE": "", "NFS_OPTIONS": "_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0"}, "skip_reason": "Conditional result was False"} > >TASK [Mount NFS on host] ******************************************************* >Friday 21 September 2018 08:18:48 -0400 (0:00:00.109) 0:02:10.495 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] 
=> {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Mount Node Staging Location] ********************************************* >Friday 21 September 2018 08:18:48 -0400 (0:00:00.100) 0:02:10.596 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [create persistent logs directory] **************************************** >Friday 21 September 2018 08:18:48 -0400 (0:00:00.094) 0:02:10.690 ****** >skipping: [compute-0] => (item=/var/log/containers/gnocchi) => {"changed": false, "item": "/var/log/containers/gnocchi", "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item=/var/log/containers/httpd/gnocchi-api) => {"changed": false, "item": "/var/log/containers/httpd/gnocchi-api", "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item=/var/log/containers/gnocchi) => {"changed": false, "item": "/var/log/containers/gnocchi", "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item=/var/log/containers/httpd/gnocchi-api) => {"changed": false, "item": "/var/log/containers/httpd/gnocchi-api", "skip_reason": "Conditional result was False"} >changed: [controller-0] => (item=/var/log/containers/gnocchi) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/gnocchi", "mode": "0755", "owner": "root", "path": "/var/log/containers/gnocchi", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >changed: [controller-0] => (item=/var/log/containers/httpd/gnocchi-api) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/gnocchi-api", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/gnocchi-api", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} > >TASK [gnocchi logs readme] ***************************************************** >Friday 21 September 2018 08:18:48 -0400 (0:00:00.411) 0:02:11.102 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >fatal: [controller-0]: FAILED! 
=> {"changed": false, "checksum": "2f6114e0f135d7222e70a07579ab0b2b6f967ff8", "msg": "Destination directory /var/log/gnocchi does not exist"} >...ignoring > >TASK [create persistent logs directory] **************************************** >Friday 21 September 2018 08:18:49 -0400 (0:00:00.537) 0:02:11.639 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/gnocchi", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} > >TASK [get parameters] ********************************************************** >Friday 21 September 2018 08:18:49 -0400 (0:00:00.247) 0:02:11.887 ****** >ok: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} > >TASK [get DeployedSSLCertificatePath attributes] ******************************* >Friday 21 September 2018 08:18:49 -0400 (0:00:00.105) 0:02:11.992 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Assign bootstrap node] *************************************************** >Friday 21 September 2018 08:18:49 -0400 (0:00:00.100) 0:02:12.092 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [set is_bootstrap_node fact] ********************************************** >Friday 21 September 2018 08:18:49 -0400 (0:00:00.098) 0:02:12.191 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [get haproxy status] ****************************************************** >Friday 21 September 2018 08:18:49 -0400 (0:00:00.099) 0:02:12.291 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [get pacemaker status] **************************************************** >Friday 21 September 2018 08:18:49 -0400 (0:00:00.097) 0:02:12.388 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [get docker status] 
******************************************************* >Friday 21 September 2018 08:18:49 -0400 (0:00:00.098) 0:02:12.487 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [get container_id] ******************************************************** >Friday 21 September 2018 08:18:50 -0400 (0:00:00.102) 0:02:12.590 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [get pcs resource name for haproxy container] ***************************** >Friday 21 September 2018 08:18:50 -0400 (0:00:00.098) 0:02:12.688 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [remove DeployedSSLCertificatePath if is dir] ***************************** >Friday 21 September 2018 08:18:50 -0400 (0:00:00.099) 0:02:12.787 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [push certificate content] ************************************************ >Friday 21 September 2018 08:18:50 -0400 (0:00:00.108) 0:02:12.895 ****** >skipping: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} > >TASK [set certificate ownership] *********************************************** >Friday 21 September 2018 08:18:50 -0400 (0:00:00.106) 0:02:13.002 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [reload haproxy if enabled] *********************************************** >Friday 21 September 2018 08:18:50 -0400 (0:00:00.105) 0:02:13.107 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [restart pacemaker resource for haproxy] ********************************** >Friday 21 September 2018 08:18:50 -0400 (0:00:00.107) 0:02:13.215 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: 
[ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [set kolla_dir fact] ****************************************************** >Friday 21 September 2018 08:18:50 -0400 (0:00:00.105) 0:02:13.320 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [assert {{ kolla_dir }}{{ cert_path }} exists] **************************** >Friday 21 September 2018 08:18:50 -0400 (0:00:00.103) 0:02:13.424 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [set certificate group on host via container] ***************************** >Friday 21 September 2018 08:18:51 -0400 (0:00:00.099) 0:02:13.523 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [copy certificate from kolla directory to final location] ***************** >Friday 21 September 2018 08:18:51 -0400 (0:00:00.147) 0:02:13.670 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [send restart order to haproxy container] ********************************* >Friday 21 September 2018 08:18:51 -0400 (0:00:00.094) 0:02:13.765 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [create persistent directories] ******************************************* >Friday 21 September 2018 08:18:51 -0400 (0:00:00.090) 0:02:13.855 ****** >skipping: [compute-0] => (item=/var/lib/haproxy) => {"changed": false, "item": "/var/lib/haproxy", "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item=/var/lib/haproxy) => {"changed": false, "item": "/var/lib/haproxy", "skip_reason": "Conditional result was False"} >ok: [controller-0] => (item=/var/lib/haproxy) => {"changed": false, "gid": 188, "group": "haproxy", "item": "/var/lib/haproxy", "mode": "0755", "owner": "haproxy", "path": "/var/lib/haproxy", "secontext": "system_u:object_r:haproxy_var_lib_t:s0", "size": 6, "state": "directory", "uid": 188} > >TASK [create persistent logs directory] **************************************** >Friday 21 September 2018 08:18:51 -0400 (0:00:00.244) 0:02:14.100 ****** >skipping: [compute-0] => (item=/var/log/containers/heat) => {"changed": false, "item": "/var/log/containers/heat", "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item=/var/log/containers/httpd/heat-api) => {"changed": false, "item": "/var/log/containers/httpd/heat-api", "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item=/var/log/containers/heat) => 
{"changed": false, "item": "/var/log/containers/heat", "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item=/var/log/containers/httpd/heat-api) => {"changed": false, "item": "/var/log/containers/httpd/heat-api", "skip_reason": "Conditional result was False"} >changed: [controller-0] => (item=/var/log/containers/heat) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/heat", "mode": "0755", "owner": "root", "path": "/var/log/containers/heat", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >changed: [controller-0] => (item=/var/log/containers/httpd/heat-api) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/heat-api", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/heat-api", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} > >TASK [heat logs readme] ******************************************************** >Friday 21 September 2018 08:18:52 -0400 (0:00:00.428) 0:02:14.528 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >fatal: [controller-0]: FAILED! => {"changed": false, "checksum": "d30ca3bda176434d31659e7379616dd162ddb246", "msg": "Destination directory /var/log/heat does not exist"} >...ignoring > >TASK [create persistent logs directory] **************************************** >Friday 21 September 2018 08:18:52 -0400 (0:00:00.500) 0:02:15.029 ****** >skipping: [compute-0] => (item=/var/log/containers/heat) => {"changed": false, "item": "/var/log/containers/heat", "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item=/var/log/containers/httpd/heat-api-cfn) => {"changed": false, "item": "/var/log/containers/httpd/heat-api-cfn", "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item=/var/log/containers/heat) => {"changed": false, "item": "/var/log/containers/heat", "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item=/var/log/containers/httpd/heat-api-cfn) => {"changed": false, "item": "/var/log/containers/httpd/heat-api-cfn", "skip_reason": "Conditional result was False"} >ok: [controller-0] => (item=/var/log/containers/heat) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/heat", "mode": "0755", "owner": "root", "path": "/var/log/containers/heat", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >changed: [controller-0] => (item=/var/log/containers/httpd/heat-api-cfn) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/heat-api-cfn", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/heat-api-cfn", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} > >TASK [create persistent logs directory] **************************************** >Friday 21 September 2018 08:18:52 -0400 (0:00:00.389) 0:02:15.418 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/heat", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} > >TASK [create 
persistent logs directory] **************************************** >Friday 21 September 2018 08:18:53 -0400 (0:00:00.215) 0:02:15.633 ****** >skipping: [compute-0] => (item=/var/log/containers/horizon) => {"changed": false, "item": "/var/log/containers/horizon", "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item=/var/log/containers/httpd/horizon) => {"changed": false, "item": "/var/log/containers/httpd/horizon", "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item=/var/log/containers/horizon) => {"changed": false, "item": "/var/log/containers/horizon", "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item=/var/log/containers/httpd/horizon) => {"changed": false, "item": "/var/log/containers/httpd/horizon", "skip_reason": "Conditional result was False"} >changed: [controller-0] => (item=/var/log/containers/horizon) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/horizon", "mode": "0755", "owner": "root", "path": "/var/log/containers/horizon", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >changed: [controller-0] => (item=/var/log/containers/httpd/horizon) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/horizon", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/horizon", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} > >TASK [horizon logs readme] ***************************************************** >Friday 21 September 2018 08:18:53 -0400 (0:00:00.400) 0:02:16.034 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >fatal: [controller-0]: FAILED! 
=> {"changed": false, "checksum": "ac324739761cb36b925d6e309482e26f7fe49b91", "msg": "Destination directory /var/log/horizon does not exist"} >...ignoring > >TASK [stat /lib/systemd/system/iscsid.socket] ********************************** >Friday 21 September 2018 08:18:54 -0400 (0:00:00.503) 0:02:16.537 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >ok: [controller-0] => {"changed": false, "stat": {"atime": 1537532314.4859447, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "us-ascii", "checksum": "424de87cd6ae66547b285288742255731a46ab83", "ctime": 1537376286.9259925, "dev": 64514, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 499913, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "text/plain", "mode": "0644", "mtime": 1513292517.0, "nlink": 1, "path": "/lib/systemd/system/iscsid.socket", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 175, "uid": 0, "version": "233990009", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}} > >TASK [Stop and disable iscsid.socket service] ********************************** >Friday 21 September 2018 08:18:54 -0400 (0:00:00.228) 0:02:16.765 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >changed: [controller-0] => {"changed": true, "enabled": false, "name": "iscsid.socket", "state": "stopped", "status": {"Accept": "no", "ActiveEnterTimestamp": "Fri 2018-09-21 08:12:43 EDT", "ActiveEnterTimestampMonotonic": "3418384", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "sysinit.target -.slice", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Fri 2018-09-21 08:12:43 EDT", "AssertTimestampMonotonic": "3418096", "Backlog": "128", "Before": "iscsid.service sockets.target shutdown.target", "BindIPv6Only": "default", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "Broadcast": "no", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Fri 2018-09-21 08:12:43 EDT", "ConditionTimestampMonotonic": "3418096", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "DeferAcceptUSec": "0", "Delegate": "no", "Description": "Open-iSCSI iscsid Socket", "DevicePolicy": "auto", "DirectoryMode": "0755", "Documentation": "man:iscsid(8) man:iscsiadm(8)", "FragmentPath": "/usr/lib/systemd/system/iscsid.socket", "FreeBind": "no", "IOScheduling": "0", "IPTOS": "-1", "IPTTL": "-1", "Id": "iscsid.socket", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Fri 2018-09-21 08:12:43 EDT", "InactiveExitTimestampMonotonic": "3418384", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KeepAlive": "no", "KeepAliveIntervalUSec": "0", 
"KeepAliveProbes": "0", "KeepAliveTimeUSec": "0", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "4096", "LimitNPROC": "127798", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "127798", "LimitSTACK": "18446744073709551615", "ListenStream": "@ISCSIADM_ABSTRACT_NAMESPACE", "LoadState": "loaded", "Mark": "-1", "MaxConnections": "64", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "NAccepted": "0", "NConnections": "0", "Names": "iscsid.socket", "NeedDaemonReload": "no", "Nice": "0", "NoDelay": "no", "NoNewPrivileges": "no", "NonBlocking": "no", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PassCredentials": "no", "PassSecurity": "no", "PipeSize": "0", "Priority": "-1", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "ReceiveBuffer": "0", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemoveOnStop": "no", "Requires": "sysinit.target", "Result": "success", "ReusePort": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendBuffer": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "SocketMode": "0666", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StopWhenUnneeded": "no", "SubState": "listening", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Transparent": "no", "Triggers": "iscsid.service", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "enabled", "WantedBy": "sockets.target", "Wants": "-.slice"}} > >TASK [create persistent logs directory] **************************************** >Friday 21 September 2018 08:18:54 -0400 (0:00:00.312) 0:02:17.078 ****** >skipping: [compute-0] => (item=/var/log/containers/keystone) => {"changed": false, "item": "/var/log/containers/keystone", "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item=/var/log/containers/httpd/keystone) => {"changed": false, "item": "/var/log/containers/httpd/keystone", "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item=/var/log/containers/keystone) => {"changed": false, "item": "/var/log/containers/keystone", "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item=/var/log/containers/httpd/keystone) => {"changed": false, "item": "/var/log/containers/httpd/keystone", "skip_reason": "Conditional result was False"} >changed: [controller-0] => (item=/var/log/containers/keystone) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/keystone", "mode": "0755", "owner": "root", "path": "/var/log/containers/keystone", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >changed: [controller-0] => (item=/var/log/containers/httpd/keystone) => 
{"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/keystone", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/keystone", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} > >TASK [keystone logs readme] **************************************************** >Friday 21 September 2018 08:18:55 -0400 (0:00:00.424) 0:02:17.502 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >fatal: [controller-0]: FAILED! => {"changed": false, "checksum": "910be882addb6df99267e9bd303f6d9bf658562e", "msg": "Destination directory /var/log/keystone does not exist"} >...ignoring > >TASK [memcached logs readme] *************************************************** >Friday 21 September 2018 08:18:55 -0400 (0:00:00.519) 0:02:18.022 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >changed: [controller-0] => {"changed": true, "checksum": "3b6f3952a077d2e5003df30c8c439478917cb6c4", "dest": "/var/log/memcached-readme.txt", "gid": 0, "group": "root", "md5sum": "ffdb1524e5789470856ae32ded4e2f80", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_log_t:s0", "size": 48, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532335.58-69079353862252/source", "state": "file", "uid": 0} > >TASK [create persistent directories] ******************************************* >Friday 21 September 2018 08:18:56 -0400 (0:00:00.557) 0:02:18.580 ****** >skipping: [compute-0] => (item=/var/log/containers/mysql) => {"changed": false, "item": "/var/log/containers/mysql", "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item=/var/lib/mysql) => {"changed": false, "item": "/var/lib/mysql", "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item=/var/log/containers/mysql) => {"changed": false, "item": "/var/log/containers/mysql", "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item=/var/lib/mysql) => {"changed": false, "item": "/var/lib/mysql", "skip_reason": "Conditional result was False"} >changed: [controller-0] => (item=/var/log/containers/mysql) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/mysql", "mode": "0755", "owner": "root", "path": "/var/log/containers/mysql", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >ok: [controller-0] => (item=/var/lib/mysql) => {"changed": false, "gid": 27, "group": "mysql", "item": "/var/lib/mysql", "mode": "0755", "owner": "mysql", "path": "/var/lib/mysql", "secontext": "system_u:object_r:mysqld_db_t:s0", "size": 6, "state": "directory", "uid": 27} > >TASK [mysql logs readme] ******************************************************* >Friday 21 September 2018 08:18:56 -0400 (0:00:00.396) 0:02:18.976 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >changed: [controller-0] => {"changed": true, "checksum": "de8fb5fe96200ab286121f8a09419702bd693743", "dest": "/var/log/mariadb/readme.txt", "gid": 0, "group": "root", "md5sum": "1f3e80eed7060dfe5ee49c8063244c53", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:mysqld_log_t:s0", "size": 
78, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532336.53-249540474278850/source", "state": "file", "uid": 0} > >TASK [create persistent logs directory] **************************************** >Friday 21 September 2018 08:18:57 -0400 (0:00:00.524) 0:02:19.501 ****** >skipping: [compute-0] => (item=/var/log/containers/neutron) => {"changed": false, "item": "/var/log/containers/neutron", "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item=/var/log/containers/httpd/neutron-api) => {"changed": false, "item": "/var/log/containers/httpd/neutron-api", "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item=/var/log/containers/neutron) => {"changed": false, "item": "/var/log/containers/neutron", "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item=/var/log/containers/httpd/neutron-api) => {"changed": false, "item": "/var/log/containers/httpd/neutron-api", "skip_reason": "Conditional result was False"} >changed: [controller-0] => (item=/var/log/containers/neutron) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/neutron", "mode": "0755", "owner": "root", "path": "/var/log/containers/neutron", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >changed: [controller-0] => (item=/var/log/containers/httpd/neutron-api) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/neutron-api", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/neutron-api", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} > >TASK [neutron logs readme] ***************************************************** >Friday 21 September 2018 08:18:57 -0400 (0:00:00.411) 0:02:19.913 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >fatal: [controller-0]: FAILED! 
=> {"changed": false, "checksum": "f5a95f434a4aad25a9a81a045dec39159a6e8864", "msg": "Destination directory /var/log/neutron does not exist"} >...ignoring > >TASK [create persistent logs directory] **************************************** >Friday 21 September 2018 08:18:57 -0400 (0:00:00.503) 0:02:20.416 ****** >skipping: [compute-0] => (item=/var/log/containers/neutron) => {"changed": false, "item": "/var/log/containers/neutron", "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item=/var/log/containers/neutron) => {"changed": false, "item": "/var/log/containers/neutron", "skip_reason": "Conditional result was False"} >ok: [controller-0] => (item=/var/log/containers/neutron) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/neutron", "mode": "0755", "owner": "root", "path": "/var/log/containers/neutron", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} > >TASK [create /var/lib/neutron] ************************************************* >Friday 21 September 2018 08:18:58 -0400 (0:00:00.235) 0:02:20.651 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/neutron", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} > >TASK [Copy in cleanup script] ************************************************** >Friday 21 September 2018 08:18:58 -0400 (0:00:00.243) 0:02:20.895 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >changed: [controller-0] => {"changed": true, "checksum": "659dc874a58142f127a275d34c6d90d27b3a4150", "dest": "/usr/libexec/neutron-cleanup", "gid": 0, "group": "root", "md5sum": "e5ee7754f01168fb9053e4dd66eff58c", "mode": "0755", "owner": "root", "secontext": "system_u:object_r:bin_t:s0", "size": 675, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532338.45-199074515081719/source", "state": "file", "uid": 0} > >TASK [Copy in cleanup service] ************************************************* >Friday 21 September 2018 08:18:58 -0400 (0:00:00.556) 0:02:21.451 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >changed: [controller-0] => {"changed": true, "checksum": "1950d05f025c3db49014a49372fce15fa9014693", "dest": "/usr/lib/systemd/system/neutron-cleanup.service", "gid": 0, "group": "root", "md5sum": "0dd683a7d38da6dfb537927032db6f22", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:neutron_unit_file_t:s0", "size": 231, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532339.01-173460617753359/source", "state": "file", "uid": 0} > >TASK [Enabling the cleanup service] ******************************************** >Friday 21 September 2018 08:18:59 -0400 (0:00:00.546) 0:02:21.998 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >changed: [controller-0] => {"changed": true, "enabled": true, "name": "neutron-cleanup", "status": 
{"ActiveEnterTimestampMonotonic": "0", "ActiveExitTimestampMonotonic": "0", "ActiveState": "inactive", "After": "basic.target system.slice openvswitch.service systemd-journald.socket network.target", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "no", "AssertTimestampMonotonic": "0", "Before": "docker.service shutdown.target", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "no", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "no", "ConditionTimestampMonotonic": "0", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Neutron cleanup on startup", "DevicePolicy": "auto", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "0", "ExecMainStartTimestampMonotonic": "0", "ExecMainStatus": "0", "ExecStart": "{ path=/usr/libexec/neutron-cleanup ; argv[]=/usr/libexec/neutron-cleanup ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/neutron-cleanup.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "neutron-cleanup.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestampMonotonic": "0", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "4096", "LimitNPROC": "127798", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "127798", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "0", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "neutron-cleanup.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "yes", "RemainAfterExit": "no", "Requires": "basic.target", "Restart": "no", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "dead", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": 
"no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "0", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "oneshot", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "disabled", "Wants": "system.slice", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0"}} > >TASK [create persistent logs directory] **************************************** >Friday 21 September 2018 08:18:59 -0400 (0:00:00.309) 0:02:22.307 ****** >skipping: [compute-0] => (item=/var/log/containers/nova) => {"changed": false, "item": "/var/log/containers/nova", "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item=/var/log/containers/httpd/nova-api) => {"changed": false, "item": "/var/log/containers/httpd/nova-api", "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item=/var/log/containers/nova) => {"changed": false, "item": "/var/log/containers/nova", "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item=/var/log/containers/httpd/nova-api) => {"changed": false, "item": "/var/log/containers/httpd/nova-api", "skip_reason": "Conditional result was False"} >changed: [controller-0] => (item=/var/log/containers/nova) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/nova", "mode": "0755", "owner": "root", "path": "/var/log/containers/nova", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >changed: [controller-0] => (item=/var/log/containers/httpd/nova-api) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/nova-api", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/nova-api", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} > >TASK [nova logs readme] ******************************************************** >Friday 21 September 2018 08:19:00 -0400 (0:00:00.421) 0:02:22.729 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >fatal: [controller-0]: FAILED! 
=> {"changed": false, "checksum": "c2216cc4edf5d3ce90f10748c3243db4e1842a85", "msg": "Destination directory /var/log/nova does not exist"} >...ignoring > >TASK [create persistent logs directory] **************************************** >Friday 21 September 2018 08:19:00 -0400 (0:00:00.585) 0:02:23.315 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/nova", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} > >TASK [create persistent logs directory] **************************************** >Friday 21 September 2018 08:19:01 -0400 (0:00:00.293) 0:02:23.608 ****** >skipping: [compute-0] => (item=/var/log/containers/nova) => {"changed": false, "item": "/var/log/containers/nova", "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item=/var/log/containers/httpd/nova-placement) => {"changed": false, "item": "/var/log/containers/httpd/nova-placement", "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item=/var/log/containers/nova) => {"changed": false, "item": "/var/log/containers/nova", "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item=/var/log/containers/httpd/nova-placement) => {"changed": false, "item": "/var/log/containers/httpd/nova-placement", "skip_reason": "Conditional result was False"} >ok: [controller-0] => (item=/var/log/containers/nova) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/nova", "mode": "0755", "owner": "root", "path": "/var/log/containers/nova", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >changed: [controller-0] => (item=/var/log/containers/httpd/nova-placement) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/nova-placement", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/nova-placement", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} > >TASK [NTP settings] ************************************************************ >Friday 21 September 2018 08:19:01 -0400 (0:00:00.459) 0:02:24.068 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >ok: [controller-0] => {"ansible_facts": {"ntp_install_packages": false, "ntp_servers": ["10.35.255.6"]}, "changed": false} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Install ntpdate] ********************************************************* >Friday 21 September 2018 08:19:01 -0400 (0:00:00.098) 0:02:24.167 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Ensure system is NTP time synced] **************************************** >Friday 21 September 2018 08:19:01 -0400 (0:00:00.095) 0:02:24.263 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >changed: [controller-0] => {"changed": true, "cmd": ["ntpdate", 
"-u", "10.35.255.6"], "delta": "0:00:06.255843", "end": "2018-09-21 08:19:08.241052", "rc": 0, "start": "2018-09-21 08:19:01.985209", "stderr": "", "stderr_lines": [], "stdout": "21 Sep 08:19:08 ntpdate[16381]: adjust time server 10.35.255.6 offset -0.031690 sec", "stdout_lines": ["21 Sep 08:19:08 ntpdate[16381]: adjust time server 10.35.255.6 offset -0.031690 sec"]} > >TASK [create persistent logs directory] **************************************** >Friday 21 September 2018 08:19:08 -0400 (0:00:06.482) 0:02:30.745 ****** >skipping: [compute-0] => (item=/var/log/containers/panko) => {"changed": false, "item": "/var/log/containers/panko", "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item=/var/log/containers/httpd/panko-api) => {"changed": false, "item": "/var/log/containers/httpd/panko-api", "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item=/var/log/containers/panko) => {"changed": false, "item": "/var/log/containers/panko", "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item=/var/log/containers/httpd/panko-api) => {"changed": false, "item": "/var/log/containers/httpd/panko-api", "skip_reason": "Conditional result was False"} >changed: [controller-0] => (item=/var/log/containers/panko) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/panko", "mode": "0755", "owner": "root", "path": "/var/log/containers/panko", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >changed: [controller-0] => (item=/var/log/containers/httpd/panko-api) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/panko-api", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/panko-api", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} > >TASK [panko logs readme] ******************************************************* >Friday 21 September 2018 08:19:08 -0400 (0:00:00.439) 0:02:31.185 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >fatal: [controller-0]: FAILED! 
=> {"changed": false, "checksum": "903397bbd82e9b1f53087e3d7e8975d851857ce2", "msg": "Destination directory /var/log/panko does not exist"} >...ignoring > >TASK [create persistent directories] ******************************************* >Friday 21 September 2018 08:19:09 -0400 (0:00:00.521) 0:02:31.707 ****** >skipping: [compute-0] => (item=/var/lib/rabbitmq) => {"changed": false, "item": "/var/lib/rabbitmq", "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item=/var/log/containers/rabbitmq) => {"changed": false, "item": "/var/log/containers/rabbitmq", "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item=/var/lib/rabbitmq) => {"changed": false, "item": "/var/lib/rabbitmq", "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item=/var/log/containers/rabbitmq) => {"changed": false, "item": "/var/log/containers/rabbitmq", "skip_reason": "Conditional result was False"} >changed: [controller-0] => (item=/var/lib/rabbitmq) => {"changed": true, "gid": 0, "group": "root", "item": "/var/lib/rabbitmq", "mode": "0755", "owner": "root", "path": "/var/lib/rabbitmq", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >changed: [controller-0] => (item=/var/log/containers/rabbitmq) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/rabbitmq", "mode": "0755", "owner": "root", "path": "/var/log/containers/rabbitmq", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} > >TASK [rabbitmq logs readme] **************************************************** >Friday 21 September 2018 08:19:09 -0400 (0:00:00.420) 0:02:32.127 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >fatal: [controller-0]: FAILED! 
=> {"changed": false, "checksum": "ee241f2199f264c9d0f384cf389fe255e8bf8a77", "msg": "Destination directory /var/log/rabbitmq does not exist"} >...ignoring > >TASK [stop the Erlang port mapper on the host and make sure it cannot bind to the port used by container] *** >Friday 21 September 2018 08:19:10 -0400 (0:00:00.525) 0:02:32.652 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >changed: [controller-0] => {"changed": true, "cmd": "echo 'export ERL_EPMD_ADDRESS=127.0.0.1' > /etc/rabbitmq/rabbitmq-env.conf\n echo 'export ERL_EPMD_PORT=4370' >> /etc/rabbitmq/rabbitmq-env.conf\n for pid in $(pgrep epmd --ns 1 --nslist pid); do kill $pid; done", "delta": "0:00:00.042662", "end": "2018-09-21 08:19:10.423991", "rc": 0, "start": "2018-09-21 08:19:10.381329", "stderr": "/bin/sh: /etc/rabbitmq/rabbitmq-env.conf: No such file or directory\n/bin/sh: line 1: /etc/rabbitmq/rabbitmq-env.conf: No such file or directory", "stderr_lines": ["/bin/sh: /etc/rabbitmq/rabbitmq-env.conf: No such file or directory", "/bin/sh: line 1: /etc/rabbitmq/rabbitmq-env.conf: No such file or directory"], "stdout": "", "stdout_lines": []} > >TASK [create persistent directories] ******************************************* >Friday 21 September 2018 08:19:10 -0400 (0:00:00.269) 0:02:32.922 ****** >skipping: [compute-0] => (item=/var/lib/redis) => {"changed": false, "item": "/var/lib/redis", "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item=/var/log/containers/redis) => {"changed": false, "item": "/var/log/containers/redis", "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item=/var/run/redis) => {"changed": false, "item": "/var/run/redis", "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item=/var/lib/redis) => {"changed": false, "item": "/var/lib/redis", "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item=/var/log/containers/redis) => {"changed": false, "item": "/var/log/containers/redis", "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item=/var/run/redis) => {"changed": false, "item": "/var/run/redis", "skip_reason": "Conditional result was False"} >ok: [controller-0] => (item=/var/lib/redis) => {"changed": false, "gid": 988, "group": "redis", "item": "/var/lib/redis", "mode": "0750", "owner": "redis", "path": "/var/lib/redis", "secontext": "system_u:object_r:redis_var_lib_t:s0", "size": 6, "state": "directory", "uid": 992} >changed: [controller-0] => (item=/var/log/containers/redis) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/redis", "mode": "0755", "owner": "root", "path": "/var/log/containers/redis", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >ok: [controller-0] => (item=/var/run/redis) => {"changed": false, "gid": 988, "group": "redis", "item": "/var/run/redis", "mode": "0755", "owner": "redis", "path": "/var/run/redis", "secontext": "system_u:object_r:redis_var_run_t:s0", "size": 40, "state": "directory", "uid": 992} > >TASK [redis logs readme] ******************************************************* >Friday 21 September 2018 08:19:11 -0400 (0:00:00.611) 0:02:33.533 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >changed: [controller-0] => 
{"changed": true, "checksum": "42d03af8abf93e87fdb3fc69702638fc81d943fb", "dest": "/var/log/redis/readme.txt", "gid": 0, "group": "root", "md5sum": "26fc3dbfb40d3414a608e987cc577748", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:redis_log_t:s0", "size": 78, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532351.09-87369388679837/source", "state": "file", "uid": 0} > >TASK [create /var/lib/sahara] ************************************************** >Friday 21 September 2018 08:19:11 -0400 (0:00:00.548) 0:02:34.081 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/sahara", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} > >TASK [create persistent sahara logs directory] ********************************* >Friday 21 September 2018 08:19:11 -0400 (0:00:00.223) 0:02:34.305 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/sahara", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} > >TASK [sahara logs readme] ****************************************************** >Friday 21 September 2018 08:19:12 -0400 (0:00:00.222) 0:02:34.528 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >fatal: [controller-0]: FAILED! 
=> {"changed": false, "checksum": "b0212a1177fa4a88502d17a1cbc31198040cf047", "msg": "Destination directory /var/log/sahara does not exist"} >...ignoring > >TASK [create persistent directories] ******************************************* >Friday 21 September 2018 08:19:12 -0400 (0:00:00.502) 0:02:35.031 ****** >skipping: [compute-0] => (item=/srv/node) => {"changed": false, "item": "/srv/node", "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item=/var/log/swift) => {"changed": false, "item": "/var/log/swift", "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item=/srv/node) => {"changed": false, "item": "/srv/node", "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item=/var/log/swift) => {"changed": false, "item": "/var/log/swift", "skip_reason": "Conditional result was False"} >changed: [controller-0] => (item=/srv/node) => {"changed": true, "gid": 0, "group": "root", "item": "/srv/node", "mode": "0755", "owner": "root", "path": "/srv/node", "secontext": "unconfined_u:object_r:var_t:s0", "size": 6, "state": "directory", "uid": 0} >changed: [controller-0] => (item=/var/log/swift) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/swift", "mode": "0755", "owner": "root", "path": "/var/log/swift", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} > >TASK [Create swift logging symlink] ******************************************** >Friday 21 September 2018 08:19:12 -0400 (0:00:00.390) 0:02:35.421 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >changed: [controller-0] => {"changed": true, "dest": "/var/log/containers/swift", "gid": 0, "group": "root", "mode": "0777", "owner": "root", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 14, "src": "/var/log/swift", "state": "link", "uid": 0} > >TASK [Check if rsyslog exists] ************************************************* >Friday 21 September 2018 08:19:13 -0400 (0:00:00.223) 0:02:35.645 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >ok: [controller-0] => {"changed": false, "stat": {"atime": 1537531965.8149447, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 0, "charset": "binary", "ctime": 1537376267.7719924, "dev": 64514, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 6292389, "isblk": false, "ischr": false, "isdir": true, "isfifo": false, "isgid": false, "islnk": false, "isreg": false, "issock": false, "isuid": false, "mimetype": "inode/directory", "mode": "0755", "mtime": 1537373083.395, "nlink": 2, "path": "/etc/rsyslog.d", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 50, "uid": 0, "version": "999240648", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": true, "xoth": true, "xusr": true}} > >TASK [Forward logging to swift.log file] *************************************** >Friday 21 September 2018 08:19:13 -0400 (0:00:00.245) 0:02:35.890 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >changed: [controller-0] => {"changed": true, "checksum": 
"828097d22e649626706b267b5a61f05e49999586", "dest": "/etc/rsyslog.d/openstack-swift.conf", "gid": 0, "group": "root", "md5sum": "2118142de3156b2432c5c12816a4967c", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:syslog_conf_t:s0", "size": 138, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532353.45-106424827318671/source", "state": "file", "uid": 0} > >TASK [Restart rsyslogd service after logging conf change] ********************** >Friday 21 September 2018 08:19:13 -0400 (0:00:00.556) 0:02:36.447 ****** >[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using >`result|changed` instead use `result is changed`. This feature will be removed >in version 2.9. Deprecation warnings can be disabled by setting >deprecation_warnings=False in ansible.cfg. >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >changed: [controller-0] => {"changed": true, "name": "rsyslog", "state": "started", "status": {"ActiveEnterTimestamp": "Fri 2018-09-21 08:12:45 EDT", "ActiveEnterTimestampMonotonic": "5763910", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "network.target system.slice network-online.target basic.target", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Fri 2018-09-21 08:12:45 EDT", "AssertTimestampMonotonic": "5710845", "Before": "shutdown.target multi-user.target", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Fri 2018-09-21 08:12:45 EDT", "ConditionTimestampMonotonic": "5710841", "Conflicts": "shutdown.target", "ControlGroup": "/system.slice/rsyslog.service", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "System Logging Service", "DevicePolicy": "auto", "Documentation": "man:rsyslogd(8) http://www.rsyslog.com/doc/", "EnvironmentFile": "/etc/sysconfig/rsyslog (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "1721", "ExecMainStartTimestamp": "Fri 2018-09-21 08:12:45 EDT", "ExecMainStartTimestampMonotonic": "5712777", "ExecMainStatus": "0", "ExecStart": "{ path=/usr/sbin/rsyslogd ; argv[]=/usr/sbin/rsyslogd -n $SYSLOGD_OPTIONS ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/rsyslog.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "rsyslog.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Fri 2018-09-21 08:12:45 EDT", "InactiveExitTimestampMonotonic": "5712841", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", 
"LimitNOFILE": "4096", "LimitNPROC": "127798", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "127798", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "1721", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "rsyslog.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "main", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "Requires": "basic.target", "Restart": "on-failure", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "null", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "1min 30s", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0066", "UnitFilePreset": "enabled", "UnitFileState": "enabled", "WantedBy": "multi-user.target", "Wants": "network.target system.slice network-online.target", "WatchdogTimestamp": "Fri 2018-09-21 08:12:45 EDT", "WatchdogTimestampMonotonic": "5763882", "WatchdogUSec": "0"}} > >TASK [create persistent directories] ******************************************* >Friday 21 September 2018 08:19:14 -0400 (0:00:00.305) 0:02:36.752 ****** >skipping: [compute-0] => (item=/srv/node) => {"changed": false, "item": "/srv/node", "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item=/var/log/swift) => {"changed": false, "item": "/var/log/swift", "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item=/var/log/containers) => {"changed": false, "item": "/var/log/containers", "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item=/srv/node) => {"changed": false, "item": "/srv/node", "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item=/var/log/swift) => {"changed": false, "item": "/var/log/swift", "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item=/var/log/containers) => {"changed": false, "item": "/var/log/containers", "skip_reason": "Conditional result was False"} >ok: [controller-0] => (item=/srv/node) => {"changed": false, "gid": 0, "group": "root", "item": "/srv/node", "mode": "0755", "owner": "root", "path": "/srv/node", "secontext": "unconfined_u:object_r:var_t:s0", "size": 6, "state": "directory", "uid": 0} >ok: [controller-0] => (item=/var/log/swift) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/swift", "mode": "0755", "owner": "root", "path": "/var/log/swift", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, 
"state": "directory", "uid": 0} >ok: [controller-0] => (item=/var/log/containers) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers", "mode": "0755", "owner": "root", "path": "/var/log/containers", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 244, "state": "directory", "uid": 0} > >TASK [Set swift_use_local_disks fact] ****************************************** >Friday 21 September 2018 08:19:14 -0400 (0:00:00.706) 0:02:37.458 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >ok: [controller-0] => {"ansible_facts": {"swift_use_local_disks": true}, "changed": false} > >TASK [Create Swift d1 directory if needed] ************************************* >Friday 21 September 2018 08:19:15 -0400 (0:00:00.129) 0:02:37.588 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/srv/node/d1", "secontext": "unconfined_u:object_r:var_t:s0", "size": 6, "state": "directory", "uid": 0} > >TASK [swift logs readme] ******************************************************* >Friday 21 September 2018 08:19:15 -0400 (0:00:00.290) 0:02:37.878 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >changed: [controller-0] => {"changed": true, "checksum": "42510a6de124722d6efbc2b1bb038bfe97e5b6d3", "dest": "/var/log/swift/readme.txt", "gid": 0, "group": "root", "md5sum": "23163287d564762945ee1738f049dc10", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_log_t:s0", "size": 116, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532355.49-228333104164735/source", "state": "file", "uid": 0} > >TASK [Set fact for SwiftRawDisks] ********************************************** >Friday 21 September 2018 08:19:15 -0400 (0:00:00.605) 0:02:38.484 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >ok: [controller-0] => {"ansible_facts": {"swift_raw_disks": {}}, "changed": false} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Format SwiftRawDisks] **************************************************** >Friday 21 September 2018 08:19:16 -0400 (0:00:00.101) 0:02:38.585 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Mount devices defined in SwiftRawDisks] ********************************** >Friday 21 September 2018 08:19:16 -0400 (0:00:00.102) 0:02:38.688 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [create persistent logs directory] **************************************** >Friday 21 September 2018 08:19:16 -0400 (0:00:00.094) 0:02:38.782 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >changed: [compute-0] => {"changed": 
true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/ceilometer", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} > >TASK [ceilometer logs readme] ************************************************** >Friday 21 September 2018 08:19:16 -0400 (0:00:00.262) 0:02:39.045 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >fatal: [compute-0]: FAILED! => {"changed": false, "checksum": "ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3", "msg": "Destination directory /var/log/ceilometer does not exist"} > >...ignoring > >TASK [create persistent logs directory] **************************************** >Friday 21 September 2018 08:19:17 -0400 (0:00:00.516) 0:02:39.561 ****** >skipping: [controller-0] => (item=/var/log/containers/neutron) => {"changed": false, "item": "/var/log/containers/neutron", "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item=/var/log/containers/neutron) => {"changed": false, "item": "/var/log/containers/neutron", "skip_reason": "Conditional result was False"} >changed: [compute-0] => (item=/var/log/containers/neutron) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/neutron", "mode": "0755", "owner": "root", "path": "/var/log/containers/neutron", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} > >TASK [neutron logs readme] ***************************************************** >Friday 21 September 2018 08:19:17 -0400 (0:00:00.282) 0:02:39.843 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >fatal: [compute-0]: FAILED! 
=> {"changed": false, "checksum": "f5a95f434a4aad25a9a81a045dec39159a6e8864", "msg": "Destination directory /var/log/neutron does not exist"} >...ignoring > >TASK [Copy in cleanup script] ************************************************** >Friday 21 September 2018 08:19:17 -0400 (0:00:00.513) 0:02:40.357 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >changed: [compute-0] => {"changed": true, "checksum": "659dc874a58142f127a275d34c6d90d27b3a4150", "dest": "/usr/libexec/neutron-cleanup", "gid": 0, "group": "root", "md5sum": "e5ee7754f01168fb9053e4dd66eff58c", "mode": "0755", "owner": "root", "secontext": "system_u:object_r:bin_t:s0", "size": 675, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532357.94-212804000866499/source", "state": "file", "uid": 0} > >TASK [Copy in cleanup service] ************************************************* >Friday 21 September 2018 08:19:18 -0400 (0:00:00.577) 0:02:40.934 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >changed: [compute-0] => {"changed": true, "checksum": "1950d05f025c3db49014a49372fce15fa9014693", "dest": "/usr/lib/systemd/system/neutron-cleanup.service", "gid": 0, "group": "root", "md5sum": "0dd683a7d38da6dfb537927032db6f22", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:neutron_unit_file_t:s0", "size": 231, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532358.53-230814391284506/source", "state": "file", "uid": 0} > >TASK [Enabling the cleanup service] ******************************************** >Friday 21 September 2018 08:19:19 -0400 (0:00:00.614) 0:02:41.549 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >changed: [compute-0] => {"changed": true, "enabled": true, "name": "neutron-cleanup", "status": {"ActiveEnterTimestampMonotonic": "0", "ActiveExitTimestampMonotonic": "0", "ActiveState": "inactive", "After": "systemd-journald.socket openvswitch.service network.target basic.target system.slice", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "no", "AssertTimestampMonotonic": "0", "Before": "docker.service shutdown.target", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "no", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "no", "ConditionTimestampMonotonic": "0", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Neutron cleanup on startup", "DevicePolicy": "auto", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "0", "ExecMainStartTimestampMonotonic": "0", "ExecMainStatus": "0", "ExecStart": "{ path=/usr/libexec/neutron-cleanup ; argv[]=/usr/libexec/neutron-cleanup ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": 
"/usr/lib/systemd/system/neutron-cleanup.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "neutron-cleanup.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestampMonotonic": "0", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "4096", "LimitNPROC": "22973", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "22973", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "0", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "neutron-cleanup.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "yes", "RemainAfterExit": "no", "Requires": "basic.target", "Restart": "no", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "dead", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "0", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "oneshot", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "disabled", "Wants": "system.slice", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0"}} > >TASK [set_fact] **************************************************************** >Friday 21 September 2018 08:19:19 -0400 (0:00:00.362) 0:02:41.911 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >ok: [compute-0] => {"ansible_facts": {"container_registry_additional_sockets": ["/var/lib/openstack/docker.sock"], "container_registry_debug": true, "container_registry_deployment_user": "", "container_registry_docker_options": "--log-driver=journald --signature-verification=false --iptables=false --live-restore", "container_registry_insecure_registries": ["192.168.24.1:8787"], "container_registry_mirror": "", "container_registry_network_options": "--bip=172.31.0.1/24"}, "changed": false} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [include_role] ************************************************************ >Friday 21 September 2018 08:19:19 -0400 
(0:00:00.109) 0:02:42.021 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [container-registry : enable net.ipv4.ip_forward] ************************* >Friday 21 September 2018 08:19:19 -0400 (0:00:00.130) 0:02:42.151 ****** >changed: [compute-0] => {"changed": true} > >TASK [container-registry : ensure docker is installed] ************************* >Friday 21 September 2018 08:19:19 -0400 (0:00:00.247) 0:02:42.399 ****** >ok: [compute-0] => {"changed": false, "msg": "", "rc": 0, "results": ["2:docker-1.13.1-74.git6e3bb8e.el7.x86_64 providing docker is already installed"]} > >TASK [container-registry : manage /etc/systemd/system/docker.service.d] ******** >Friday 21 September 2018 08:19:20 -0400 (0:00:00.549) 0:02:42.948 ****** >changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/etc/systemd/system/docker.service.d", "secontext": "unconfined_u:object_r:systemd_unit_file_t:s0", "size": 6, "state": "directory", "uid": 0} > >TASK [container-registry : unset mountflags] *********************************** >Friday 21 September 2018 08:19:20 -0400 (0:00:00.233) 0:02:43.182 ****** >changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0644", "msg": "section and option added", "owner": "root", "path": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "secontext": "unconfined_u:object_r:systemd_unit_file_t:s0", "size": 25, "state": "file", "uid": 0} > >TASK [container-registry : configure OPTIONS in /etc/sysconfig/docker] ********* >Friday 21 September 2018 08:19:20 -0400 (0:00:00.254) 0:02:43.436 ****** >changed: [compute-0] => {"backup": "", "changed": true, "msg": "line replaced"} > >TASK [container-registry : configure INSECURE_REGISTRY in /etc/sysconfig/docker] *** >Friday 21 September 2018 08:19:21 -0400 (0:00:00.257) 0:02:43.694 ****** >changed: [compute-0] => {"backup": "", "changed": true, "msg": "line added"} > >TASK [container-registry : Create additional socket directories] *************** >Friday 21 September 2018 08:19:21 -0400 (0:00:00.262) 0:02:43.956 ****** >changed: [compute-0] => (item=/var/lib/openstack/docker.sock) => {"changed": true, "gid": 0, "group": "root", "item": "/var/lib/openstack/docker.sock", "mode": "0755", "owner": "root", "path": "/var/lib/openstack", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} > >TASK [container-registry : manage /etc/docker/daemon.json] ********************* >Friday 21 September 2018 08:19:21 -0400 (0:00:00.267) 0:02:44.224 ****** >changed: [compute-0] => {"changed": true, "checksum": "d1771eedce1344ec4d3895016dc72907c117e86b", "dest": "/etc/docker/daemon.json", "gid": 0, "group": "root", "md5sum": "ae138a173e2cfb9312379cf88457c29e", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:container_config_t:s0", "size": 20, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532361.78-134220020593857/source", "state": "file", "uid": 0} > >TASK [container-registry : configure DOCKER_STORAGE_OPTIONS in /etc/sysconfig/docker-storage] *** >Friday 21 September 2018 08:19:22 -0400 (0:00:00.591) 0:02:44.816 ****** >changed: [compute-0] => {"backup": "", "changed": true, "msg": "line replaced"} > >TASK [container-registry : configure DOCKER_NETWORK_OPTIONS in /etc/sysconfig/docker-network] *** >Friday 21 September 2018 08:19:22 -0400 
(0:00:00.246) 0:02:45.062 ****** >changed: [compute-0] => {"backup": "", "changed": true, "msg": "line replaced"} > >TASK [container-registry : ensure docker group exists] ************************* >Friday 21 September 2018 08:19:22 -0400 (0:00:00.243) 0:02:45.305 ****** >changed: [compute-0] => {"changed": true, "gid": 1003, "name": "docker", "state": "present", "system": false} > >TASK [container-registry : add deployment user to docker group] **************** >Friday 21 September 2018 08:19:23 -0400 (0:00:00.230) 0:02:45.536 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >RUNNING HANDLER [container-registry : restart docker] ************************** >Friday 21 September 2018 08:19:23 -0400 (0:00:00.024) 0:02:45.561 ****** >changed: [compute-0] => {"changed": true, "cmd": ["/bin/true"], "delta": "0:00:00.002218", "end": "2018-09-21 08:19:23.355580", "rc": 0, "start": "2018-09-21 08:19:23.353362", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} > >RUNNING HANDLER [container-registry : Docker | reload systemd] ***************** >Friday 21 September 2018 08:19:23 -0400 (0:00:00.311) 0:02:45.872 ****** >ok: [compute-0] => {"changed": false, "name": null, "status": {}} > >RUNNING HANDLER [container-registry : Docker | reload docker] ****************** >Friday 21 September 2018 08:19:23 -0400 (0:00:00.336) 0:02:46.209 ****** >changed: [compute-0] => {"changed": true, "name": "docker", "state": "started", "status": {"ActiveEnterTimestampMonotonic": "0", "ActiveExitTimestampMonotonic": "0", "ActiveState": "inactive", "After": "network.target system.slice rhel-push-plugin.socket registries.service neutron-cleanup.service docker-storage-setup.service basic.target systemd-journald.socket", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "no", "AssertTimestampMonotonic": "0", "Before": "paunch-container-shutdown.service shutdown.target", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "no", "ConditionTimestampMonotonic": "0", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Docker Application Container Engine", "DevicePolicy": "auto", "Documentation": "http://docs.docker.com", "DropInPaths": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "Environment": "GOTRACEBACK=crash DOCKER_HTTP_HOST_COMPAT=1 PATH=/usr/libexec/docker:/usr/bin:/usr/sbin", "EnvironmentFile": "/etc/sysconfig/docker-network (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "0", "ExecMainStartTimestampMonotonic": "0", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/bin/dockerd-current ; argv[]=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current 
--seccomp-profile=/etc/docker/seccomp.json $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/docker.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "docker.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestampMonotonic": "0", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "1048576", "LimitNPROC": "1048576", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "22973", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "0", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "docker.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "main", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RequiredBy": "docker-cleanup.service", "Requires": "registries.service docker-cleanup.timer basic.target rhel-push-plugin.socket", "Restart": "on-abnormal", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "dead", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "0", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "disabled", "Wants": "docker-storage-setup.service system.slice", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0"}} > >RUNNING HANDLER [container-registry : Docker | pause while Docker restarts] **** >Friday 21 September 2018 08:19:25 -0400 (0:00:01.598) 0:02:47.807 ****** >Pausing for 10 seconds >(ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort) >[container-registry : Docker | pause while Docker restarts] >Waiting for docker restart: >ok: [compute-0] => {"changed": false, "delta": 10, "echo": true, "rc": 0, "start": "2018-09-21 08:19:25.378484", "stderr": "", "stdout": "Paused for 10.0 seconds", "stop": "2018-09-21 
08:19:35.378633", "user_input": ""} > >RUNNING HANDLER [container-registry : Docker | wait for docker] **************** >Friday 21 September 2018 08:19:35 -0400 (0:00:10.061) 0:02:57.869 ****** >changed: [compute-0] => {"attempts": 1, "changed": true, "cmd": ["/usr/bin/docker", "images"], "delta": "0:00:00.042096", "end": "2018-09-21 08:19:35.658721", "rc": 0, "start": "2018-09-21 08:19:35.616625", "stderr": "", "stderr_lines": [], "stdout": "REPOSITORY TAG IMAGE ID CREATED SIZE", "stdout_lines": ["REPOSITORY TAG IMAGE ID CREATED SIZE"]} > >TASK [container-registry : enable and start docker] **************************** >Friday 21 September 2018 08:19:35 -0400 (0:00:00.307) 0:02:58.177 ****** >changed: [compute-0] => {"changed": true, "enabled": true, "name": "docker", "state": "started", "status": {"ActiveEnterTimestamp": "Fri 2018-09-21 08:19:25 EDT", "ActiveEnterTimestampMonotonic": "393321876", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "network.target system.slice rhel-push-plugin.socket registries.service neutron-cleanup.service docker-storage-setup.service basic.target systemd-journald.socket", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Fri 2018-09-21 08:19:24 EDT", "AssertTimestampMonotonic": "392142763", "Before": "paunch-container-shutdown.service shutdown.target", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Fri 2018-09-21 08:19:24 EDT", "ConditionTimestampMonotonic": "392142762", "Conflicts": "shutdown.target", "ControlGroup": "/system.slice/docker.service", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Docker Application Container Engine", "DevicePolicy": "auto", "Documentation": "http://docs.docker.com", "DropInPaths": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "Environment": "GOTRACEBACK=crash DOCKER_HTTP_HOST_COMPAT=1 PATH=/usr/libexec/docker:/usr/bin:/usr/sbin", "EnvironmentFile": "/etc/sysconfig/docker-network (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "14488", "ExecMainStartTimestamp": "Fri 2018-09-21 08:19:24 EDT", "ExecMainStartTimestampMonotonic": "392144769", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/bin/dockerd-current ; argv[]=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES ; ignore_errors=no ; start_time=[Fri 2018-09-21 08:19:24 EDT] ; stop_time=[n/a] ; pid=14488 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/docker.service", 
"GuessMainPID": "yes", "IOScheduling": "0", "Id": "docker.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Fri 2018-09-21 08:19:24 EDT", "InactiveExitTimestampMonotonic": "392144809", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "1048576", "LimitNPROC": "1048576", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "22973", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "14488", "MemoryAccounting": "no", "MemoryCurrent": "66969600", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "docker.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "main", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RequiredBy": "docker-cleanup.service", "Requires": "registries.service docker-cleanup.timer basic.target rhel-push-plugin.socket", "Restart": "on-abnormal", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "20", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "0", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "disabled", "Wants": "docker-storage-setup.service system.slice", "WatchdogTimestamp": "Fri 2018-09-21 08:19:25 EDT", "WatchdogTimestampMonotonic": "393321773", "WatchdogUSec": "0"}} > >TASK [stat /lib/systemd/system/iscsid.socket] ********************************** >Friday 21 September 2018 08:19:36 -0400 (0:00:00.339) 0:02:58.516 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >ok: [compute-0] => {"changed": false, "stat": {"atime": 1537532359.3542104, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "us-ascii", "checksum": "424de87cd6ae66547b285288742255731a46ab83", "ctime": 1537376286.9259925, "dev": 64514, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 499913, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": 
false, "isreg": true, "issock": false, "isuid": false, "mimetype": "text/plain", "mode": "0644", "mtime": 1513292517.0, "nlink": 1, "path": "/lib/systemd/system/iscsid.socket", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 175, "uid": 0, "version": "233990009", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}} > >TASK [Stop and disable iscsid.socket service] ********************************** >Friday 21 September 2018 08:19:36 -0400 (0:00:00.264) 0:02:58.780 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >changed: [compute-0] => {"changed": true, "enabled": false, "name": "iscsid.socket", "state": "stopped", "status": {"Accept": "no", "ActiveEnterTimestamp": "Fri 2018-09-21 08:12:55 EDT", "ActiveEnterTimestampMonotonic": "3740636", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "-.slice sysinit.target", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Fri 2018-09-21 08:12:55 EDT", "AssertTimestampMonotonic": "3740368", "Backlog": "128", "Before": "shutdown.target iscsid.service sockets.target", "BindIPv6Only": "default", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "Broadcast": "no", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Fri 2018-09-21 08:12:55 EDT", "ConditionTimestampMonotonic": "3740368", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "DeferAcceptUSec": "0", "Delegate": "no", "Description": "Open-iSCSI iscsid Socket", "DevicePolicy": "auto", "DirectoryMode": "0755", "Documentation": "man:iscsid(8) man:iscsiadm(8)", "FragmentPath": "/usr/lib/systemd/system/iscsid.socket", "FreeBind": "no", "IOScheduling": "0", "IPTOS": "-1", "IPTTL": "-1", "Id": "iscsid.socket", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Fri 2018-09-21 08:12:55 EDT", "InactiveExitTimestampMonotonic": "3740636", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KeepAlive": "no", "KeepAliveIntervalUSec": "0", "KeepAliveProbes": "0", "KeepAliveTimeUSec": "0", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "4096", "LimitNPROC": "22973", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "22973", "LimitSTACK": "18446744073709551615", "ListenStream": "@ISCSIADM_ABSTRACT_NAMESPACE", "LoadState": "loaded", "Mark": "-1", "MaxConnections": "64", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "NAccepted": "0", "NConnections": "0", "Names": "iscsid.socket", "NeedDaemonReload": "no", "Nice": "0", "NoDelay": "no", 
"NoNewPrivileges": "no", "NonBlocking": "no", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PassCredentials": "no", "PassSecurity": "no", "PipeSize": "0", "Priority": "-1", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "ReceiveBuffer": "0", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemoveOnStop": "no", "Requires": "sysinit.target", "Result": "success", "ReusePort": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendBuffer": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "SocketMode": "0666", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StopWhenUnneeded": "no", "SubState": "listening", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Transparent": "no", "Triggers": "iscsid.service", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "enabled", "WantedBy": "sockets.target", "Wants": "-.slice"}} > >TASK [create persistent logs directory] **************************************** >Friday 21 September 2018 08:19:36 -0400 (0:00:00.350) 0:02:59.131 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/nova", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} > >TASK [nova logs readme] ******************************************************** >Friday 21 September 2018 08:19:36 -0400 (0:00:00.248) 0:02:59.379 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >fatal: [compute-0]: FAILED! 
=> {"changed": false, "checksum": "c2216cc4edf5d3ce90f10748c3243db4e1842a85", "msg": "Destination directory /var/log/nova does not exist"} >...ignoring > >TASK [Mount Nova NFS Share] **************************************************** >Friday 21 September 2018 08:19:37 -0400 (0:00:00.517) 0:02:59.896 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [create persistent directories] ******************************************* >Friday 21 September 2018 08:19:37 -0400 (0:00:00.095) 0:02:59.992 ****** >skipping: [controller-0] => (item=/var/lib/nova) => {"changed": false, "item": "/var/lib/nova", "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item=/var/lib/nova/instances) => {"changed": false, "item": "/var/lib/nova/instances", "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item=/var/lib/libvirt) => {"changed": false, "item": "/var/lib/libvirt", "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item=/var/lib/nova) => {"changed": false, "item": "/var/lib/nova", "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item=/var/lib/nova/instances) => {"changed": false, "item": "/var/lib/nova/instances", "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item=/var/lib/libvirt) => {"changed": false, "item": "/var/lib/libvirt", "skip_reason": "Conditional result was False"} >changed: [compute-0] => (item=/var/lib/nova) => {"changed": true, "gid": 0, "group": "root", "item": "/var/lib/nova", "mode": "0755", "owner": "root", "path": "/var/lib/nova", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >changed: [compute-0] => (item=/var/lib/nova/instances) => {"changed": true, "gid": 0, "group": "root", "item": "/var/lib/nova/instances", "mode": "0755", "owner": "root", "path": "/var/lib/nova/instances", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >ok: [compute-0] => (item=/var/lib/libvirt) => {"changed": false, "gid": 0, "group": "root", "item": "/var/lib/libvirt", "mode": "0755", "owner": "root", "path": "/var/lib/libvirt", "secontext": "system_u:object_r:virt_var_lib_t:s0", "size": 104, "state": "directory", "uid": 0} > >TASK [ensure ceph configurations exist] **************************************** >Friday 21 September 2018 08:19:38 -0400 (0:00:00.594) 0:03:00.587 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/etc/ceph", "secontext": "unconfined_u:object_r:etc_t:s0", "size": 6, "state": "directory", "uid": 0} > >TASK [is Instance HA enabled] ************************************************** >Friday 21 September 2018 08:19:38 -0400 (0:00:00.257) 0:03:00.844 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >ok: [compute-0] => {"ansible_facts": {"instance_ha_enabled": false}, "changed": false} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [prepare Instance HA script directory] 
************************************ >Friday 21 September 2018 08:19:38 -0400 (0:00:00.103) 0:03:00.948 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [install Instance HA script that runs nova-compute] *********************** >Friday 21 September 2018 08:19:38 -0400 (0:00:00.111) 0:03:01.059 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Get list of instance HA compute nodes] *********************************** >Friday 21 September 2018 08:19:38 -0400 (0:00:00.102) 0:03:01.162 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [If instance HA is enabled on the node activate the evacuation completed check] *** >Friday 21 September 2018 08:19:38 -0400 (0:00:00.111) 0:03:01.274 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [create libvirt persistent data directories] ****************************** >Friday 21 September 2018 08:19:38 -0400 (0:00:00.104) 0:03:01.378 ****** >skipping: [controller-0] => (item=/etc/libvirt) => {"changed": false, "item": "/etc/libvirt", "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item=/etc/libvirt/secrets) => {"changed": false, "item": "/etc/libvirt/secrets", "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item=/etc/libvirt/qemu) => {"changed": false, "item": "/etc/libvirt/qemu", "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item=/var/lib/libvirt) => {"changed": false, "item": "/var/lib/libvirt", "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item=/var/log/containers/libvirt) => {"changed": false, "item": "/var/log/containers/libvirt", "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item=/etc/libvirt) => {"changed": false, "item": "/etc/libvirt", "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item=/etc/libvirt/secrets) => {"changed": false, "item": "/etc/libvirt/secrets", "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item=/etc/libvirt/qemu) => {"changed": false, "item": "/etc/libvirt/qemu", "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item=/var/lib/libvirt) => {"changed": false, "item": "/var/lib/libvirt", "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item=/var/log/containers/libvirt) => {"changed": false, "item": "/var/log/containers/libvirt", "skip_reason": "Conditional result was False"} >ok: [compute-0] => (item=/etc/libvirt) => {"changed": false, "gid": 0, "group": "root", "item": "/etc/libvirt", "mode": "0700", "owner": "root", "path": "/etc/libvirt", 
"secontext": "system_u:object_r:virt_etc_t:s0", "size": 215, "state": "directory", "uid": 0} >ok: [compute-0] => (item=/etc/libvirt/secrets) => {"changed": false, "gid": 0, "group": "root", "item": "/etc/libvirt/secrets", "mode": "0700", "owner": "root", "path": "/etc/libvirt/secrets", "secontext": "system_u:object_r:virt_etc_rw_t:s0", "size": 6, "state": "directory", "uid": 0} >ok: [compute-0] => (item=/etc/libvirt/qemu) => {"changed": false, "gid": 0, "group": "root", "item": "/etc/libvirt/qemu", "mode": "0700", "owner": "root", "path": "/etc/libvirt/qemu", "secontext": "system_u:object_r:virt_etc_rw_t:s0", "size": 22, "state": "directory", "uid": 0} >ok: [compute-0] => (item=/var/lib/libvirt) => {"changed": false, "gid": 0, "group": "root", "item": "/var/lib/libvirt", "mode": "0755", "owner": "root", "path": "/var/lib/libvirt", "secontext": "system_u:object_r:virt_var_lib_t:s0", "size": 104, "state": "directory", "uid": 0} >changed: [compute-0] => (item=/var/log/containers/libvirt) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/libvirt", "mode": "0755", "owner": "root", "path": "/var/log/containers/libvirt", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} > >TASK [ensure qemu group is present on the host] ******************************** >Friday 21 September 2018 08:19:39 -0400 (0:00:01.068) 0:03:02.447 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >ok: [compute-0] => {"changed": false, "gid": 107, "name": "qemu", "state": "present", "system": false} > >TASK [ensure qemu user is present on the host] ********************************* >Friday 21 September 2018 08:19:40 -0400 (0:00:00.263) 0:03:02.710 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >ok: [compute-0] => {"append": false, "changed": false, "comment": "qemu user", "group": 107, "home": "/", "move_home": false, "name": "qemu", "shell": "/sbin/nologin", "state": "present", "uid": 107} > >TASK [create directory for vhost-user sockets with qemu ownership] ************* >Friday 21 September 2018 08:19:40 -0400 (0:00:00.521) 0:03:03.231 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >changed: [compute-0] => {"changed": true, "gid": 107, "group": "qemu", "mode": "0755", "owner": "qemu", "path": "/var/lib/vhost_sockets", "secontext": "system_u:object_r:virt_cache_t:s0", "size": 6, "state": "directory", "uid": 107} > >TASK [check if libvirt is installed] ******************************************* >Friday 21 September 2018 08:19:41 -0400 (0:00:00.336) 0:03:03.568 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > [WARNING]: Consider using the yum, dnf or zypper module rather than running >rpm. If you need to use command because yum, dnf or zypper is insufficient you >can add warn=False to this command task or set command_warnings=False in >ansible.cfg to get rid of this message. 
>changed: [compute-0] => {"changed": true, "cmd": ["/usr/bin/rpm", "-q", "libvirt-daemon"], "delta": "0:00:00.043879", "end": "2018-09-21 08:19:41.414895", "failed_when_result": false, "rc": 0, "start": "2018-09-21 08:19:41.371016", "stderr": "", "stderr_lines": [], "stdout": "libvirt-daemon-3.9.0-14.el7_5.7.x86_64", "stdout_lines": ["libvirt-daemon-3.9.0-14.el7_5.7.x86_64"]} > >TASK [make sure libvirt services are disabled] ********************************* >Friday 21 September 2018 08:19:41 -0400 (0:00:00.402) 0:03:03.971 ****** >skipping: [controller-0] => (item=libvirtd.service) => {"changed": false, "item": "libvirtd.service", "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item=virtlogd.socket) => {"changed": false, "item": "virtlogd.socket", "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item=libvirtd.service) => {"changed": false, "item": "libvirtd.service", "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item=virtlogd.socket) => {"changed": false, "item": "virtlogd.socket", "skip_reason": "Conditional result was False"} >changed: [compute-0] => (item=libvirtd.service) => {"changed": true, "enabled": false, "item": "libvirtd.service", "name": "libvirtd.service", "state": "stopped", "status": {"ActiveEnterTimestamp": "Fri 2018-09-21 08:12:57 EDT", "ActiveEnterTimestampMonotonic": "5580573", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "iscsid.service systemd-journald.socket virtlogd.service local-fs.target basic.target virtlogd.socket remote-fs.target virtlockd.service apparmor.service system.slice network.target virtlockd.socket dbus.service", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Fri 2018-09-21 08:12:57 EDT", "AssertTimestampMonotonic": "5325121", "Before": "libvirt-guests.service multi-user.target shutdown.target", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Fri 2018-09-21 08:12:57 EDT", "ConditionTimestampMonotonic": "5325121", "Conflicts": "shutdown.target", "ControlGroup": "/system.slice/libvirtd.service", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Virtualization daemon", "DevicePolicy": "auto", "Documentation": "man:libvirtd(8) https://libvirt.org", "EnvironmentFile": "/etc/sysconfig/libvirtd (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "1165", "ExecMainStartTimestamp": "Fri 2018-09-21 08:12:57 EDT", "ExecMainStartTimestampMonotonic": "5326306", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/sbin/libvirtd ; argv[]=/usr/sbin/libvirtd $LIBVIRTD_ARGS ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/libvirtd.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "libvirtd.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": 
"yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Fri 2018-09-21 08:12:57 EDT", "InactiveExitTimestampMonotonic": "5326351", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "8192", "LimitNPROC": "22973", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "22973", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "1165", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "libvirtd.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "main", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "Requires": "basic.target virtlockd.socket virtlogd.socket", "Restart": "on-failure", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "32768", "TimeoutStartUSec": "1min 30s", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "enabled", "UnitFileState": "enabled", "WantedBy": "multi-user.target libvirt-guests.service", "Wants": "system.slice", "WatchdogTimestamp": "Fri 2018-09-21 08:12:57 EDT", "WatchdogTimestampMonotonic": "5580520", "WatchdogUSec": "0"}} >changed: [compute-0] => (item=virtlogd.socket) => {"changed": true, "enabled": false, "item": "virtlogd.socket", "name": "virtlogd.socket", "state": "stopped", "status": {"Accept": "no", "ActiveEnterTimestamp": "Fri 2018-09-21 08:12:55 EDT", "ActiveEnterTimestampMonotonic": "3738240", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "-.mount sysinit.target -.slice", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Fri 2018-09-21 08:12:55 EDT", "AssertTimestampMonotonic": "3736912", "Backlog": "128", "Before": "libvirtd.service virtlogd.service shutdown.target sockets.target", "BindIPv6Only": "default", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "Broadcast": "no", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", 
"CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Fri 2018-09-21 08:12:55 EDT", "ConditionTimestampMonotonic": "3736909", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "DeferAcceptUSec": "0", "Delegate": "no", "Description": "Virtual machine log manager socket", "DevicePolicy": "auto", "DirectoryMode": "0755", "FragmentPath": "/usr/lib/systemd/system/virtlogd.socket", "FreeBind": "no", "IOScheduling": "0", "IPTOS": "-1", "IPTTL": "-1", "Id": "virtlogd.socket", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Fri 2018-09-21 08:12:55 EDT", "InactiveExitTimestampMonotonic": "3738240", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KeepAlive": "no", "KeepAliveIntervalUSec": "0", "KeepAliveProbes": "0", "KeepAliveTimeUSec": "0", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "4096", "LimitNPROC": "22973", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "22973", "LimitSTACK": "18446744073709551615", "ListenStream": "/var/run/libvirt/virtlogd-sock", "LoadState": "loaded", "Mark": "-1", "MaxConnections": "64", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "NAccepted": "0", "NConnections": "0", "Names": "virtlogd.socket", "NeedDaemonReload": "no", "Nice": "0", "NoDelay": "no", "NoNewPrivileges": "no", "NonBlocking": "no", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PassCredentials": "no", "PassSecurity": "no", "PipeSize": "0", "Priority": "-1", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "ReceiveBuffer": "0", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemoveOnStop": "no", "RequiredBy": "libvirtd.service virtlogd.service", "Requires": "-.mount sysinit.target", "RequiresMountsFor": "/var/run/libvirt/virtlogd-sock", "Result": "success", "ReusePort": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendBuffer": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "SocketMode": "0666", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StopWhenUnneeded": "no", "SubState": "listening", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Transparent": "no", "Triggers": "virtlogd.service", "UMask": "0022", "UnitFilePreset": "enabled", "UnitFileState": "disabled", "Wants": "-.slice"}} > >TASK [NTP settings] ************************************************************ >Friday 21 September 2018 08:19:42 -0400 (0:00:00.580) 0:03:04.552 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was 
False"} >ok: [compute-0] => {"ansible_facts": {"ntp_install_packages": false, "ntp_servers": ["10.35.255.6"]}, "changed": false} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Install ntpdate] ********************************************************* >Friday 21 September 2018 08:19:42 -0400 (0:00:00.115) 0:03:04.667 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Ensure system is NTP time synced] **************************************** >Friday 21 September 2018 08:19:42 -0400 (0:00:00.115) 0:03:04.783 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >changed: [compute-0] => {"changed": true, "cmd": ["ntpdate", "-u", "10.35.255.6"], "delta": "0:00:06.256202", "end": "2018-09-21 08:19:48.781114", "rc": 0, "start": "2018-09-21 08:19:42.524912", "stderr": "", "stderr_lines": [], "stdout": "21 Sep 08:19:48 ntpdate[14994]: adjust time server 10.35.255.6 offset -0.015278 sec", "stdout_lines": ["21 Sep 08:19:48 ntpdate[14994]: adjust time server 10.35.255.6 offset -0.015278 sec"]} > > >TASK [create persistent directories] ******************************************* >Friday 21 September 2018 08:19:48 -0400 (0:00:06.517) 0:03:11.300 ****** >skipping: [controller-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item=/var/lib/cinder) => {"changed": false, "item": "/var/lib/cinder", "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item=/var/lib/cinder) => {"changed": false, "item": "/var/lib/cinder", "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item=/var/lib/cinder) => {"changed": false, "item": "/var/lib/cinder", "skip_reason": "Conditional result was False"} > >TASK [cinder logs readme] ****************************************************** >Friday 21 September 2018 08:19:48 -0400 (0:00:00.137) 0:03:11.438 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [ensure ceph configurations exist] **************************************** >Friday 21 September 2018 08:19:49 -0400 (0:00:00.095) 0:03:11.533 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [cinder_enable_iscsi_backend fact] **************************************** >Friday 21 September 2018 08:19:49 -0400 (0:00:00.102) 0:03:11.636 ****** 
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [cinder create LVM volume group dd] *************************************** >Friday 21 September 2018 08:19:49 -0400 (0:00:00.097) 0:03:11.734 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [cinder create LVM volume group] ****************************************** >Friday 21 September 2018 08:19:49 -0400 (0:00:00.098) 0:03:11.832 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [set_fact] **************************************************************** >Friday 21 September 2018 08:19:49 -0400 (0:00:00.100) 0:03:11.933 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [include_role] ************************************************************ >Friday 21 September 2018 08:19:49 -0400 (0:00:00.110) 0:03:12.043 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [stat /lib/systemd/system/iscsid.socket] ********************************** >Friday 21 September 2018 08:19:49 -0400 (0:00:00.098) 0:03:12.142 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Stop and disable iscsid.socket service] ********************************** >Friday 21 September 2018 08:19:49 -0400 (0:00:00.098) 0:03:12.241 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [NTP settings] ************************************************************ >Friday 21 September 2018 08:19:49 -0400 (0:00:00.098) 0:03:12.340 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Install ntpdate] ********************************************************* >Friday 21 September 2018 08:19:49 -0400 (0:00:00.097) 0:03:12.438 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result 
was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Ensure system is NTP time synced] **************************************** >Friday 21 September 2018 08:19:50 -0400 (0:00:00.095) 0:03:12.533 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [set_fact] **************************************************************** >Friday 21 September 2018 08:19:50 -0400 (0:00:00.099) 0:03:12.633 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [include_role] ************************************************************ >Friday 21 September 2018 08:19:50 -0400 (0:00:00.101) 0:03:12.735 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [NTP settings] ************************************************************ >Friday 21 September 2018 08:19:50 -0400 (0:00:00.097) 0:03:12.832 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Install ntpdate] ********************************************************* >Friday 21 September 2018 08:19:50 -0400 (0:00:00.098) 0:03:12.931 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Ensure system is NTP time synced] **************************************** >Friday 21 September 2018 08:19:50 -0400 (0:00:00.113) 0:03:13.044 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [create persistent directories] ******************************************* >Friday 21 September 2018 08:19:50 -0400 (0:00:00.103) 0:03:13.148 ****** >skipping: [controller-0] => (item=/srv/node) => {"changed": false, "item": "/srv/node", "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item=/var/log/swift) => {"changed": false, "item": "/var/log/swift", "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item=/var/log/containers) => {"changed": false, "item": "/var/log/containers", "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item=/srv/node) => {"changed": false, "item": "/srv/node", "skip_reason": "Conditional result was False"} >skipping: [compute-0] => 
(item=/var/log/swift) => {"changed": false, "item": "/var/log/swift", "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item=/var/log/containers) => {"changed": false, "item": "/var/log/containers", "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item=/srv/node) => {"changed": false, "item": "/srv/node", "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item=/var/log/swift) => {"changed": false, "item": "/var/log/swift", "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item=/var/log/containers) => {"changed": false, "item": "/var/log/containers", "skip_reason": "Conditional result was False"} > >TASK [Set swift_use_local_disks fact] ****************************************** >Friday 21 September 2018 08:19:50 -0400 (0:00:00.129) 0:03:13.277 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Create Swift d1 directory if needed] ************************************* >Friday 21 September 2018 08:19:50 -0400 (0:00:00.108) 0:03:13.386 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Create swift logging symlink] ******************************************** >Friday 21 September 2018 08:19:51 -0400 (0:00:00.108) 0:03:13.494 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [swift logs readme] ******************************************************* >Friday 21 September 2018 08:19:51 -0400 (0:00:00.104) 0:03:13.598 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Check if rsyslog exists] ************************************************* >Friday 21 September 2018 08:19:51 -0400 (0:00:00.101) 0:03:13.699 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Forward logging to swift.log file] *************************************** >Friday 21 September 2018 08:19:51 -0400 (0:00:00.162) 0:03:13.862 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Restart rsyslogd service after logging conf change] ********************** >Friday 21 September 2018 08:19:51 -0400 (0:00:00.103) 0:03:13.965 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} 
>skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Set fact for SwiftRawDisks] ********************************************** >Friday 21 September 2018 08:19:51 -0400 (0:00:00.103) 0:03:14.069 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Format SwiftRawDisks] **************************************************** >Friday 21 September 2018 08:19:51 -0400 (0:00:00.102) 0:03:14.171 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Mount devices defined in SwiftRawDisks] ********************************** >Friday 21 September 2018 08:19:51 -0400 (0:00:00.104) 0:03:14.276 ****** >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [set_fact] **************************************************************** >Friday 21 September 2018 08:19:51 -0400 (0:00:00.100) 0:03:14.376 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >ok: [ceph-0] => {"ansible_facts": {"container_registry_additional_sockets": ["/var/lib/openstack/docker.sock"], "container_registry_debug": true, "container_registry_deployment_user": "", "container_registry_docker_options": "--log-driver=journald --signature-verification=false --iptables=false --live-restore", "container_registry_insecure_registries": ["192.168.24.1:8787"], "container_registry_mirror": "", "container_registry_network_options": "--bip=172.31.0.1/24"}, "changed": false} > >TASK [include_role] ************************************************************ >Friday 21 September 2018 08:19:52 -0400 (0:00:00.134) 0:03:14.511 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [container-registry : enable net.ipv4.ip_forward] ************************* >Friday 21 September 2018 08:19:52 -0400 (0:00:00.129) 0:03:14.641 ****** >changed: [ceph-0] => {"changed": true} > >TASK [container-registry : ensure docker is installed] ************************* >Friday 21 September 2018 08:19:52 -0400 (0:00:00.237) 0:03:14.878 ****** >ok: [ceph-0] => {"changed": false, "msg": "", "rc": 0, "results": ["2:docker-1.13.1-74.git6e3bb8e.el7.x86_64 providing docker is already installed"]} > >TASK [container-registry : manage /etc/systemd/system/docker.service.d] ******** >Friday 21 September 2018 08:19:52 -0400 (0:00:00.513) 0:03:15.392 ****** >changed: [ceph-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/etc/systemd/system/docker.service.d", "secontext": "unconfined_u:object_r:systemd_unit_file_t:s0", "size": 6, "state": "directory", "uid": 0} > >TASK [container-registry : unset mountflags] *********************************** >Friday 21 September 2018 08:19:53 -0400 (0:00:00.222) 0:03:15.615 ****** 
>changed: [ceph-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0644", "msg": "section and option added", "owner": "root", "path": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "secontext": "unconfined_u:object_r:systemd_unit_file_t:s0", "size": 25, "state": "file", "uid": 0} > >TASK [container-registry : configure OPTIONS in /etc/sysconfig/docker] ********* >Friday 21 September 2018 08:19:53 -0400 (0:00:00.271) 0:03:15.886 ****** >changed: [ceph-0] => {"backup": "", "changed": true, "msg": "line replaced"} > >TASK [container-registry : configure INSECURE_REGISTRY in /etc/sysconfig/docker] *** >Friday 21 September 2018 08:19:53 -0400 (0:00:00.287) 0:03:16.174 ****** >changed: [ceph-0] => {"backup": "", "changed": true, "msg": "line added"} > >TASK [container-registry : Create additional socket directories] *************** >Friday 21 September 2018 08:19:53 -0400 (0:00:00.280) 0:03:16.454 ****** >changed: [ceph-0] => (item=/var/lib/openstack/docker.sock) => {"changed": true, "gid": 0, "group": "root", "item": "/var/lib/openstack/docker.sock", "mode": "0755", "owner": "root", "path": "/var/lib/openstack", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} > >TASK [container-registry : manage /etc/docker/daemon.json] ********************* >Friday 21 September 2018 08:19:54 -0400 (0:00:00.283) 0:03:16.738 ****** >changed: [ceph-0] => {"changed": true, "checksum": "d1771eedce1344ec4d3895016dc72907c117e86b", "dest": "/etc/docker/daemon.json", "gid": 0, "group": "root", "md5sum": "ae138a173e2cfb9312379cf88457c29e", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:container_config_t:s0", "size": 20, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532394.3-82297772415282/source", "state": "file", "uid": 0} > >TASK [container-registry : configure DOCKER_STORAGE_OPTIONS in /etc/sysconfig/docker-storage] *** >Friday 21 September 2018 08:19:54 -0400 (0:00:00.600) 0:03:17.338 ****** >changed: [ceph-0] => {"backup": "", "changed": true, "msg": "line replaced"} > >TASK [container-registry : configure DOCKER_NETWORK_OPTIONS in /etc/sysconfig/docker-network] *** >Friday 21 September 2018 08:19:55 -0400 (0:00:00.257) 0:03:17.596 ****** >changed: [ceph-0] => {"backup": "", "changed": true, "msg": "line replaced"} > >TASK [container-registry : ensure docker group exists] ************************* >Friday 21 September 2018 08:19:55 -0400 (0:00:00.252) 0:03:17.849 ****** >changed: [ceph-0] => {"changed": true, "gid": 1003, "name": "docker", "state": "present", "system": false} > >TASK [container-registry : add deployment user to docker group] **************** >Friday 21 September 2018 08:19:55 -0400 (0:00:00.245) 0:03:18.094 ****** >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >RUNNING HANDLER [container-registry : restart docker] ************************** >Friday 21 September 2018 08:19:55 -0400 (0:00:00.025) 0:03:18.120 ****** >changed: [ceph-0] => {"changed": true, "cmd": ["/bin/true"], "delta": "0:00:00.002297", "end": "2018-09-21 08:19:54.794664", "rc": 0, "start": "2018-09-21 08:19:54.792367", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} > >RUNNING HANDLER [container-registry : Docker | reload systemd] ***************** >Friday 21 September 2018 08:19:55 -0400 (0:00:00.249) 0:03:18.370 ****** >ok: [ceph-0] => {"changed": false, "name": null, "status": {}} > >RUNNING HANDLER [container-registry : Docker | reload docker] ****************** 
>Friday 21 September 2018 08:19:56 -0400 (0:00:00.286) 0:03:18.657 ****** >changed: [ceph-0] => {"changed": true, "name": "docker", "state": "started", "status": {"ActiveEnterTimestampMonotonic": "0", "ActiveExitTimestampMonotonic": "0", "ActiveState": "inactive", "After": "rhel-push-plugin.socket registries.service system.slice basic.target network.target systemd-journald.socket docker-storage-setup.service", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "no", "AssertTimestampMonotonic": "0", "Before": "paunch-container-shutdown.service shutdown.target", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "no", "ConditionTimestampMonotonic": "0", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Docker Application Container Engine", "DevicePolicy": "auto", "Documentation": "http://docs.docker.com", "DropInPaths": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "Environment": "GOTRACEBACK=crash DOCKER_HTTP_HOST_COMPAT=1 PATH=/usr/libexec/docker:/usr/bin:/usr/sbin", "EnvironmentFile": "/etc/sysconfig/docker-network (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "0", "ExecMainStartTimestampMonotonic": "0", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/bin/dockerd-current ; argv[]=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/docker.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "docker.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestampMonotonic": "0", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "1048576", "LimitNPROC": "1048576", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "22973", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "0", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": 
"docker.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "main", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RequiredBy": "docker-cleanup.service", "Requires": "basic.target docker-cleanup.timer registries.service rhel-push-plugin.socket", "Restart": "on-abnormal", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "dead", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "0", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "disabled", "Wants": "docker-storage-setup.service system.slice", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0"}} > >RUNNING HANDLER [container-registry : Docker | pause while Docker restarts] **** >Friday 21 September 2018 08:19:57 -0400 (0:00:01.587) 0:03:20.244 ****** >Pausing for 10 seconds >(ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort) >[container-registry : Docker | pause while Docker restarts] >Waiting for docker restart: >ok: [ceph-0] => {"changed": false, "delta": 10, "echo": true, "rc": 0, "start": "2018-09-21 08:19:57.872231", "stderr": "", "stdout": "Paused for 10.0 seconds", "stop": "2018-09-21 08:20:07.872377", "user_input": ""} > >RUNNING HANDLER [container-registry : Docker | wait for docker] **************** >Friday 21 September 2018 08:20:07 -0400 (0:00:10.118) 0:03:30.362 ****** >changed: [ceph-0] => {"attempts": 1, "changed": true, "cmd": ["/usr/bin/docker", "images"], "delta": "0:00:00.032000", "end": "2018-09-21 08:20:07.071844", "rc": 0, "start": "2018-09-21 08:20:07.039844", "stderr": "", "stderr_lines": [], "stdout": "REPOSITORY TAG IMAGE ID CREATED SIZE", "stdout_lines": ["REPOSITORY TAG IMAGE ID CREATED SIZE"]} > >TASK [container-registry : enable and start docker] **************************** >Friday 21 September 2018 08:20:08 -0400 (0:00:00.279) 0:03:30.642 ****** >changed: [ceph-0] => {"changed": true, "enabled": true, "name": "docker", "state": "started", "status": {"ActiveEnterTimestamp": "Fri 2018-09-21 08:19:56 EDT", "ActiveEnterTimestampMonotonic": "430766927", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "rhel-push-plugin.socket registries.service system.slice basic.target network.target systemd-journald.socket docker-storage-setup.service", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Fri 2018-09-21 08:19:55 EDT", "AssertTimestampMonotonic": "429537729", "Before": "paunch-container-shutdown.service shutdown.target", "BlockIOAccounting": "no", 
"BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Fri 2018-09-21 08:19:55 EDT", "ConditionTimestampMonotonic": "429537729", "Conflicts": "shutdown.target", "ControlGroup": "/system.slice/docker.service", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Docker Application Container Engine", "DevicePolicy": "auto", "Documentation": "http://docs.docker.com", "DropInPaths": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "Environment": "GOTRACEBACK=crash DOCKER_HTTP_HOST_COMPAT=1 PATH=/usr/libexec/docker:/usr/bin:/usr/sbin", "EnvironmentFile": "/etc/sysconfig/docker-network (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "13718", "ExecMainStartTimestamp": "Fri 2018-09-21 08:19:55 EDT", "ExecMainStartTimestampMonotonic": "429539349", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/bin/dockerd-current ; argv[]=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES ; ignore_errors=no ; start_time=[Fri 2018-09-21 08:19:55 EDT] ; stop_time=[n/a] ; pid=13718 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/docker.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "docker.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Fri 2018-09-21 08:19:55 EDT", "InactiveExitTimestampMonotonic": "429539384", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "1048576", "LimitNPROC": "1048576", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "22973", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "13718", "MemoryAccounting": "no", "MemoryCurrent": "62119936", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "docker.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "main", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", 
"RefuseManualStop": "no", "RemainAfterExit": "no", "RequiredBy": "docker-cleanup.service", "Requires": "basic.target docker-cleanup.timer registries.service rhel-push-plugin.socket", "Restart": "on-abnormal", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "17", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "0", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "disabled", "Wants": "docker-storage-setup.service system.slice", "WatchdogTimestamp": "Fri 2018-09-21 08:19:56 EDT", "WatchdogTimestampMonotonic": "430766808", "WatchdogUSec": "0"}} > >TASK [NTP settings] ************************************************************ >Friday 21 September 2018 08:20:08 -0400 (0:00:00.327) 0:03:30.969 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >ok: [ceph-0] => {"ansible_facts": {"ntp_install_packages": false, "ntp_servers": ["10.35.255.6"]}, "changed": false} > >TASK [Install ntpdate] ********************************************************* >Friday 21 September 2018 08:20:08 -0400 (0:00:00.135) 0:03:31.104 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Ensure system is NTP time synced] **************************************** >Friday 21 September 2018 08:20:08 -0400 (0:00:00.105) 0:03:31.209 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >changed: [ceph-0] => {"changed": true, "cmd": ["ntpdate", "-u", "10.35.255.6"], "delta": "0:00:07.295008", "end": "2018-09-21 08:20:15.216233", "rc": 0, "start": "2018-09-21 08:20:07.921225", "stderr": "", "stderr_lines": [], "stdout": "21 Sep 08:20:15 ntpdate[13845]: step time server 10.35.255.6 offset 1.039994 sec", "stdout_lines": ["21 Sep 08:20:15 ntpdate[13845]: step time server 10.35.255.6 offset 1.039994 sec"]} > >PLAY [External deployment step 1] ********************************************** > >TASK [set blacklisted_hostnames] *********************************************** >Friday 21 September 2018 08:20:15 -0400 (0:00:06.536) 0:03:37.745 ****** >ok: [undercloud] => {"ansible_facts": {"blacklisted_hostnames": []}, "changed": false} > >TASK [create ceph-ansible temp dirs] ******************************************* >Friday 21 September 2018 08:20:15 -0400 (0:00:00.048) 0:03:37.794 ****** >changed: [undercloud] => 
(item=/var/lib/mistral/overcloud/ceph-ansible/group_vars) => {"changed": true, "gid": 42430, "group": "mistral", "item": "/var/lib/mistral/overcloud/ceph-ansible/group_vars", "mode": "0755", "owner": "mistral", "path": "/var/lib/mistral/overcloud/ceph-ansible/group_vars", "size": 6, "state": "directory", "uid": 42430} >changed: [undercloud] => (item=/var/lib/mistral/overcloud/ceph-ansible/host_vars) => {"changed": true, "gid": 42430, "group": "mistral", "item": "/var/lib/mistral/overcloud/ceph-ansible/host_vars", "mode": "0755", "owner": "mistral", "path": "/var/lib/mistral/overcloud/ceph-ansible/host_vars", "size": 6, "state": "directory", "uid": 42430} >changed: [undercloud] => (item=/var/lib/mistral/overcloud/ceph-ansible/fetch_dir) => {"changed": true, "gid": 42430, "group": "mistral", "item": "/var/lib/mistral/overcloud/ceph-ansible/fetch_dir", "mode": "0755", "owner": "mistral", "path": "/var/lib/mistral/overcloud/ceph-ansible/fetch_dir", "size": 6, "state": "directory", "uid": 42430} > >TASK [generate inventory] ****************************************************** >Friday 21 September 2018 08:20:15 -0400 (0:00:00.488) 0:03:38.283 ****** >changed: [undercloud] => {"changed": true, "checksum": "70f574ea2fb21396bf6ac81090ca97f559b9104a", "dest": "/var/lib/mistral/overcloud/ceph-ansible/inventory.yml", "gid": 42430, "group": "mistral", "md5sum": "79f29936418de452deeff7da1c6e53ef", "mode": "0644", "owner": "mistral", "size": 525, "src": "/tmp/ansible-/ansible-tmp-1537532416.13-129250107558748/source", "state": "file", "uid": 42430} > >TASK [set ceph-ansible group vars all] ***************************************** >Friday 21 September 2018 08:20:16 -0400 (0:00:00.636) 0:03:38.920 ****** >ok: [undercloud] => {"ansible_facts": {"ceph_ansible_group_vars_all": {"ceph_conf_overrides": {"global": {"osd_pool_default_pg_num": 32, "osd_pool_default_pgp_num": 32, "osd_pool_default_size": 1, "rgw_keystone_accepted_roles": "Member, admin", "rgw_keystone_admin_domain": "default", "rgw_keystone_admin_password": "eL7oG66XSFvB8ztoFshxMfuZo", "rgw_keystone_admin_project": "service", "rgw_keystone_admin_user": "swift", "rgw_keystone_api_version": 3, "rgw_keystone_implicit_tenants": "true", "rgw_keystone_revocation_interval": "0", "rgw_keystone_url": "http://172.17.1.15:5000", "rgw_s3_auth_use_keystone": "true"}}, "ceph_docker_image": "rhceph", "ceph_docker_image_tag": "3-12", "ceph_docker_registry": "192.168.24.1:8787", "ceph_origin": "distro", "ceph_stable": true, "cluster": "ceph", "cluster_network": "172.17.4.0/24", "containerized_deployment": true, "docker": true, "fsid": "8fedf068-bd95-11e8-ba69-5254006eda59", "generate_fsid": false, "ip_version": "ipv4", "keys": [{"caps": {"mgr": "allow *", "mon": "profile rbd", "osd": "profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics"}, "key": "AQC93KRbAAAAABAA70vXmXELJWdqPtg4IeQHzw==", "mode": "0600", "name": "client.openstack"}, {"caps": {"mds": "allow *", "mgr": "allow *", "mon": "allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'", "osd": "allow rw"}, "key": "AQC93KRbAAAAABAAh3WQEraYvoGJBcWWeAglYg==", "mode": "0600", "name": "client.manila"}, {"caps": {"mgr": "allow *", "mon": "allow rw", "osd": "allow rwx"}, "key": "AQC93KRbAAAAABAAJK/AdOCua9UOcCGeve+zYA==", "mode": "0600", "name": "client.radosgw"}], "monitor_address_block": "172.17.3.0/24", "ntp_service_enabled": false, "openstack_config": true, 
"openstack_keys": [{"caps": {"mgr": "allow *", "mon": "profile rbd", "osd": "profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics"}, "key": "AQC93KRbAAAAABAA70vXmXELJWdqPtg4IeQHzw==", "mode": "0600", "name": "client.openstack"}, {"caps": {"mds": "allow *", "mgr": "allow *", "mon": "allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'", "osd": "allow rw"}, "key": "AQC93KRbAAAAABAAh3WQEraYvoGJBcWWeAglYg==", "mode": "0600", "name": "client.manila"}, {"caps": {"mgr": "allow *", "mon": "allow rw", "osd": "allow rwx"}, "key": "AQC93KRbAAAAABAAJK/AdOCua9UOcCGeve+zYA==", "mode": "0600", "name": "client.radosgw"}], "openstack_pools": [{"application": "rbd", "name": "images", "pg_num": 32, "rule_name": "replicated_rule"}, {"application": "openstack_gnocchi", "name": "metrics", "pg_num": 32, "rule_name": "replicated_rule"}, {"application": "rbd", "name": "backups", "pg_num": 32, "rule_name": "replicated_rule"}, {"application": "rbd", "name": "vms", "pg_num": 32, "rule_name": "replicated_rule"}, {"application": "rbd", "name": "volumes", "pg_num": 32, "rule_name": "replicated_rule"}], "pools": [], "public_network": "172.17.3.0/24", "user_config": true}}, "changed": false} > >TASK [generate ceph-ansible group vars all] ************************************ >Friday 21 September 2018 08:20:16 -0400 (0:00:00.063) 0:03:38.984 ****** >changed: [undercloud] => {"changed": true, "checksum": "7a73a0fb51019e536cdd2bc08f21576ca82d8101", "dest": "/var/lib/mistral/overcloud/ceph-ansible/group_vars/all.yml", "gid": 42430, "group": "mistral", "md5sum": "3b6d97838cd9d6feac45e879e792ab65", "mode": "0644", "owner": "mistral", "size": 3078, "src": "/tmp/ansible-/ansible-tmp-1537532416.55-118232899664942/source", "state": "file", "uid": 42430} > >TASK [set ceph-ansible extra vars] ********************************************* >Friday 21 September 2018 08:20:16 -0400 (0:00:00.351) 0:03:39.336 ****** >ok: [undercloud] => {"ansible_facts": {"ceph_ansible_extra_vars": {"fetch_directory": "/var/lib/mistral/overcloud/ceph-ansible/fetch_dir", "ireallymeanit": "yes"}}, "changed": false} > >TASK [generate ceph-ansible extra vars] **************************************** >Friday 21 September 2018 08:20:16 -0400 (0:00:00.050) 0:03:39.386 ****** >changed: [undercloud] => {"changed": true, "checksum": "736efc435c358cb150f966050ebc3ab5061819cb", "dest": "/var/lib/mistral/overcloud/ceph-ansible/extra_vars.yml", "gid": 42430, "group": "mistral", "md5sum": "2bc808d342a6452fceb69c11f7bc8c1e", "mode": "0644", "owner": "mistral", "size": 88, "src": "/tmp/ansible-/ansible-tmp-1537532416.93-234889957581394/source", "state": "file", "uid": 42430} > >TASK [generate nodes-uuid data file] ******************************************* >Friday 21 September 2018 08:20:17 -0400 (0:00:00.353) 0:03:39.739 ****** >changed: [undercloud] => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/mistral/overcloud/ceph-ansible/nodes_uuid_data.json", "gid": 42430, "group": "mistral", "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0644", "owner": "mistral", "size": 2, "src": "/tmp/ansible-/ansible-tmp-1537532417.28-162069697035443/source", "state": "file", "uid": 42430} > >TASK [generate nodes-uuid playbook] ******************************************** >Friday 21 September 2018 08:20:17 -0400 (0:00:00.351) 0:03:40.090 ****** >changed: [undercloud] => {"changed": true, "checksum": 
"6295759c7c940d5f447c8f2aa21ca4b89c07424a", "dest": "/var/lib/mistral/overcloud/ceph-ansible/nodes_uuid_playbook.yml", "gid": 42430, "group": "mistral", "md5sum": "3e3401cf992ddfe2f64ba89ba32d2941", "mode": "0644", "owner": "mistral", "size": 527, "src": "/tmp/ansible-/ansible-tmp-1537532417.64-24641415896708/source", "state": "file", "uid": 42430} > >TASK [run nodes-uuid] ********************************************************** >Friday 21 September 2018 08:20:17 -0400 (0:00:00.339) 0:03:40.430 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [set ceph-ansible params from Heat] *************************************** >Friday 21 September 2018 08:20:17 -0400 (0:00:00.035) 0:03:40.465 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [set ceph-ansible playbooks] ********************************************** >Friday 21 September 2018 08:20:18 -0400 (0:00:00.033) 0:03:40.499 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [set ceph-ansible command] ************************************************ >Friday 21 September 2018 08:20:18 -0400 (0:00:00.038) 0:03:40.537 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [run ceph-ansible] ******************************************************** >Friday 21 September 2018 08:20:18 -0400 (0:00:00.032) 0:03:40.570 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [set ceph-ansible group vars mgrs] **************************************** >Friday 21 September 2018 08:20:18 -0400 (0:00:00.036) 0:03:40.606 ****** >ok: [undercloud] => {"ansible_facts": {"ceph_ansible_group_vars_mgrs": {"ceph_mgr_docker_extra_env": "-e MGR_DASHBOARD=0"}}, "changed": false} > >TASK [generate ceph-ansible group vars mgrs] *********************************** >Friday 21 September 2018 08:20:18 -0400 (0:00:00.047) 0:03:40.654 ****** >changed: [undercloud] => {"changed": true, "checksum": "06d130f3471f2ac09bb0161450878cf64bafd8af", "dest": "/var/lib/mistral/overcloud/ceph-ansible/group_vars/mgrs.yml", "gid": 42430, "group": "mistral", "md5sum": "0d3c03a4186ad82120a728e0470a49d9", "mode": "0644", "owner": "mistral", "size": 46, "src": "/tmp/ansible-/ansible-tmp-1537532418.2-153575066141455/source", "state": "file", "uid": 42430} > >TASK [set ceph-ansible group vars mons] **************************************** >Friday 21 September 2018 08:20:18 -0400 (0:00:00.364) 0:03:41.018 ****** >ok: [undercloud] => {"ansible_facts": {"ceph_ansible_group_vars_mons": {"admin_secret": "AQC93KRbAAAAABAAEqpN2MbYxyEdU7ZDAan4TA==", "monitor_secret": "AQC93KRbAAAAABAAr7ULRVGq1MAPBiWDa06UVA=="}}, "changed": false} > >TASK [generate ceph-ansible group vars mons] *********************************** >Friday 21 September 2018 08:20:18 -0400 (0:00:00.054) 0:03:41.073 ****** >changed: [undercloud] => {"changed": true, "checksum": "266857ad06f19f25bd99eafec436d22042c0677c", "dest": "/var/lib/mistral/overcloud/ceph-ansible/group_vars/mons.yml", "gid": 42430, "group": "mistral", "md5sum": "8a5c620bcdeb365ef2f05e3da8a0e818", "mode": "0644", "owner": "mistral", "size": 112, "src": "/tmp/ansible-/ansible-tmp-1537532418.63-221655250839325/source", "state": "file", "uid": 42430} > >TASK [set_fact] **************************************************************** >Friday 21 September 2018 08:20:18 -0400 (0:00:00.361) 0:03:41.435 
****** >ok: [undercloud] => {"ansible_facts": {"log_file": "tripleo-container-image-prepare.log"}, "changed": false} > > >TASK [Create temp file for prepare parameter] ********************************** >Friday 21 September 2018 08:20:19 -0400 (0:00:00.061) 0:03:41.497 ****** >changed: [undercloud] => {"changed": true, "gid": 42430, "group": "mistral", "mode": "0600", "owner": "mistral", "path": "/tmp/ansible.T3HgDq-prepare-param", "size": 0, "state": "file", "uid": 42430} > >TASK [Create temp file for role data] ****************************************** >Friday 21 September 2018 08:20:19 -0400 (0:00:00.360) 0:03:41.857 ****** >changed: [undercloud] => {"changed": true, "gid": 42430, "group": "mistral", "mode": "0600", "owner": "mistral", "path": "/tmp/ansible.BHFvYd-role-data", "size": 0, "state": "file", "uid": 42430} > >TASK [Write ContainerImagePrepare parameter file] ****************************** >Friday 21 September 2018 08:20:19 -0400 (0:00:00.182) 0:03:42.040 ****** >changed: [undercloud] => {"changed": true, "checksum": "ee4783631076c19990a802865b8c0a3c25baeba1", "dest": "/tmp/ansible.T3HgDq-prepare-param", "gid": 42430, "group": "mistral", "md5sum": "be85bccfbd1e18c6ab1a8370c364fe60", "mode": "0600", "owner": "mistral", "size": 11187, "src": "/tmp/ansible-/ansible-tmp-1537532419.59-152887983905911/source", "state": "file", "uid": 42430} > >TASK [Write role data file] **************************************************** >Friday 21 September 2018 08:20:19 -0400 (0:00:00.348) 0:03:42.388 ****** >changed: [undercloud] => {"changed": true, "checksum": "f4bd6ad5174a88673a5da2c3b6c2de3827e06b7b", "dest": "/tmp/ansible.BHFvYd-role-data", "gid": 42430, "group": "mistral", "md5sum": "d3ae9b59dea6998091971def17a31a6a", "mode": "0600", "owner": "mistral", "size": 13059, "src": "/tmp/ansible-/ansible-tmp-1537532419.93-86762518896174/source", "state": "file", "uid": 42430} > >TASK [Run tripleo-container-image-prepare] ************************************* >Friday 21 September 2018 08:20:20 -0400 (0:00:00.337) 0:03:42.726 ****** > [WARNING]: Consider using 'become', 'become_method', and 'become_user' rather >than running sudo >changed: [undercloud] => {"changed": true, "cmd": "sudo /usr/bin/tripleo-container-image-prepare --roles-file /tmp/ansible.BHFvYd-role-data --environment-file /tmp/ansible.T3HgDq-prepare-param --cleanup partial 2> tripleo-container-image-prepare.log", "delta": "0:00:01.530665", "end": "2018-09-21 08:20:21.923173", "rc": 0, "start": "2018-09-21 08:20:20.392508", "stderr": "", "stderr_lines": [], "stdout": "null\n...", "stdout_lines": ["null", "..."]} > >TASK [Delete param file] ******************************************************* >Friday 21 September 2018 08:20:21 -0400 (0:00:01.714) 0:03:44.441 ****** >changed: [undercloud] => {"changed": true, "path": "/tmp/ansible.T3HgDq-prepare-param", "state": "absent"} > >TASK [Delete role file] ******************************************************** >Friday 21 September 2018 08:20:22 -0400 (0:00:00.178) 0:03:44.619 ****** >changed: [undercloud] => {"changed": true, "path": "/tmp/ansible.BHFvYd-role-data", "state": "absent"} > >TASK [set ceph-ansible group vars clients] ************************************* >Friday 21 September 2018 08:20:22 -0400 (0:00:00.193) 0:03:44.813 ****** >ok: [undercloud] => {"ansible_facts": {"ceph_ansible_group_vars_clients": {}}, "changed": false} > >TASK [generate ceph-ansible group vars clients] ******************************** >Friday 21 September 2018 08:20:22 -0400 (0:00:00.050) 0:03:44.864 
****** >changed: [undercloud] => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/mistral/overcloud/ceph-ansible/group_vars/clients.yml", "gid": 42430, "group": "mistral", "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0644", "owner": "mistral", "size": 2, "src": "/tmp/ansible-/ansible-tmp-1537532422.41-203956276650032/source", "state": "file", "uid": 42430} > >TASK [set ceph-ansible group vars osds] **************************************** >Friday 21 September 2018 08:20:22 -0400 (0:00:00.338) 0:03:45.202 ****** >ok: [undercloud] => {"ansible_facts": {"ceph_ansible_group_vars_osds": {"devices": ["/dev/vdb", "/dev/vdc", "/dev/vdd", "/dev/vde", "/dev/vdf"], "journal_size": 512, "osd_objectstore": "filestore", "osd_scenario": "collocated"}}, "changed": false} > >TASK [generate ceph-ansible group vars osds] *********************************** >Friday 21 September 2018 08:20:22 -0400 (0:00:00.050) 0:03:45.253 ****** >changed: [undercloud] => {"changed": true, "checksum": "a209fd8d503be2b45dc87935a930c08a563088cb", "dest": "/var/lib/mistral/overcloud/ceph-ansible/group_vars/osds.yml", "gid": 42430, "group": "mistral", "md5sum": "114fe63af169ecb1b28b951266282ba7", "mode": "0644", "owner": "mistral", "size": 134, "src": "/tmp/ansible-/ansible-tmp-1537532422.8-117023361640018/source", "state": "file", "uid": 42430} > >PLAY [Overcloud deploy step tasks for 1] *************************************** > >PLAY [Overcloud common deploy step tasks 1] ************************************ > >TASK [Create /var/lib/tripleo-config directory] ******************************** >Friday 21 September 2018 08:20:23 -0400 (0:00:00.360) 0:03:45.614 ****** >changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/tripleo-config", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 6, "state": "directory", "uid": 0} >changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/tripleo-config", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 6, "state": "directory", "uid": 0} >changed: [ceph-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/tripleo-config", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 6, "state": "directory", "uid": 0} > >TASK [Write the puppet step_config manifest] *********************************** >Friday 21 September 2018 08:20:23 -0400 (0:00:00.297) 0:03:45.911 ****** >changed: [controller-0] => {"changed": true, "checksum": "8cc2a8154fe8261f1ad4dbbf7151db6f5d016a04", "dest": "/var/lib/tripleo-config/puppet_step_config.pp", "gid": 0, "group": "root", "md5sum": "ea4a5c9cd9eca53a460514b61dc3d011", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1631, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532423.55-69965300818223/source", "state": "file", "uid": 0} >changed: [compute-0] => {"changed": true, "checksum": "0b7508ea11b5540c4e639bbb30162d8fa1fc1cc5", "dest": "/var/lib/tripleo-config/puppet_step_config.pp", "gid": 0, "group": "root", "md5sum": "43135571b1950c38bbce98ace30272ac", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1641, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532423.6-54878786086174/source", "state": "file", "uid": 0} >changed: [ceph-0] => {"changed": true, "checksum": 
"44355f328588ff032fb9d91a3fdf2a8f427f6ac1", "dest": "/var/lib/tripleo-config/puppet_step_config.pp", "gid": 0, "group": "root", "md5sum": "d14bfa59823532755440579b4b515901", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1589, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532423.6-72623119226078/source", "state": "file", "uid": 0} > >TASK [Create /var/lib/docker-puppet] ******************************************* >Friday 21 September 2018 08:20:24 -0400 (0:00:00.732) 0:03:46.643 ****** >changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/docker-puppet", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 30, "state": "directory", "uid": 0} >changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/docker-puppet", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 30, "state": "directory", "uid": 0} >changed: [ceph-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/docker-puppet", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 30, "state": "directory", "uid": 0} > >TASK [Write docker-puppet.json file] ******************************************* >Friday 21 September 2018 08:20:24 -0400 (0:00:00.380) 0:03:47.023 ****** >changed: [ceph-0] => {"changed": true, "checksum": "1d208f0d6d0218582b65c46e7c4086a30f2fc158", "dest": "/var/lib/docker-puppet/docker-puppet.json", "gid": 0, "group": "root", "md5sum": "58e2a5e28859a0a0dbafbbac0d16a013", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 234, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532424.73-243641066947890/source", "state": "file", "uid": 0} >changed: [compute-0] => {"changed": true, "checksum": "718d49e512e42e8d64857e24d4977852fe0a6ed1", "dest": "/var/lib/docker-puppet/docker-puppet.json", "gid": 0, "group": "root", "md5sum": "0c6763ad8da7b3c0acfde69fe32d9bf6", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2314, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532424.74-48910125232775/source", "state": "file", "uid": 0} >changed: [controller-0] => {"changed": true, "checksum": "1deeae504c884c47c1c934b0fc3e8fba9bad42d1", "dest": "/var/lib/docker-puppet/docker-puppet.json", "gid": 0, "group": "root", "md5sum": "fd2533b685bbe5b7362907052f3ba22f", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 13394, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532424.72-18853638962636/source", "state": "file", "uid": 0} > >TASK [Create /var/lib/docker-config-scripts] *********************************** >Friday 21 September 2018 08:20:25 -0400 (0:00:00.761) 0:03:47.785 ****** >changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/docker-config-scripts", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/docker-config-scripts", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >changed: [ceph-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/docker-config-scripts", "secontext": 
"unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} > >TASK [Clean old /var/lib/docker-container-startup-configs.json file] *********** >Friday 21 September 2018 08:20:25 -0400 (0:00:00.365) 0:03:48.151 ****** >ok: [controller-0] => {"changed": false, "path": "/var/lib/docker-container-startup-configs.json", "state": "absent"} >ok: [compute-0] => {"changed": false, "path": "/var/lib/docker-container-startup-configs.json", "state": "absent"} >ok: [ceph-0] => {"changed": false, "path": "/var/lib/docker-container-startup-configs.json", "state": "absent"} > >TASK [Write docker config scripts] ********************************************* >Friday 21 September 2018 08:20:26 -0400 (0:00:00.404) 0:03:48.556 ****** >changed: [compute-0] => (item={'value': {'content': u'#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n', 'mode': u'0755'}, 'key': u'neutron_ovs_agent_launcher.sh'}) => {"changed": true, "checksum": "72a319c9e7cf5c1343a0c92282d91569626d2bc2", "dest": "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh", "gid": 0, "group": "root", "item": {"key": "neutron_ovs_agent_launcher.sh", "value": {"content": "#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n", "mode": "0755"}}, "md5sum": "48f516886d4b7523fff55b054d1b0457", "mode": "0755", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 599, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532426.19-190814915262168/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nexport OS_PROJECT_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_domain_name)\nexport OS_USER_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken user_domain_name)\nexport OS_PROJECT_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_name)\nexport OS_USERNAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken username)\nexport OS_PASSWORD=$(crudini --get /etc/nova/nova.conf keystone_authtoken password)\nexport OS_AUTH_URL=$(crudini --get /etc/nova/nova.conf keystone_authtoken auth_url)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho "(cellv2) Running cell_v2 host discovery"\ntimeout=600\nloop_wait=30\ndeclare -A discoverable_hosts\nfor host in $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e \'/^nil$/d\' | tr "," " "); do discoverable_hosts[$host]=1; done\ntimeout_at=$(( 
$(date +"%s") + ${timeout} ))\necho "(cellv2) Waiting ${timeout} seconds for hosts to register"\nfinished=0\nwhile : ; do\n for host in $(openstack -q compute service list -c \'Host\' -c \'Zone\' -f value | awk \'$2 != "internal" { print $1 }\'); do\n if (( discoverable_hosts[$host] == 1 )); then\n echo "(cellv2) compute node $host has registered"\n unset discoverable_hosts[$host]\n fi\n done\n finished=1\n for host in "${!discoverable_hosts[@]}"; do\n if (( ${discoverable_hosts[$host]} == 1 )); then\n echo "(cellv2) compute node $host has not registered"\n finished=0\n fi\n done\n remaining=$(( $timeout_at - $(date +"%s") ))\n if (( $finished == 1 )); then\n echo "(cellv2) All nodes registered"\n break\n elif (( $remaining <= 0 )); then\n echo "(cellv2) WARNING: timeout waiting for nodes to register, running host discovery regardless"\n echo "(cellv2) Expected host list:" $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e \'/^nil$/d\' | sort -u | tr \',\' \' \')\n echo "(cellv2) Detected host list:" $(openstack -q compute service list -c \'Host\' -c \'Zone\' -f value | awk \'$2 != "internal" { print $1 }\' | sort -u | tr \'\\n\', \' \')\n break\n else\n echo "(cellv2) Waiting ${remaining} seconds for hosts to register"\n sleep $loop_wait\n fi\ndone\necho "(cellv2) Running host discovery..."\nsu nova -s /bin/bash -c "/usr/bin/nova-manage cell_v2 discover_hosts --by-service --verbose"\n', 'mode': u'0700'}, 'key': u'nova_api_discover_hosts.sh'}) => {"changed": true, "checksum": "4e350e3d48cba294f2ccab34eb03c1dee23e7f82", "dest": "/var/lib/docker-config-scripts/nova_api_discover_hosts.sh", "gid": 0, "group": "root", "item": {"key": "nova_api_discover_hosts.sh", "value": {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_domain_name)\nexport OS_USER_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken user_domain_name)\nexport OS_PROJECT_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_name)\nexport OS_USERNAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken username)\nexport OS_PASSWORD=$(crudini --get /etc/nova/nova.conf keystone_authtoken password)\nexport OS_AUTH_URL=$(crudini --get /etc/nova/nova.conf keystone_authtoken auth_url)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho \"(cellv2) Running cell_v2 host discovery\"\ntimeout=600\nloop_wait=30\ndeclare -A discoverable_hosts\nfor host in $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e '/^nil$/d' | tr \",\" \" \"); do discoverable_hosts[$host]=1; done\ntimeout_at=$(( $(date +\"%s\") + ${timeout} ))\necho \"(cellv2) Waiting ${timeout} seconds for hosts to register\"\nfinished=0\nwhile : ; do\n for host in $(openstack -q compute service list -c 'Host' -c 'Zone' -f value | awk '$2 != \"internal\" { print $1 }'); do\n if (( discoverable_hosts[$host] == 1 )); then\n echo \"(cellv2) compute node $host has registered\"\n unset discoverable_hosts[$host]\n fi\n done\n finished=1\n for host in \"${!discoverable_hosts[@]}\"; do\n if (( ${discoverable_hosts[$host]} == 1 )); then\n echo \"(cellv2) compute node $host has not registered\"\n finished=0\n fi\n done\n remaining=$(( $timeout_at - $(date +\"%s\") ))\n if (( $finished == 1 )); then\n echo \"(cellv2) All nodes registered\"\n break\n elif (( $remaining <= 0 )); then\n echo \"(cellv2) WARNING: timeout waiting for nodes to register, running host discovery regardless\"\n echo \"(cellv2) Expected host list:\" $(hiera -c 
/etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e '/^nil$/d' | sort -u | tr ',' ' ')\n echo \"(cellv2) Detected host list:\" $(openstack -q compute service list -c 'Host' -c 'Zone' -f value | awk '$2 != \"internal\" { print $1 }' | sort -u | tr '\\n', ' ')\n break\n else\n echo \"(cellv2) Waiting ${remaining} seconds for hosts to register\"\n sleep $loop_wait\n fi\ndone\necho \"(cellv2) Running host discovery...\"\nsu nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 discover_hosts --by-service --verbose\"\n", "mode": "0700"}}, "md5sum": "ed5dca102b28b4f992943612dee2dced", "mode": "0700", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2318, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532426.16-273117052124535/source", "state": "file", "uid": 0} >changed: [compute-0] => (item={'value': {'content': u'#!/usr/bin/env python\n#\n# Copyright 2018 Red Hat Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the "License"); you may\n# not use this file except in compliance with the License. You may obtain\n# a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT\n# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the\n# License for the specific language governing permissions and limitations\n# under the License.\nfrom __future__ import print_function\nimport logging\nimport os\nimport pwd\nimport stat\nimport sys\n\nlogging.basicConfig(stream=sys.stdout, level=logging.DEBUG)\nLOG = logging.getLogger(\'nova_statedir\')\n\n\nclass PathManager(object):\n """Helper class to manipulate ownership of a given path"""\n def __init__(self, path):\n self.path = path\n self._update()\n\n def _update(self):\n statinfo = os.stat(self.path)\n self.is_dir = stat.S_ISDIR(statinfo.st_mode)\n self.uid = statinfo.st_uid\n self.gid = statinfo.st_gid\n\n def __str__(self):\n return "uid: {} gid: {} path: {}{}".format(\n self.uid,\n self.gid,\n self.path,\n \'/\' if self.is_dir else \'\'\n )\n\n def has_owner(self, uid, gid):\n return self.uid == uid and self.gid == gid\n\n def has_either(self, uid, gid):\n return self.uid == uid or self.gid == gid\n\n def chown(self, uid, gid):\n target_uid = -1\n target_gid = -1\n if self.uid != uid:\n target_uid = uid\n if self.gid != gid:\n target_gid = gid\n if (target_uid, target_gid) != (-1, -1):\n LOG.info(\'Changing ownership of %s from %d:%d to %d:%d\',\n self.path,\n self.uid,\n self.gid,\n self.uid if target_uid == -1 else target_uid,\n self.gid if target_gid == -1 else target_gid)\n os.chown(self.path, target_uid, target_gid)\n self._update()\n else:\n LOG.info(\'Ownership of %s already %d:%d\',\n self.path,\n uid,\n gid)\n\n\nclass NovaStatedirOwnershipManager(object):\n """Class to manipulate the ownership of the nova statedir (/var/lib/nova).\n\n The nova uid/gid differ on the host and container images. An upgrade\n that switches from host systemd services to docker requires a change in\n ownership. Previously this was a naive recursive chown, however this\n causes issues if nova instance are shared via an NFS mount: any open\n filehandles in qemu/libvirt fail with an I/O error (LP1778465).\n\n Instead the upgrade/FFU ansible tasks now lay down a marker file when\n stopping and disabling the host systemd services. We use this file to\n determine the host nova uid/gid. 
We then walk the tree and update any\n files that have the host uid/gid to the docker nova uid/gid. As files\n owned by root/qemu etc... are ignored this avoids the issues with open\n filehandles. The marker is removed once the tree has been walked.\n\n For subsequent runs, or for a new deployment, we simply ensure that the\n docker nova user/group owns all directories. This is required as the\n directories are created with root ownership in host_prep_tasks (the\n docker nova uid/gid is not known in this context).\n """\n def __init__(self, statedir, upgrade_marker=\'upgrade_marker\',\n nova_user=\'nova\'):\n self.statedir = statedir\n self.nova_user = nova_user\n\n self.upgrade_marker_path = os.path.join(statedir, upgrade_marker)\n self.upgrade = os.path.exists(self.upgrade_marker_path)\n\n self.target_uid, self.target_gid = self._get_nova_ids()\n self.previous_uid, self.previous_gid = self._get_previous_nova_ids()\n self.id_change = (self.target_uid, self.target_gid) != \\\n (self.previous_uid, self.previous_gid)\n\n def _get_nova_ids(self):\n nova_uid, nova_gid = pwd.getpwnam(self.nova_user)[2:4]\n return nova_uid, nova_gid\n\n def _get_previous_nova_ids(self):\n if self.upgrade:\n statinfo = os.stat(self.upgrade_marker_path)\n return statinfo.st_uid, statinfo.st_gid\n else:\n return self._get_nova_ids()\n\n def _walk(self, top):\n for f in os.listdir(top):\n pathname = os.path.join(top, f)\n\n if pathname == self.upgrade_marker_path:\n continue\n\n pathinfo = PathManager(pathname)\n LOG.info("Checking %s", pathinfo)\n if pathinfo.is_dir:\n # Always chown the directories\n pathinfo.chown(self.target_uid, self.target_gid)\n self._walk(pathname)\n elif self.id_change:\n # Only chown files if it\'s an upgrade and the file is owned by\n # the host nova uid/gid\n pathinfo.chown(\n self.target_uid if pathinfo.uid == self.previous_uid\n else pathinfo.uid,\n self.target_gid if pathinfo.gid == self.previous_gid\n else pathinfo.gid\n )\n\n def run(self):\n LOG.info(\'Applying nova statedir ownership\')\n LOG.info(\'Target ownership for %s: %d:%d\',\n self.statedir,\n self.target_uid,\n self.target_gid)\n\n pathinfo = PathManager(self.statedir)\n LOG.info("Checking %s", pathinfo)\n pathinfo.chown(self.target_uid, self.target_gid)\n\n self._walk(self.statedir)\n\n if self.upgrade:\n LOG.info(\'Removing upgrade_marker %s\',\n self.upgrade_marker_path)\n os.unlink(self.upgrade_marker_path)\n\n LOG.info(\'Nova statedir ownership complete\')\n\nif __name__ == \'__main__\':\n NovaStatedirOwnershipManager(\'/var/lib/nova\').run()\n', 'mode': u'0700'}, 'key': u'nova_statedir_ownership.py'}) => {"changed": true, "checksum": "052884875dafcd3e79ee18bebaed25f6994a1c37", "dest": "/var/lib/docker-config-scripts/nova_statedir_ownership.py", "gid": 0, "group": "root", "item": {"key": "nova_statedir_ownership.py", "value": {"content": "#!/usr/bin/env python\n#\n# Copyright 2018 Red Hat Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"); you may\n# not use this file except in compliance with the License. You may obtain\n# a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT\n# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the\n# License for the specific language governing permissions and limitations\n# under the License.\nfrom __future__ import print_function\nimport logging\nimport os\nimport pwd\nimport stat\nimport sys\n\nlogging.basicConfig(stream=sys.stdout, level=logging.DEBUG)\nLOG = logging.getLogger('nova_statedir')\n\n\nclass PathManager(object):\n \"\"\"Helper class to manipulate ownership of a given path\"\"\"\n def __init__(self, path):\n self.path = path\n self._update()\n\n def _update(self):\n statinfo = os.stat(self.path)\n self.is_dir = stat.S_ISDIR(statinfo.st_mode)\n self.uid = statinfo.st_uid\n self.gid = statinfo.st_gid\n\n def __str__(self):\n return \"uid: {} gid: {} path: {}{}\".format(\n self.uid,\n self.gid,\n self.path,\n '/' if self.is_dir else ''\n )\n\n def has_owner(self, uid, gid):\n return self.uid == uid and self.gid == gid\n\n def has_either(self, uid, gid):\n return self.uid == uid or self.gid == gid\n\n def chown(self, uid, gid):\n target_uid = -1\n target_gid = -1\n if self.uid != uid:\n target_uid = uid\n if self.gid != gid:\n target_gid = gid\n if (target_uid, target_gid) != (-1, -1):\n LOG.info('Changing ownership of %s from %d:%d to %d:%d',\n self.path,\n self.uid,\n self.gid,\n self.uid if target_uid == -1 else target_uid,\n self.gid if target_gid == -1 else target_gid)\n os.chown(self.path, target_uid, target_gid)\n self._update()\n else:\n LOG.info('Ownership of %s already %d:%d',\n self.path,\n uid,\n gid)\n\n\nclass NovaStatedirOwnershipManager(object):\n \"\"\"Class to manipulate the ownership of the nova statedir (/var/lib/nova).\n\n The nova uid/gid differ on the host and container images. An upgrade\n that switches from host systemd services to docker requires a change in\n ownership. Previously this was a naive recursive chown, however this\n causes issues if nova instance are shared via an NFS mount: any open\n filehandles in qemu/libvirt fail with an I/O error (LP1778465).\n\n Instead the upgrade/FFU ansible tasks now lay down a marker file when\n stopping and disabling the host systemd services. We use this file to\n determine the host nova uid/gid. We then walk the tree and update any\n files that have the host uid/gid to the docker nova uid/gid. As files\n owned by root/qemu etc... are ignored this avoids the issues with open\n filehandles. The marker is removed once the tree has been walked.\n\n For subsequent runs, or for a new deployment, we simply ensure that the\n docker nova user/group owns all directories. 
This is required as the\n directories are created with root ownership in host_prep_tasks (the\n docker nova uid/gid is not known in this context).\n \"\"\"\n def __init__(self, statedir, upgrade_marker='upgrade_marker',\n nova_user='nova'):\n self.statedir = statedir\n self.nova_user = nova_user\n\n self.upgrade_marker_path = os.path.join(statedir, upgrade_marker)\n self.upgrade = os.path.exists(self.upgrade_marker_path)\n\n self.target_uid, self.target_gid = self._get_nova_ids()\n self.previous_uid, self.previous_gid = self._get_previous_nova_ids()\n self.id_change = (self.target_uid, self.target_gid) != \\\n (self.previous_uid, self.previous_gid)\n\n def _get_nova_ids(self):\n nova_uid, nova_gid = pwd.getpwnam(self.nova_user)[2:4]\n return nova_uid, nova_gid\n\n def _get_previous_nova_ids(self):\n if self.upgrade:\n statinfo = os.stat(self.upgrade_marker_path)\n return statinfo.st_uid, statinfo.st_gid\n else:\n return self._get_nova_ids()\n\n def _walk(self, top):\n for f in os.listdir(top):\n pathname = os.path.join(top, f)\n\n if pathname == self.upgrade_marker_path:\n continue\n\n pathinfo = PathManager(pathname)\n LOG.info(\"Checking %s\", pathinfo)\n if pathinfo.is_dir:\n # Always chown the directories\n pathinfo.chown(self.target_uid, self.target_gid)\n self._walk(pathname)\n elif self.id_change:\n # Only chown files if it's an upgrade and the file is owned by\n # the host nova uid/gid\n pathinfo.chown(\n self.target_uid if pathinfo.uid == self.previous_uid\n else pathinfo.uid,\n self.target_gid if pathinfo.gid == self.previous_gid\n else pathinfo.gid\n )\n\n def run(self):\n LOG.info('Applying nova statedir ownership')\n LOG.info('Target ownership for %s: %d:%d',\n self.statedir,\n self.target_uid,\n self.target_gid)\n\n pathinfo = PathManager(self.statedir)\n LOG.info(\"Checking %s\", pathinfo)\n pathinfo.chown(self.target_uid, self.target_gid)\n\n self._walk(self.statedir)\n\n if self.upgrade:\n LOG.info('Removing upgrade_marker %s',\n self.upgrade_marker_path)\n os.unlink(self.upgrade_marker_path)\n\n LOG.info('Nova statedir ownership complete')\n\nif __name__ == '__main__':\n NovaStatedirOwnershipManager('/var/lib/nova').run()\n", "mode": "0700"}}, "md5sum": "c8d51232f071c7b1fef053299a1b66c0", "mode": "0700", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 6075, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532426.7-20803406018655/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho "Check if secret already exists"\nsecret_href=$(openstack secret list --name swift_root_secret_uuid)\nrc=$?\nif [[ $rc != 0 ]]; then\n echo "Failed to check secrets, check if Barbican in enabled and responding properly"\n exit $rc;\nfi\nif [ -z "$secret_href" ]; then\n echo "Create new secret"\n order_href=$(openstack secret order create --name swift_root_secret_uuid 
--payload-content-type="application/octet-stream" --algorithm aes --bit-length 256 --mode ctr key -f value -c "Order href")\nfi\n', 'mode': u'0700'}, 'key': u'create_swift_secret.sh'}) => {"changed": true, "checksum": "e77b96beec241bb84928d298a778521376225c0d", "dest": "/var/lib/docker-config-scripts/create_swift_secret.sh", "gid": 0, "group": "root", "item": {"key": "create_swift_secret.sh", "value": {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho \"Check if secret already exists\"\nsecret_href=$(openstack secret list --name swift_root_secret_uuid)\nrc=$?\nif [[ $rc != 0 ]]; then\n echo \"Failed to check secrets, check if Barbican in enabled and responding properly\"\n exit $rc;\nfi\nif [ -z \"$secret_href\" ]; then\n echo \"Create new secret\"\n order_href=$(openstack secret order create --name swift_root_secret_uuid --payload-content-type=\"application/octet-stream\" --algorithm aes --bit-length 256 --mode ctr key -f value -c \"Order href\")\nfi\n", "mode": "0700"}}, "md5sum": "9277d70c2fd62961998c5fce0a8aeee2", "mode": "0700", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1125, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532426.69-187871585238304/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n', 'mode': u'0755'}, 'key': u'neutron_ovs_agent_launcher.sh'}) => {"changed": true, "checksum": "72a319c9e7cf5c1343a0c92282d91569626d2bc2", "dest": "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh", "gid": 0, "group": "root", "item": {"key": "neutron_ovs_agent_launcher.sh", "value": {"content": "#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n", "mode": "0755"}}, "md5sum": "48f516886d4b7523fff55b054d1b0457", "mode": "0755", "owner": "root", "secontext": 
"system_u:object_r:var_lib_t:s0", "size": 599, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532427.23-82449161196007/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\necho "retrieve key_id"\nloop_wait=2\nfor i in {0..5}; do\n #TODO update uuid from mistral here too\n secret_href=$(openstack secret list --name swift_root_secret_uuid)\n if [ "$secret_href" ]; then\n echo "set key_id in keymaster.conf"\n secret_href=$(openstack secret list --name swift_root_secret_uuid -f value -c "Secret href")\n crudini --set /etc/swift/keymaster.conf kms_keymaster key_id ${secret_href##*/}\n exit 0\n else\n echo "no key, wait for $loop_wait and check again"\n sleep $loop_wait\n ((loop_wait++))\n fi\ndone\necho "Failed to set secret in keymaster.conf, check if Barbican is enabled and responding properly"\nexit 1\n', 'mode': u'0700'}, 'key': u'set_swift_keymaster_key_id.sh'}) => {"changed": true, "checksum": "9c2474fa6e4a8869674b689206eb1a1658a28fc6", "dest": "/var/lib/docker-config-scripts/set_swift_keymaster_key_id.sh", "gid": 0, "group": "root", "item": {"key": "set_swift_keymaster_key_id.sh", "value": {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\necho \"retrieve key_id\"\nloop_wait=2\nfor i in {0..5}; do\n #TODO update uuid from mistral here too\n secret_href=$(openstack secret list --name swift_root_secret_uuid)\n if [ \"$secret_href\" ]; then\n echo \"set key_id in keymaster.conf\"\n secret_href=$(openstack secret list --name swift_root_secret_uuid -f value -c \"Secret href\")\n crudini --set /etc/swift/keymaster.conf kms_keymaster key_id ${secret_href##*/}\n exit 0\n else\n echo \"no key, wait for $loop_wait and check again\"\n sleep $loop_wait\n ((loop_wait++))\n fi\ndone\necho \"Failed to set secret in keymaster.conf, check if Barbican is enabled and responding properly\"\nexit 1\n", "mode": "0700"}}, "md5sum": "054225f8957e4457ef2103ce24d44b04", "mode": "0700", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1275, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532427.72-246487119049140/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nset -eux\nSTEP=$1\nTAGS=$2\nCONFIG=$3\nEXTRA_ARGS=${4:-\'\'}\nif [ -d /tmp/puppet-etc ]; then\n # ignore 
copy failures as these may be the same file depending on docker mounts\n cp -a /tmp/puppet-etc/* /etc/puppet || true\nfi\necho "{\\"step\\": ${STEP}}" > /etc/puppet/hieradata/docker.json\nexport FACTER_uuid=docker\nset +e\npuppet apply $EXTRA_ARGS \\\n --verbose \\\n --detailed-exitcodes \\\n --summarize \\\n --color=false \\\n --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules \\\n --tags $TAGS \\\n -e "${CONFIG}"\nrc=$?\nset -e\nset +ux\nif [ $rc -eq 2 -o $rc -eq 0 ]; then\n exit 0\nfi\nexit $rc\n', 'mode': u'0700'}, 'key': u'docker_puppet_apply.sh'}) => {"changed": true, "checksum": "93afaa6df42c9ead7768b295fa901f83ae1b39ef", "dest": "/var/lib/docker-config-scripts/docker_puppet_apply.sh", "gid": 0, "group": "root", "item": {"key": "docker_puppet_apply.sh", "value": {"content": "#!/bin/bash\nset -eux\nSTEP=$1\nTAGS=$2\nCONFIG=$3\nEXTRA_ARGS=${4:-''}\nif [ -d /tmp/puppet-etc ]; then\n # ignore copy failures as these may be the same file depending on docker mounts\n cp -a /tmp/puppet-etc/* /etc/puppet || true\nfi\necho \"{\\\"step\\\": ${STEP}}\" > /etc/puppet/hieradata/docker.json\nexport FACTER_uuid=docker\nset +e\npuppet apply $EXTRA_ARGS \\\n --verbose \\\n --detailed-exitcodes \\\n --summarize \\\n --color=false \\\n --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules \\\n --tags $TAGS \\\n -e \"${CONFIG}\"\nrc=$?\nset -e\nset +ux\nif [ $rc -eq 2 -o $rc -eq 0 ]; then\n exit 0\nfi\nexit $rc\n", "mode": "0700"}}, "md5sum": "709b2caef95cc7486f9b851414e71133", "mode": "0700", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 653, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532428.22-49296681813332/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nDEFID=$(nova-manage cell_v2 list_cells | sed -e \'1,3d\' -e \'$d\' | awk -F \' *| *\' \'$2 == "default" {print $4}\')\nif [ "$DEFID" ]; then\n echo "(cellv2) Updating default cell_v2 cell $DEFID"\n su nova -s /bin/bash -c "/usr/bin/nova-manage cell_v2 update_cell --cell_uuid $DEFID --name=default"\nelse\n echo "(cellv2) Creating default cell_v2 cell"\n su nova -s /bin/bash -c "/usr/bin/nova-manage cell_v2 create_cell --name=default"\nfi\n', 'mode': u'0700'}, 'key': u'nova_api_ensure_default_cell.sh'}) => {"changed": true, "checksum": "0a839197c2fa15204014befb1c771a17aea5bdd1", "dest": "/var/lib/docker-config-scripts/nova_api_ensure_default_cell.sh", "gid": 0, "group": "root", "item": {"key": "nova_api_ensure_default_cell.sh", "value": {"content": "#!/bin/bash\nDEFID=$(nova-manage cell_v2 list_cells | sed -e '1,3d' -e '$d' | awk -F ' *| *' '$2 == \"default\" {print $4}')\nif [ \"$DEFID\" ]; then\n echo \"(cellv2) Updating default cell_v2 cell $DEFID\"\n su nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 update_cell --cell_uuid $DEFID --name=default\"\nelse\n echo \"(cellv2) Creating default cell_v2 cell\"\n su nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 create_cell --name=default\"\nfi\n", "mode": "0700"}}, "md5sum": "12a4a82656ddaae342942097b752d9db", "mode": "0700", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 442, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532428.72-110409871247687/source", "state": "file", "uid": 0} > >TASK [Set docker_config_default fact] ****************************************** >Friday 21 September 2018 08:20:29 -0400 (0:00:03.145) 0:03:51.701 ****** >ok: [controller-0] => (item=None) => 
{"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >ok: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >ok: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >ok: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >ok: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >ok: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >ok: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >ok: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >ok: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >ok: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >ok: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >ok: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >ok: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >ok: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >ok: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >ok: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >ok: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >ok: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >ok: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >ok: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >ok: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} > >TASK [Set docker_startup_configs_with_default fact] **************************** >Friday 21 September 2018 08:20:29 -0400 (0:00:00.204) 0:03:51.906 ****** >ok: [ceph-0] => {"censored": "the output has been hidden due to the fact 
that 'no_log: true' was specified for this result", "changed": false} >ok: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >ok: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} > >TASK [Write docker-container-startup-configs] ********************************** >Friday 21 September 2018 08:20:30 -0400 (0:00:00.659) 0:03:52.565 ****** >changed: [controller-0] => {"changed": true, "checksum": "45e5fed1b92d638e6758da0740c95af8ea779cd4", "dest": "/var/lib/docker-container-startup-configs.json", "gid": 0, "group": "root", "md5sum": "0ea008d0371d8cc5db6a06f03067c6a3", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 105523, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532430.13-20494600297202/source", "state": "file", "uid": 0} >changed: [ceph-0] => {"changed": true, "checksum": "6bb1fdad708c26cff485c749874b2b9b26e98b18", "dest": "/var/lib/docker-container-startup-configs.json", "gid": 0, "group": "root", "md5sum": "1a28778686afe4d2414b90804384c497", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1055, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532430.19-162139788784985/source", "state": "file", "uid": 0} >changed: [compute-0] => {"changed": true, "checksum": "a3cfd500942bca28c365ef8e45429a7a093682a8", "dest": "/var/lib/docker-container-startup-configs.json", "gid": 0, "group": "root", "md5sum": "04fe359b0d4abf8436349178dced42e8", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 12300, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532430.16-125459508806117/source", "state": "file", "uid": 0} > >TASK [Write per-step docker-container-startup-configs] ************************* >Friday 21 September 2018 08:20:30 -0400 (0:00:00.625) 0:03:53.190 ****** >changed: [compute-0] => (item={'value': {}, 'key': u'step_1'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_1.json", "gid": 0, "group": "root", "item": {"key": "step_1", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532430.79-242699823777111/source", "state": "file", "uid": 0} >changed: [ceph-0] => (item={'value': {}, 'key': u'step_1'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_1.json", "gid": 0, "group": "root", "item": {"key": "step_1", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532430.82-127503593646630/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'cinder_volume_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-20.1' '192.168.24.1:8787/rhosp14/openstack-cinder-volume:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', 
u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'mysql_image_tag': {'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1' '192.168.24.1:8787/rhosp14/openstack-mariadb:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'mysql_data_ownership': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1', 'command': [u'chown', u'-R', u'mysql:', u'/var/lib/mysql'], 'user': u'root', 'volumes': [u'/var/lib/mysql:/var/lib/mysql'], 'net': u'host', 'detach': False}, 'redis_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-redis:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-redis:2018-09-20.1' '192.168.24.1:8787/rhosp14/openstack-redis:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'mysql_bootstrap': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS', u'KOLLA_BOOTSTRAP=True', u'DB_MAX_TIMEOUT=60', u'DB_CLUSTERCHECK_PASSWORD=01uMEtrcy1XQLgnZ0spBcEeFG', u'DB_ROOT_PASSWORD=VmByi3iDWE'], 'command': [u'bash', u'-ec', u'if [ -e /var/lib/mysql/mysql ]; then exit 0; fi\necho -e "\\n[mysqld]\\nwsrep_provider=none" >> /etc/my.cnf\nkolla_set_configs\nsudo -u mysql -E kolla_extend_start\nmysqld_safe --skip-networking --wsrep-on=OFF &\ntimeout ${DB_MAX_TIMEOUT} /bin/bash -c \'until mysqladmin -uroot -p"${DB_ROOT_PASSWORD}" ping 2>/dev/null; do sleep 1; done\'\nmysql -uroot -p"${DB_ROOT_PASSWORD}" -e "CREATE USER \'clustercheck\'@\'localhost\' IDENTIFIED BY \'${DB_CLUSTERCHECK_PASSWORD}\';"\nmysql -uroot -p"${DB_ROOT_PASSWORD}" -e "GRANT PROCESS ON *.* TO \'clustercheck\'@\'localhost\' WITH GRANT OPTION;"\ntimeout ${DB_MAX_TIMEOUT} mysqladmin -uroot -p"${DB_ROOT_PASSWORD}" shutdown'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/mysql.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro', u'/var/lib/mysql:/var/lib/mysql'], 'net': u'host', 'detach': False}, 'haproxy_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag 
'192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-20.1' '192.168.24.1:8787/rhosp14/openstack-haproxy:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'rabbitmq_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-20.1' '192.168.24.1:8787/rhosp14/openstack-rabbitmq:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'cinder_backup_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-20.1' '192.168.24.1:8787/rhosp14/openstack-cinder-backup:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'rabbitmq_bootstrap': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS', u'KOLLA_BOOTSTRAP=True', u'RABBITMQ_CLUSTER_COOKIE=bo2CgGlbFlVu6tTAeUPw'], 'volumes': [u'/var/lib/kolla/config_files/rabbitmq.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro', u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/var/lib/rabbitmq:/var/lib/rabbitmq'], 'net': u'host', 'privileged': False}, 'memcached': {'start_order': 0, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-memcached:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'source /etc/sysconfig/memcached; /usr/bin/memcached -p ${PORT} -u ${USER} -m ${CACHESIZE} -c ${MAXCONN} $OPTIONS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}}, 'key': u'step_1'}) => {"changed": true, "checksum": "54d85005df95b7b2528f858dd51644c59924785b", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_1.json", "gid": 0, "group": "root", "item": {"key": "step_1", "value": {"cinder_backup_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-20.1' 
'192.168.24.1:8787/rhosp14/openstack-cinder-backup:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-20.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "cinder_volume_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-20.1' '192.168.24.1:8787/rhosp14/openstack-cinder-volume:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-20.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "haproxy_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-20.1' '192.168.24.1:8787/rhosp14/openstack-haproxy:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-20.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "memcached": {"command": ["/bin/bash", "-c", "source /etc/sysconfig/memcached; /usr/bin/memcached -p ${PORT} -u ${USER} -m ${CACHESIZE} -c ${MAXCONN} $OPTIONS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-memcached:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "start_order": 0, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro"]}, "mysql_bootstrap": {"command": ["bash", "-ec", "if [ -e /var/lib/mysql/mysql ]; then exit 0; fi\necho -e \"\\n[mysqld]\\nwsrep_provider=none\" >> /etc/my.cnf\nkolla_set_configs\nsudo -u mysql -E kolla_extend_start\nmysqld_safe --skip-networking --wsrep-on=OFF &\ntimeout ${DB_MAX_TIMEOUT} /bin/bash -c 'until mysqladmin -uroot -p\"${DB_ROOT_PASSWORD}\" ping 2>/dev/null; do sleep 1; done'\nmysql -uroot -p\"${DB_ROOT_PASSWORD}\" -e \"CREATE USER 'clustercheck'@'localhost' IDENTIFIED BY '${DB_CLUSTERCHECK_PASSWORD}';\"\nmysql -uroot -p\"${DB_ROOT_PASSWORD}\" -e \"GRANT PROCESS ON *.* TO 'clustercheck'@'localhost' WITH GRANT OPTION;\"\ntimeout ${DB_MAX_TIMEOUT} mysqladmin -uroot -p\"${DB_ROOT_PASSWORD}\" shutdown"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "KOLLA_BOOTSTRAP=True", "DB_MAX_TIMEOUT=60", "DB_CLUSTERCHECK_PASSWORD=01uMEtrcy1XQLgnZ0spBcEeFG", "DB_ROOT_PASSWORD=VmByi3iDWE"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1", "net": "host", "start_order": 1, 
"user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/mysql.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro", "/var/lib/mysql:/var/lib/mysql"]}, "mysql_data_ownership": {"command": ["chown", "-R", "mysql:", "/var/lib/mysql"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1", "net": "host", "start_order": 0, "user": "root", "volumes": ["/var/lib/mysql:/var/lib/mysql"]}, "mysql_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1' '192.168.24.1:8787/rhosp14/openstack-mariadb:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "rabbitmq_bootstrap": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "KOLLA_BOOTSTRAP=True", "RABBITMQ_CLUSTER_COOKIE=bo2CgGlbFlVu6tTAeUPw"], "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-20.1", "net": "host", "privileged": false, "start_order": 0, "volumes": ["/var/lib/kolla/config_files/rabbitmq.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro", "/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/var/lib/rabbitmq:/var/lib/rabbitmq"]}, "rabbitmq_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-20.1' '192.168.24.1:8787/rhosp14/openstack-rabbitmq:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-20.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "redis_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-redis:2018-09-20.1' '192.168.24.1:8787/rhosp14/openstack-redis:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-09-20.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}}}, "md5sum": "3de1ca9ec1dfbd8ff986b217fafbabdc", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 6913, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532430.81-122403995960311/source", "state": "file", "uid": 0} >changed: [ceph-0] => (item={'value': {}, 'key': u'step_3'}) => {"changed": 
true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_3.json", "gid": 0, "group": "root", "item": {"key": "step_3", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532431.32-215391802018124/source", "state": "file", "uid": 0} >changed: [compute-0] => (item={'value': {'neutron_ovs_bridge': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-20.1', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'puppet', u'apply', u'--modulepath', u'/etc/puppet/modules:/usr/share/openstack-puppet/modules', u'--tags', u'file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config', u'-v', u'-e', u'include neutron::agents::ml2::ovs'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/etc/puppet:/etc/puppet:ro', u'/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro', u'/var/run/openvswitch/:/var/run/openvswitch/'], 'net': u'host', 'detach': False, 'privileged': True}, 'nova_statedir_owner': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-09-20.1', 'command': u'/docker-config-scripts/nova_statedir_ownership.py', 'user': u'root', 'volumes': [u'/var/lib/nova:/var/lib/nova:shared', u'/var/lib/docker-config-scripts/:/docker-config-scripts/'], 'detach': False, 'privileged': False}, 'iscsid': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', u'/dev/:/dev/', u'/run/:/run/', u'/sys:/sys', u'/lib/modules:/lib/modules:ro', u'/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_libvirt': {'start_order': 1, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-09-20.1', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': 
[u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/nova_libvirt.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/lib/modules:/lib/modules:ro', u'/dev:/dev', u'/run:/run', u'/sys/fs/cgroup:/sys/fs/cgroup', u'/var/lib/nova:/var/lib/nova:shared', u'/etc/libvirt:/etc/libvirt', u'/var/run/libvirt:/var/run/libvirt', u'/var/lib/libvirt:/var/lib/libvirt', u'/var/log/containers/libvirt:/var/log/libvirt', u'/var/log/libvirt/qemu:/var/log/libvirt/qemu:ro', u'/var/lib/vhost_sockets:/var/lib/vhost_sockets', u'/sys/fs/selinux:/sys/fs/selinux'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_virtlogd': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-09-20.1', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/dev:/dev', u'/run:/run', u'/sys/fs/cgroup:/sys/fs/cgroup', u'/var/lib/nova:/var/lib/nova:shared', u'/var/run/libvirt:/var/run/libvirt', u'/var/lib/libvirt:/var/lib/libvirt', u'/etc/libvirt/qemu:/etc/libvirt/qemu:ro', u'/var/log/libvirt/qemu:/var/log/libvirt/qemu'], 'net': u'host', 'privileged': True, 'restart': u'always'}}, 'key': u'step_3'}) => {"changed": true, "checksum": "b16ccf42d6ef6141d7474bbf9a5bfc479465fb96", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_3.json", "gid": 0, "group": "root", "item": {"key": "step_3", "value": {"iscsid": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-20.1", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", 
"/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro", "/dev/:/dev/", "/run/:/run/", "/sys:/sys", "/lib/modules:/lib/modules:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro"]}, "neutron_ovs_bridge": {"command": ["puppet", "apply", "--modulepath", "/etc/puppet/modules:/usr/share/openstack-puppet/modules", "--tags", "file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config", "-v", "-e", "include neutron::agents::ml2::ovs"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-20.1", "net": "host", "pid": "host", "privileged": true, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/etc/puppet:/etc/puppet:ro", "/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro", "/var/run/openvswitch/:/var/run/openvswitch/"]}, "nova_libvirt": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-09-20.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova_libvirt.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/lib/modules:/lib/modules:ro", "/dev:/dev", "/run:/run", "/sys/fs/cgroup:/sys/fs/cgroup", "/var/lib/nova:/var/lib/nova:shared", "/etc/libvirt:/etc/libvirt", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt", "/var/log/containers/libvirt:/var/log/libvirt", "/var/log/libvirt/qemu:/var/log/libvirt/qemu:ro", "/var/lib/vhost_sockets:/var/lib/vhost_sockets", "/sys/fs/selinux:/sys/fs/selinux"]}, "nova_statedir_owner": {"command": "/docker-config-scripts/nova_statedir_ownership.py", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-09-20.1", "privileged": false, "user": "root", "volumes": ["/var/lib/nova:/var/lib/nova:shared", "/var/lib/docker-config-scripts/:/docker-config-scripts/"]}, "nova_virtlogd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": 
"192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-09-20.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 0, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/dev:/dev", "/run:/run", "/sys/fs/cgroup:/sys/fs/cgroup", "/var/lib/nova:/var/lib/nova:shared", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt", "/etc/libvirt/qemu:/etc/libvirt/qemu:ro", "/var/log/libvirt/qemu:/var/log/libvirt/qemu"]}}}, "md5sum": "4a8e611a5a9c7d3b1fa8124765368fb1", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 5441, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532431.32-21916859596904/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'nova_placement': {'start_order': 1, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-placement:/var/log/httpd', u'/var/lib/kolla/config_files/nova_placement.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_placement/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'restart': u'always'}, 'swift_rsync_fix': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'sed -i "/pid file/d" /var/lib/kolla/config_files/src/etc/rsyncd.conf'], 'user': u'root', 'volumes': [u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:rw'], 'net': u'host', 'detach': False}, 'nova_db_sync': {'start_order': 3, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1', 'command': u"/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage db sync'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro'], 'net': u'host', 'detach': False}, 'heat_engine_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-09-20.1', 'command': u"/usr/bin/bootstrap_host_exec heat_engine su heat -s /bin/bash -c 'heat-manage db_sync'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/lib/config-data/heat/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/heat/etc/heat/:/etc/heat/:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'swift_copy_rings': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-20.1', 'detach': False, 'command': [u'/bin/bash', u'-c', u'cp -v -a -t /etc/swift /swift_ringbuilder/etc/swift/*.gz /swift_ringbuilder/etc/swift/*.builder /swift_ringbuilder/etc/swift/backups'], 'user': u'root', 'volumes': [u'/var/lib/config-data/puppet-generated/swift/etc/swift:/etc/swift:rw', u'/var/lib/config-data/swift_ringbuilder:/swift_ringbuilder:ro']}, 'nova_api_ensure_default_cell': {'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1', 'command': u'/usr/bin/bootstrap_host_exec nova_api /nova_api_ensure_default_cell.sh', 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/docker-config-scripts/nova_api_ensure_default_cell.sh:/nova_api_ensure_default_cell.sh:ro'], 'net': u'host', 'detach': False}, 'keystone_cron': {'start_order': 4, 'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'/bin/bash', u'-c', u'/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', 
u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd', u'/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'panko_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-panko-api:2018-09-20.1', 'command': u"/usr/bin/bootstrap_host_exec panko_api su panko -s /bin/bash -c '/usr/bin/panko-dbsync '", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/panko:/var/log/panko', u'/var/log/containers/httpd/panko-api:/var/log/httpd', u'/var/lib/config-data/panko/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/panko/etc/panko:/etc/panko:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'cinder_backup_init_logs': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'user': u'root', 'volumes': [u'/var/log/containers/cinder:/var/log/cinder'], 'privileged': False}, 'nova_api_db_sync': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1', 'command': u"/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage api_db sync'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro'], 'net': u'host', 'detach': False}, 'iscsid': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', u'/dev/:/dev/', u'/run/:/run/', u'/sys:/sys', u'/lib/modules:/lib/modules:ro', u'/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'keystone_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-20.1', 'environment': [u'KOLLA_BOOTSTRAP=True', u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'/usr/bin/bootstrap_host_exec', u'keystone', u'/usr/local/bin/kolla_start'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd', u'/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'detach': False, 'privileged': False}, 'ceilometer_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'start_order': 0, 'volumes': [u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'user': u'root'}, 'keystone': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd', u'/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'aodh_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-09-20.1', 'command': u'/usr/bin/bootstrap_host_exec aodh_api su aodh -s /bin/bash -c /usr/bin/aodh-dbsync', 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', 
u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/aodh/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/aodh/etc/aodh/:/etc/aodh/:ro', u'/var/log/containers/aodh:/var/log/aodh', u'/var/log/containers/httpd/aodh-api:/var/log/httpd'], 'net': u'host', 'detach': False, 'privileged': False}, 'cinder_volume_init_logs': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'user': u'root', 'volumes': [u'/var/log/containers/cinder:/var/log/cinder'], 'privileged': False}, 'neutron_ovs_bridge': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-20.1', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'puppet', u'apply', u'--modulepath', u'/etc/puppet/modules:/usr/share/openstack-puppet/modules', u'--tags', u'file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config', u'-v', u'-e', u'include neutron::agents::ml2::ovs'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/etc/puppet:/etc/puppet:ro', u'/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro', u'/var/run/openvswitch/:/var/run/openvswitch/'], 'net': u'host', 'detach': False, 'privileged': True}, 'cinder_api_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-20.1', 'command': [u'/usr/bin/bootstrap_host_exec', u'cinder_api', u"su cinder -s /bin/bash -c 'cinder-manage db sync --bump-versions'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/cinder/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro', u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd'], 'net': u'host', 'detach': False, 
'privileged': False}, 'nova_api_map_cell0': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1', 'command': u"/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage cell_v2 map_cell0'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro'], 'net': u'host', 'detach': False}, 'glance_api_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-glance-api:2018-09-20.1', 'environment': [u'KOLLA_BOOTSTRAP=True', u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': u"/usr/bin/bootstrap_host_exec glance_api su glance -s /bin/bash -c '/usr/local/bin/kolla_start'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/glance:/var/log/glance', u'/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/glance:/var/lib/glance:slave'], 'net': u'host', 'detach': False, 'privileged': False}, 'neutron_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-20.1', 'command': [u'/usr/bin/bootstrap_host_exec', u'neutron_api', u'neutron-db-manage', u'upgrade', u'heads'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/log/containers/httpd/neutron-api:/var/log/httpd', u'/var/lib/config-data/neutron/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/neutron/etc/neutron:/etc/neutron:ro', u'/var/lib/config-data/neutron/usr/share/neutron:/usr/share/neutron:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'sahara_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-09-20.1', 'command': 
u"/usr/bin/bootstrap_host_exec sahara_api su sahara -s /bin/bash -c 'sahara-db-manage --config-file /etc/sahara/sahara.conf upgrade head'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/sahara/etc/sahara/:/etc/sahara/:ro', u'/lib/modules:/lib/modules:ro', u'/var/lib/sahara:/var/lib/sahara', u'/var/log/containers/sahara:/var/log/sahara'], 'net': u'host', 'detach': False, 'privileged': False}, 'keystone_bootstrap': {'action': u'exec', 'start_order': 3, 'command': [u'keystone', u'/usr/bin/bootstrap_host_exec', u'keystone', u'keystone-manage', u'bootstrap', u'--bootstrap-password', u'wIdMrXYZVQy05wYJArw8Vja2H'], 'user': u'root'}, 'horizon': {'image': u'192.168.24.1:8787/rhosp14/openstack-horizon:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS', u'ENABLE_IRONIC=yes', u'ENABLE_MANILA=yes', u'ENABLE_HEAT=yes', u'ENABLE_MISTRAL=yes', u'ENABLE_OCTAVIA=yes', u'ENABLE_SAHARA=yes', u'ENABLE_CLOUDKITTY=no', u'ENABLE_FREEZER=no', u'ENABLE_FWAAS=no', u'ENABLE_KARBOR=no', u'ENABLE_DESIGNATE=no', u'ENABLE_MAGNUM=no', u'ENABLE_MURANO=no', u'ENABLE_NEUTRON_LBAAS=no', u'ENABLE_SEARCHLIGHT=no', u'ENABLE_SENLIN=no', u'ENABLE_SOLUM=no', u'ENABLE_TACKER=no', u'ENABLE_TROVE=no', u'ENABLE_WATCHER=no', u'ENABLE_ZAQAR=no', u'ENABLE_ZUN=no'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/horizon/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/horizon:/var/log/horizon', u'/var/log/containers/httpd/horizon:/var/log/httpd', u'/var/www/:/var/www/:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_setup_srv': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-20.1', 'command': [u'chown', u'-R', u'swift:', u'/srv/node'], 'user': u'root', 'volumes': [u'/srv/node:/srv/node']}}, 'key': u'step_3'}) => {"changed": true, "checksum": "d429bf901ad018fe32eaabe240d6aa5ccfc14327", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_3.json", "gid": 0, "group": "root", "item": {"key": "step_3", "value": {"aodh_db_sync": {"command": "/usr/bin/bootstrap_host_exec aodh_api su aodh -s /bin/bash -c /usr/bin/aodh-dbsync", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-09-20.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/aodh/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/aodh/etc/aodh/:/etc/aodh/:ro", "/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd"]}, "ceilometer_init_log": {"command": ["/bin/bash", "-c", "chown -R ceilometer:ceilometer /var/log/ceilometer"], "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-09-20.1", "start_order": 0, "user": "root", "volumes": ["/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_api_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_api", "su cinder -s /bin/bash -c 'cinder-manage db sync --bump-versions'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-20.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/cinder/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_backup_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-20.1", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "cinder_volume_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-20.1", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "glance_api_db_sync": {"command": "/usr/bin/bootstrap_host_exec glance_api su glance -s /bin/bash -c '/usr/local/bin/kolla_start'", "detach": false, "environment": ["KOLLA_BOOTSTRAP=True", "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-glance-api:2018-09-20.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/glance:/var/log/glance", 
"/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/glance:/var/lib/glance:slave"]}, "heat_engine_db_sync": {"command": "/usr/bin/bootstrap_host_exec heat_engine su heat -s /bin/bash -c 'heat-manage db_sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-09-20.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/lib/config-data/heat/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/heat/etc/heat/:/etc/heat/:ro"]}, "horizon": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "ENABLE_IRONIC=yes", "ENABLE_MANILA=yes", "ENABLE_HEAT=yes", "ENABLE_MISTRAL=yes", "ENABLE_OCTAVIA=yes", "ENABLE_SAHARA=yes", "ENABLE_CLOUDKITTY=no", "ENABLE_FREEZER=no", "ENABLE_FWAAS=no", "ENABLE_KARBOR=no", "ENABLE_DESIGNATE=no", "ENABLE_MAGNUM=no", "ENABLE_MURANO=no", "ENABLE_NEUTRON_LBAAS=no", "ENABLE_SEARCHLIGHT=no", "ENABLE_SENLIN=no", "ENABLE_SOLUM=no", "ENABLE_TACKER=no", "ENABLE_TROVE=no", "ENABLE_WATCHER=no", "ENABLE_ZAQAR=no", "ENABLE_ZUN=no"], "image": "192.168.24.1:8787/rhosp14/openstack-horizon:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/horizon/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/horizon:/var/log/horizon", "/var/log/containers/httpd/horizon:/var/log/httpd", "/var/www/:/var/www/:ro", "", ""]}, "iscsid": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-20.1", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", 
"/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro", "/dev/:/dev/", "/run/:/run/", "/sys:/sys", "/lib/modules:/lib/modules:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro"]}, "keystone": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro", "", ""]}, "keystone_bootstrap": {"action": "exec", "command": ["keystone", "/usr/bin/bootstrap_host_exec", "keystone", "keystone-manage", "bootstrap", "--bootstrap-password", "wIdMrXYZVQy05wYJArw8Vja2H"], "start_order": 3, "user": "root"}, "keystone_cron": {"command": ["/bin/bash", "-c", "/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n"], "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "start_order": 4, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro"]}, "keystone_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "keystone", "/usr/local/bin/kolla_start"], "detach": false, "environment": ["KOLLA_BOOTSTRAP=True", "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-20.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", 
"/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro", "", ""]}, "neutron_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "neutron_api", "neutron-db-manage", "upgrade", "heads"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-20.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd", "/var/lib/config-data/neutron/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/neutron/etc/neutron:/etc/neutron:ro", "/var/lib/config-data/neutron/usr/share/neutron:/usr/share/neutron:ro"]}, "neutron_ovs_bridge": {"command": ["puppet", "apply", "--modulepath", "/etc/puppet/modules:/usr/share/openstack-puppet/modules", "--tags", "file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config", "-v", "-e", "include neutron::agents::ml2::ovs"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-20.1", "net": "host", "pid": "host", "privileged": true, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/etc/puppet:/etc/puppet:ro", "/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro", "/var/run/openvswitch/:/var/run/openvswitch/"]}, "nova_api_db_sync": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage api_db sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", 
"/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_api_ensure_default_cell": {"command": "/usr/bin/bootstrap_host_exec nova_api /nova_api_ensure_default_cell.sh", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/docker-config-scripts/nova_api_ensure_default_cell.sh:/nova_api_ensure_default_cell.sh:ro"]}, "nova_api_map_cell0": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage cell_v2 map_cell0'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_db_sync": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage db sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1", "net": "host", "start_order": 3, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", 
"/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_placement": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-09-20.1", "net": "host", "restart": "always", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-placement:/var/log/httpd", "/var/lib/kolla/config_files/nova_placement.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_placement/:/var/lib/kolla/config_files/src:ro", "", ""]}, "panko_db_sync": {"command": "/usr/bin/bootstrap_host_exec panko_api su panko -s /bin/bash -c '/usr/bin/panko-dbsync '", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-09-20.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd", "/var/lib/config-data/panko/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/panko/etc/panko:/etc/panko:ro"]}, "sahara_db_sync": {"command": "/usr/bin/bootstrap_host_exec sahara_api su sahara -s /bin/bash -c 'sahara-db-manage --config-file /etc/sahara/sahara.conf upgrade head'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-09-20.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/sahara/etc/sahara/:/etc/sahara/:ro", "/lib/modules:/lib/modules:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "swift_copy_rings": {"command": ["/bin/bash", "-c", "cp -v -a -t /etc/swift /swift_ringbuilder/etc/swift/*.gz /swift_ringbuilder/etc/swift/*.builder /swift_ringbuilder/etc/swift/backups"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-20.1", "user": "root", "volumes": 
["/var/lib/config-data/puppet-generated/swift/etc/swift:/etc/swift:rw", "/var/lib/config-data/swift_ringbuilder:/swift_ringbuilder:ro"]}, "swift_rsync_fix": {"command": ["/bin/bash", "-c", "sed -i \"/pid file/d\" /var/lib/kolla/config_files/src/etc/rsyncd.conf"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-20.1", "net": "host", "user": "root", "volumes": ["/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:rw"]}, "swift_setup_srv": {"command": ["chown", "-R", "swift:", "/srv/node"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-20.1", "user": "root", "volumes": ["/srv/node:/srv/node"]}}}, "md5sum": "6192631e31f8433b59a15d954f399330", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 22191, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532431.35-218947307653849/source", "state": "file", "uid": 0} >changed: [ceph-0] => (item={'value': {}, 'key': u'step_2'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_2.json", "gid": 0, "group": "root", "item": {"key": "step_2", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532431.8-117440688245770/source", "state": "file", "uid": 0} >changed: [compute-0] => (item={'value': {}, 'key': u'step_2'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_2.json", "gid": 0, "group": "root", "item": {"key": "step_2", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532431.84-3220440476837/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'gnocchi_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'chown -R gnocchi:gnocchi /var/log/gnocchi'], 'user': u'root', 'volumes': [u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/var/log/containers/httpd/gnocchi-api:/var/log/httpd']}, 'mysql_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1537531337'], 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,galera_ready,mysql_database,mysql_grant,mysql_user', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::mysql_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', 
u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/mysql:/var/lib/mysql:rw'], 'net': u'host', 'detach': False}, 'gnocchi_init_lib': {'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'chown -R gnocchi:gnocchi /var/lib/gnocchi'], 'user': u'root', 'volumes': [u'/var/lib/gnocchi:/var/lib/gnocchi']}, 'cinder_api_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'privileged': False, 'volumes': [u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd'], 'user': u'root'}, 'create_dnsmasq_wrapper': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-09-20.1', 'pid': u'host', 'command': [u'/docker_puppet_apply.sh', u'4', u'file', u'include ::tripleo::profile::base::neutron::dhcp_agent_wrappers'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron'], 'net': u'host', 'detach': False}, 'panko_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-panko-api:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'chown -R panko:panko /var/log/panko'], 'user': u'root', 'volumes': [u'/var/log/containers/panko:/var/log/panko', u'/var/log/containers/httpd/panko-api:/var/log/httpd']}, 'redis_init_bundle': {'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-redis:2018-09-20.1', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1537531337'], 'config_volume': u'redis_init_bundle', 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::redis_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False}, 'cinder_scheduler_init_logs': {'image': 
u'192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'privileged': False, 'volumes': [u'/var/log/containers/cinder:/var/log/cinder'], 'user': u'root'}, 'glance_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-glance-api:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'chown -R glance:glance /var/log/glance'], 'privileged': False, 'volumes': [u'/var/log/containers/glance:/var/log/glance'], 'user': u'root'}, 'clustercheck': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/clustercheck/:/var/lib/kolla/config_files/src:ro', u'/var/lib/mysql:/var/lib/mysql'], 'net': u'host', 'restart': u'always'}, 'haproxy_init_bundle': {'start_order': 3, 'image': u'192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-20.1', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1537531337'], 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,tripleo::firewall::rule,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ip,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation', u'include ::tripleo::profile::base::pacemaker; include ::tripleo::profile::pacemaker::haproxy_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro', u'/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro', u'/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro', u'/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro', u'/etc/sysconfig:/etc/sysconfig:rw', u'/usr/libexec/iptables:/usr/libexec/iptables:ro', u'/usr/libexec/initscripts/legacy-actions:/usr/libexec/initscripts/legacy-actions:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False, 'privileged': True}, 'neutron_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'chown -R neutron:neutron /var/log/neutron'], 'privileged': False, 'volumes': [u'/var/log/containers/neutron:/var/log/neutron', 
u'/var/log/containers/httpd/neutron-api:/var/log/httpd'], 'user': u'root'}, 'mysql_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1', 'config_volume': u'mysql', 'command': [u'/usr/bin/bootstrap_host_exec', u'mysql', u'if /usr/sbin/pcs resource show galera-bundle; then /usr/sbin/pcs resource restart --wait=600 galera-bundle; echo "galera-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'rabbitmq_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-20.1', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1537531337'], 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,rabbitmq_policy,rabbitmq_user,rabbitmq_ready', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::rabbitmq_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/bin/true:/bin/epmd'], 'net': u'host', 'detach': False}, 'nova_api_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'chown -R nova:nova /var/log/nova'], 'privileged': False, 'volumes': [u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd'], 'user': u'root'}, 'haproxy_restart_bundle': {'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-20.1', 'config_volume': u'haproxy', 'command': [u'/usr/bin/bootstrap_host_exec', u'haproxy', u'if /usr/sbin/pcs resource show haproxy-bundle; then /usr/sbin/pcs resource restart --wait=600 haproxy-bundle; echo "haproxy-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/haproxy/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'create_keepalived_wrapper': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-09-20.1', 'pid': u'host', 'command': [u'/docker_puppet_apply.sh', u'4', u'file', u'include ::tripleo::profile::base::neutron::l3_agent_wrappers'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron'], 'net': u'host', 'detach': False}, 'rabbitmq_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-20.1', 'config_volume': u'rabbitmq', 'command': [u'/usr/bin/bootstrap_host_exec', u'rabbitmq', u'if /usr/sbin/pcs resource show rabbitmq-bundle; then /usr/sbin/pcs resource restart --wait=600 rabbitmq-bundle; echo "rabbitmq-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'horizon_fix_perms': {'image': u'192.168.24.1:8787/rhosp14/openstack-horizon:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'touch /var/log/horizon/horizon.log && chown -R apache:apache /var/log/horizon && chmod -R a+rx /etc/openstack-dashboard'], 'user': u'root', 'volumes': [u'/var/log/containers/horizon:/var/log/horizon', u'/var/log/containers/httpd/horizon:/var/log/httpd', u'/var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard:/etc/openstack-dashboard']}, 'aodh_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'chown -R aodh:aodh /var/log/aodh'], 'user': u'root', 'volumes': [u'/var/log/containers/aodh:/var/log/aodh', u'/var/log/containers/httpd/aodh-api:/var/log/httpd']}, 'nova_metadata_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'chown -R nova:nova /var/log/nova'], 
'privileged': False, 'volumes': [u'/var/log/containers/nova:/var/log/nova'], 'user': u'root'}, 'redis_restart_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-redis:2018-09-20.1', 'config_volume': u'redis', 'command': [u'/usr/bin/bootstrap_host_exec', u'redis', u'if /usr/sbin/pcs resource show redis-bundle; then /usr/sbin/pcs resource restart --wait=600 redis-bundle; echo "redis-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/redis/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'heat_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'chown -R heat:heat /var/log/heat'], 'user': u'root', 'volumes': [u'/var/log/containers/heat:/var/log/heat']}, 'nova_placement_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'chown -R nova:nova /var/log/nova'], 'start_order': 1, 'volumes': [u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-placement:/var/log/httpd'], 'user': u'root'}, 'keystone_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'chown -R keystone:keystone /var/log/keystone'], 'start_order': 1, 'volumes': [u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd'], 'user': u'root'}}, 'key': u'step_2'}) => {"changed": true, "checksum": "1984a480e43cf6fc07808749297c48cff5113c8d", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_2.json", "gid": 0, "group": "root", "item": {"key": "step_2", "value": {"aodh_init_log": {"command": ["/bin/bash", "-c", "chown -R aodh:aodh /var/log/aodh"], "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-09-20.1", "user": "root", "volumes": ["/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd"]}, "cinder_api_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-20.1", "privileged": false, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_scheduler_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-09-20.1", "privileged": false, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "clustercheck": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1", "net": "host", "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/clustercheck/:/var/lib/kolla/config_files/src:ro", "/var/lib/mysql:/var/lib/mysql"]}, "create_dnsmasq_wrapper": {"command": ["/docker_puppet_apply.sh", "4", "file", "include ::tripleo::profile::base::neutron::dhcp_agent_wrappers"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-09-20.1", "net": "host", "pid": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron"]}, "create_keepalived_wrapper": {"command": ["/docker_puppet_apply.sh", "4", "file", "include ::tripleo::profile::base::neutron::l3_agent_wrappers"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-09-20.1", "net": "host", "pid": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron"]}, "glance_init_logs": {"command": ["/bin/bash", "-c", "chown -R glance:glance /var/log/glance"], "image": "192.168.24.1:8787/rhosp14/openstack-glance-api:2018-09-20.1", "privileged": false, "user": "root", "volumes": ["/var/log/containers/glance:/var/log/glance"]}, "gnocchi_init_lib": {"command": ["/bin/bash", "-c", "chown -R gnocchi:gnocchi /var/lib/gnocchi"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-20.1", "user": "root", "volumes": ["/var/lib/gnocchi:/var/lib/gnocchi"]}, "gnocchi_init_log": {"command": ["/bin/bash", "-c", "chown -R gnocchi:gnocchi /var/log/gnocchi"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-20.1", "user": "root", "volumes": ["/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd"]}, "haproxy_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", 
"file,file_line,concat,augeas,tripleo::firewall::rule,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ip,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation", "include ::tripleo::profile::base::pacemaker; include ::tripleo::profile::pacemaker::haproxy_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1537531337"], "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-20.1", "net": "host", "privileged": true, "start_order": 3, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro", "/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro", "/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro", "/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro", "/etc/sysconfig:/etc/sysconfig:rw", "/usr/libexec/iptables:/usr/libexec/iptables:ro", "/usr/libexec/initscripts/legacy-actions:/usr/libexec/initscripts/legacy-actions:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "haproxy_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "haproxy", "if /usr/sbin/pcs resource show haproxy-bundle; then /usr/sbin/pcs resource restart --wait=600 haproxy-bundle; echo \"haproxy-bundle restart invoked\"; fi"], "config_volume": "haproxy", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-20.1", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/haproxy/:/var/lib/kolla/config_files/src:ro"]}, "heat_init_log": {"command": ["/bin/bash", "-c", "chown -R heat:heat /var/log/heat"], "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-09-20.1", "user": "root", "volumes": ["/var/log/containers/heat:/var/log/heat"]}, "horizon_fix_perms": {"command": ["/bin/bash", "-c", "touch /var/log/horizon/horizon.log && chown -R apache:apache /var/log/horizon && chmod -R a+rx /etc/openstack-dashboard"], "image": "192.168.24.1:8787/rhosp14/openstack-horizon:2018-09-20.1", "user": "root", "volumes": ["/var/log/containers/horizon:/var/log/horizon", "/var/log/containers/httpd/horizon:/var/log/httpd", "/var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard:/etc/openstack-dashboard"]}, "keystone_init_log": {"command": ["/bin/bash", "-c", "chown 
-R keystone:keystone /var/log/keystone"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-20.1", "start_order": 1, "user": "root", "volumes": ["/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd"]}, "mysql_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,galera_ready,mysql_database,mysql_grant,mysql_user", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::mysql_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1537531337"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/mysql:/var/lib/mysql:rw"]}, "mysql_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "mysql", "if /usr/sbin/pcs resource show galera-bundle; then /usr/sbin/pcs resource restart --wait=600 galera-bundle; echo \"galera-bundle restart invoked\"; fi"], "config_volume": "mysql", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro"]}, "neutron_init_logs": {"command": ["/bin/bash", "-c", "chown -R neutron:neutron /var/log/neutron"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-20.1", "privileged": false, "user": "root", "volumes": ["/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd"]}, "nova_api_init_logs": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1", "privileged": false, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd"]}, "nova_metadata_init_log": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1", "privileged": false, "user": 
"root", "volumes": ["/var/log/containers/nova:/var/log/nova"]}, "nova_placement_init_log": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-09-20.1", "start_order": 1, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-placement:/var/log/httpd"]}, "panko_init_log": {"command": ["/bin/bash", "-c", "chown -R panko:panko /var/log/panko"], "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-09-20.1", "user": "root", "volumes": ["/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd"]}, "rabbitmq_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,rabbitmq_policy,rabbitmq_user,rabbitmq_ready", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::rabbitmq_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1537531337"], "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-20.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/bin/true:/bin/epmd"]}, "rabbitmq_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "rabbitmq", "if /usr/sbin/pcs resource show rabbitmq-bundle; then /usr/sbin/pcs resource restart --wait=600 rabbitmq-bundle; echo \"rabbitmq-bundle restart invoked\"; fi"], "config_volume": "rabbitmq", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-20.1", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro"]}, "redis_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::redis_bundle", "--debug"], "config_volume": "redis_init_bundle", "detach": false, "environment": 
["TRIPLEO_DEPLOY_IDENTIFIER=1537531337"], "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-09-20.1", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "redis_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "redis", "if /usr/sbin/pcs resource show redis-bundle; then /usr/sbin/pcs resource restart --wait=600 redis-bundle; echo \"redis-bundle restart invoked\"; fi"], "config_volume": "redis", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-09-20.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/redis/:/var/lib/kolla/config_files/src:ro"]}}}, "md5sum": "e07c5bee30bffd1f52130784290f933b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 17331, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532431.91-106701342051688/source", "state": "file", "uid": 0} >changed: [ceph-0] => (item={'value': {}, 'key': u'step_5'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_5.json", "gid": 0, "group": "root", "item": {"key": "step_5", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532432.3-150598262764883/source", "state": "file", "uid": 0} >changed: [compute-0] => (item={'value': {}, 'key': u'step_5'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_5.json", "gid": 0, "group": "root", "item": {"key": "step_5", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532432.38-38684444704522/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'cinder_volume_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-20.1', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1537531337'], 'command': 
[u'/docker_puppet_apply.sh', u'5', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::volume_bundle', u'--debug --verbose'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False}, 'gnocchi_api': {'start_order': 1, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/gnocchi:/var/lib/gnocchi', u'/var/lib/kolla/config_files/gnocchi_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/var/log/containers/httpd/gnocchi-api:/var/log/httpd', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'gnocchi_statsd': {'start_order': 1, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-statsd:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/gnocchi_statsd.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/gnocchi:/var/lib/gnocchi'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'cinder_backup_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-20.1', 'config_volume': u'cinder', 
'command': [u'/usr/bin/bootstrap_host_exec', u'cinder_backup', u'if /usr/sbin/pcs resource show openstack-cinder-backup; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-backup; echo "openstack-cinder-backup restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'gnocchi_metricd': {'start_order': 1, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-metricd:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/gnocchi_metricd.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/gnocchi:/var/lib/gnocchi'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_api_discover_hosts': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1537531337'], 'command': u'/usr/bin/bootstrap_host_exec nova_api /nova_api_discover_hosts.sh', 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/docker-config-scripts/nova_api_discover_hosts.sh:/nova_api_discover_hosts.sh:ro'], 'net': u'host', 'detach': False}, 'ceilometer_gnocchi_upgrade': {'start_order': 99, 
'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-20.1', 'command': [u'/usr/bin/bootstrap_host_exec', u'ceilometer_agent_central', u"su ceilometer -s /bin/bash -c 'for n in {1..10}; do /usr/bin/ceilometer-upgrade --skip-metering-database && exit 0 || sleep 30; done; exit 1'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/ceilometer/etc/ceilometer/:/etc/ceilometer/:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'detach': False, 'privileged': False}, 'cinder_volume_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-20.1', 'config_volume': u'cinder', 'command': [u'/usr/bin/bootstrap_host_exec', u'cinder_volume', u'if /usr/sbin/pcs resource show openstack-cinder-volume; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-volume; echo "openstack-cinder-volume restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'cinder_backup_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-20.1', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1537531337'], 'command': [u'/docker_puppet_apply.sh', u'5', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::backup_bundle', u'--debug --verbose'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False}, 'gnocchi_db_sync': {'start_order': 0, 'image': 
u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/gnocchi_db_sync.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/lib/gnocchi:/var/lib/gnocchi', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/var/log/containers/httpd/gnocchi-api:/var/log/httpd', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro'], 'net': u'host', 'detach': False, 'privileged': False}}, 'key': u'step_5'}) => {"changed": true, "checksum": "f306f176cdf6b2f223f21ac074856e655d8e6f4d", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_5.json", "gid": 0, "group": "root", "item": {"key": "step_5", "value": {"ceilometer_gnocchi_upgrade": {"command": ["/usr/bin/bootstrap_host_exec", "ceilometer_agent_central", "su ceilometer -s /bin/bash -c 'for n in {1..10}; do /usr/bin/ceilometer-upgrade --skip-metering-database && exit 0 || sleep 30; done; exit 1'"], "detach": false, "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-20.1", "net": "host", "privileged": false, "start_order": 99, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/ceilometer/etc/ceilometer/:/etc/ceilometer/:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_backup_init_bundle": {"command": ["/docker_puppet_apply.sh", "5", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::backup_bundle", "--debug --verbose"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1537531337"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-20.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", 
"/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "cinder_backup_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_backup", "if /usr/sbin/pcs resource show openstack-cinder-backup; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-backup; echo \"openstack-cinder-backup restart invoked\"; fi"], "config_volume": "cinder", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-20.1", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro"]}, "cinder_volume_init_bundle": {"command": ["/docker_puppet_apply.sh", "5", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::volume_bundle", "--debug --verbose"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1537531337"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-20.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "cinder_volume_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_volume", "if /usr/sbin/pcs resource show openstack-cinder-volume; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-volume; echo \"openstack-cinder-volume restart invoked\"; fi"], "config_volume": "cinder", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-20.1", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", 
"/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro"]}, "gnocchi_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/gnocchi:/var/lib/gnocchi", "/var/lib/kolla/config_files/gnocchi_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "", ""]}, "gnocchi_db_sync": {"detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-20.1", "net": "host", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_db_sync.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/lib/gnocchi:/var/lib/gnocchi", "/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro"]}, "gnocchi_metricd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-metricd:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_metricd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", 
"/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/gnocchi:/var/lib/gnocchi"]}, "gnocchi_statsd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-statsd:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_statsd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/gnocchi:/var/lib/gnocchi"]}, "nova_api_discover_hosts": {"command": "/usr/bin/bootstrap_host_exec nova_api /nova_api_discover_hosts.sh", "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1537531337"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/docker-config-scripts/nova_api_discover_hosts.sh:/nova_api_discover_hosts.sh:ro"]}}}, "md5sum": "352930c1472012ffea18a830900053d2", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 11741, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532432.44-4565198045217/source", "state": "file", "uid": 0} >changed: [ceph-0] => (item={'value': {'logrotate_crond': {'image': u'192.168.24.1:8787/rhosp14/openstack-cron:2018-09-20.1', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers:/var/log/containers'], 'net': u'none', 'privileged': True, 'restart': u'always'}}, 'key': u'step_4'}) => {"changed": true, "checksum": "c3c59a2a87c07d2426869d6d9494bdaa9a72dd38", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_4.json", "gid": 0, "group": "root", "item": {"key": "step_4", "value": {"logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-09-20.1", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}}}, "md5sum": "db46678973339100589a669426e1f1c5", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 973, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532432.79-141392185875216/source", "state": "file", "uid": 0} >changed: [compute-0] => (item={'value': {'ceilometer_agent_compute': {'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-compute:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro', u'/var/run/libvirt:/var/run/libvirt:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_libvirt_init_secret': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u"/usr/bin/virsh secret-define --file /etc/nova/secret.xml && /usr/bin/virsh secret-set-value --secret '8fedf068-bd95-11e8-ba69-5254006eda59' --base64 'AQC93KRbAAAAABAA70vXmXELJWdqPtg4IeQHzw=='"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova:ro', u'/etc/libvirt:/etc/libvirt', u'/var/run/libvirt:/var/run/libvirt', u'/var/lib/libvirt:/var/lib/libvirt'], 'detach': False, 'privileged': False}, 'neutron_ovs_agent': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-09-20.1', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_migration_target': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/etc/ssh/:/host-ssh/:ro', u'/run:/run', u'/var/lib/nova:/var/lib/nova:shared'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_compute': {'ipc': u'host', 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'nova', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', 
u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/dev:/dev', u'/lib/modules:/lib/modules:ro', u'/run:/run', u'/var/lib/nova:/var/lib/nova:shared', u'/var/lib/libvirt:/var/lib/libvirt', u'/sys/class/net:/sys/class/net', u'/sys/bus/pci:/sys/bus/pci'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'logrotate_crond': {'image': u'192.168.24.1:8787/rhosp14/openstack-cron:2018-09-20.1', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers:/var/log/containers'], 'net': u'none', 'privileged': True, 'restart': u'always'}}, 'key': u'step_4'}) => {"changed": true, "checksum": "3c0a498ba741526669d2c27503f3804fa665db25", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_4.json", "gid": 0, "group": "root", "item": {"key": "step_4", "value": {"ceilometer_agent_compute": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-compute:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/run/libvirt:/var/run/libvirt:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-09-20.1", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", 
"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}, "neutron_ovs_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-09-20.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch"]}, "nova_compute": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-09-20.1", "ipc": "host", "net": "host", "privileged": true, "restart": "always", "ulimit": ["nofile=1024"], "user": "nova", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/dev:/dev", "/lib/modules:/lib/modules:ro", "/run:/run", "/var/lib/nova:/var/lib/nova:shared", "/var/lib/libvirt:/var/lib/libvirt", "/sys/class/net:/sys/class/net", "/sys/bus/pci:/sys/bus/pci"]}, "nova_libvirt_init_secret": {"command": ["/bin/bash", "-c", "/usr/bin/virsh secret-define --file /etc/nova/secret.xml && /usr/bin/virsh secret-set-value --secret '8fedf068-bd95-11e8-ba69-5254006eda59' --base64 'AQC93KRbAAAAABAA70vXmXELJWdqPtg4IeQHzw=='"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-09-20.1", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova:ro", "/etc/libvirt:/etc/libvirt", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt"]}, "nova_migration_target": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-09-20.1", "net": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/ssh/:/host-ssh/:ro", "/run:/run", "/var/lib/nova:/var/lib/nova:shared"]}}}, "md5sum": "96bf74c5bae2e001bf8e051536e23c9e", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 6779, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532432.89-169709040709600/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'swift_container_updater': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_updater.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'aodh_evaluator': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-evaluator:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_evaluator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_scheduler': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-scheduler:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro', u'/run:/run'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_object_server': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'cinder_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/cinder:/var/log/cinder', 
u'/var/log/containers/httpd/cinder-api:/var/log/httpd', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_proxy': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/run:/run', u'/srv/node:/srv/node', u'/dev:/dev'], 'net': u'host', 'restart': u'always'}, 'neutron_dhcp': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-09-20.1', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron', u'/run/netns:/run/netns:shared', u'/var/lib/openstack:/var/lib/openstack', u'/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', u'/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'heat_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/log/containers/httpd/heat-api:/var/log/httpd', u'/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', 
u'/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_object_auditor': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_auditor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'neutron_metadata_agent': {'start_order': 10, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-metadata-agent:2018-09-20.1', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/var/lib/neutron:/var/lib/neutron'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'ceilometer_agent_central': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/ceilometer_agent_central.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'keystone_refresh': {'action': u'exec', 'start_order': 1, 'command': [u'keystone', u'pkill', u'--signal', u'USR1', u'httpd'], 'user': u'root'}, 'swift_account_replicator': 
{'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_replicator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'aodh_notifier': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-notifier:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_notifier.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_api_cron': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_consoleauth': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-consoleauth:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_consoleauth.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'glance_api': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-glance-api:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/glance:/var/log/glance', u'/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/glance:/var/lib/glance:slave'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_account_reaper': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_reaper.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'ceilometer_agent_notification': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/ceilometer_agent_notification.json:/var/lib/kolla/config_files/config.json:ro', 
u'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro', u'/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src-panko:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_vnc_proxy': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-novncproxy:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_rsync': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_rsync.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_api': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'aodh_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-09-20.1', 'environment': 
[u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh', u'/var/log/containers/httpd/aodh-api:/var/log/httpd', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_metadata': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'nova', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'heat_engine': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_container_server': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'swift_object_replicator': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_replicator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'neutron_l3_agent': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-09-20.1', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_l3_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron', u'/run/netns:/run/netns:shared', u'/var/lib/openstack:/var/lib/openstack', u'/var/lib/neutron/keepalived_wrapper:/usr/local/bin/keepalived:ro', u'/var/lib/neutron/l3_haproxy_wrapper:/usr/local/bin/haproxy:ro', u'/var/lib/neutron/dibbler_wrapper:/usr/local/bin/dibbler_client:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'cinder_scheduler': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/cinder:/var/log/cinder'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_conductor': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-conductor:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_account_server': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'sahara_api': {'image': u'192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/sahara-api.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/var/lib/sahara:/var/lib/sahara', 
u'/var/log/containers/sahara:/var/log/sahara'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'sahara_engine': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-sahara-engine:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/sahara-engine.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro', u'/var/lib/sahara:/var/lib/sahara', u'/var/log/containers/sahara:/var/log/sahara'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'neutron_ovs_agent': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-09-20.1', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'cinder_api_cron': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_account_auditor': {'image': 
u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_auditor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'swift_container_replicator': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_replicator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'swift_object_updater': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_updater.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'swift_object_expirer': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_expirer.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'heat_api_cron': {'image': u'192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/log/containers/httpd/heat-api:/var/log/httpd', u'/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_container_auditor': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_auditor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'panko_api': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-panko-api:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/panko:/var/log/panko', u'/var/log/containers/httpd/panko-api:/var/log/httpd', u'/var/lib/kolla/config_files/panko_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': 
u'host', 'privileged': False, 'restart': u'always'}, 'aodh_listener': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-listener:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_listener.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'neutron_api': {'start_order': 0, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/log/containers/httpd/neutron-api:/var/log/httpd', u'/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'heat_api_cfn': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/log/containers/httpd/heat-api-cfn:/var/log/httpd', u'/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat_api_cfn/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'logrotate_crond': {'image': u'192.168.24.1:8787/rhosp14/openstack-cron:2018-09-20.1', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', 
u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers:/var/log/containers'], 'net': u'none', 'privileged': True, 'restart': u'always'}}, 'key': u'step_4'}) => {"changed": true, "checksum": "bd1487a8ba8b84f37f1b14ab8f2c62a247a02016", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_4.json", "gid": 0, "group": "root", "item": {"key": "step_4", "value": {"aodh_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd", "", ""]}, "aodh_evaluator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-evaluator:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_evaluator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "aodh_listener": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-listener:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_listener.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "aodh_notifier": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-notifier:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_notifier.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "ceilometer_agent_central": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_central.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "ceilometer_agent_notification": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_notification.json:/var/lib/kolla/config_files/config.json:ro", 
"/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src-panko:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd", "", ""]}, "cinder_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_scheduler": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder"]}, "glance_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-glance-api:2018-09-20.1", "net": "host", 
"privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/glance:/var/log/glance", "/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/glance:/var/lib/glance:slave"]}, "heat_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api:/var/log/httpd", "/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro", "", ""]}, "heat_api_cfn": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api-cfn:/var/log/httpd", "/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api_cfn/:/var/lib/kolla/config_files/src:ro", "", ""]}, "heat_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api:/var/log/httpd", "/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro"]}, "heat_engine": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat/:/var/lib/kolla/config_files/src:ro"]}, "keystone_refresh": {"action": "exec", "command": ["keystone", "pkill", "--signal", "USR1", "httpd"], "start_order": 1, "user": "root"}, "logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-09-20.1", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}, "neutron_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "start_order": 0, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd", 
"/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro"]}, "neutron_dhcp": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-09-20.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron", "/run/netns:/run/netns:shared", "/var/lib/openstack:/var/lib/openstack", "/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro", "/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro"]}, "neutron_l3_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-09-20.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_l3_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron", "/run/netns:/run/netns:shared", "/var/lib/openstack:/var/lib/openstack", "/var/lib/neutron/keepalived_wrapper:/usr/local/bin/keepalived:ro", "/var/lib/neutron/l3_haproxy_wrapper:/usr/local/bin/haproxy:ro", "/var/lib/neutron/dibbler_wrapper:/usr/local/bin/dibbler_client:ro"]}, "neutron_metadata_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-metadata-agent:2018-09-20.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/var/lib/neutron:/var/lib/neutron"]}, "neutron_ovs_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-09-20.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch"]}, "nova_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro", "", ""]}, "nova_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", 
"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_conductor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-conductor:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_consoleauth": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-consoleauth:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_consoleauth.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_metadata": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "user": "nova", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_scheduler": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": 
{"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-scheduler:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro", "/run:/run"]}, "nova_vnc_proxy": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-novncproxy:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "panko_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd", "/var/lib/kolla/config_files/panko_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src:ro", "", ""]}, "sahara_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/sahara-api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "sahara_engine": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-sahara-engine:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/sahara-engine.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "swift_account_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-20.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_reaper": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-20.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_reaper.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, 
"swift_account_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-20.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-20.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-09-20.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-09-20.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-09-20.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_updater": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-09-20.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_updater.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-20.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, 
"swift_object_expirer": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-20.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_expirer.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-20.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-20.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_updater": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-20.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_updater.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_proxy": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-20.1", "net": "host", "restart": "always", "start_order": 2, "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/run:/run", "/srv/node:/srv/node", "/dev:/dev"]}, "swift_rsync": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-20.1", "net": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_rsync.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev"]}}}, "md5sum": "e175e4a2e371b3752d91bb503b210d2c", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 47273, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532432.99-132816842868728/source", "state": "file", "uid": 0} >changed: [ceph-0] => (item={'value': {}, 'key': u'step_6'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_6.json", "gid": 0, "group": "root", "item": {"key": "step_6", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532433.28-86461786473497/source", "state": "file", "uid": 0} >changed: [compute-0] => (item={'value': {}, 'key': u'step_6'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_6.json", "gid": 0, "group": "root", "item": {"key": "step_6", "value": 
{}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532433.42-277726157270342/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {}, 'key': u'step_6'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_6.json", "gid": 0, "group": "root", "item": {"key": "step_6", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532433.56-238592089947737/source", "state": "file", "uid": 0} > >TASK [Create /var/lib/kolla/config_files directory] **************************** >Friday 21 September 2018 08:20:34 -0400 (0:00:03.669) 0:03:56.860 ****** >changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/kolla/config_files", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 6, "state": "directory", "uid": 0} >changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/kolla/config_files", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 6, "state": "directory", "uid": 0} >changed: [ceph-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/kolla/config_files", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 6, "state": "directory", "uid": 0} > >TASK [Write kolla config json files] ******************************************* >Friday 21 September 2018 08:20:34 -0400 (0:00:00.337) 0:03:57.197 ****** >changed: [ceph-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -s -n'}, 'key': u'/var/lib/kolla/config_files/logrotate-crond.json'}) => {"changed": true, "checksum": "4c92019f9e75a1d5fd8ed0c534a1e2e37545fd52", "dest": "/var/lib/kolla/config_files/logrotate-crond.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/logrotate-crond.json", "value": {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "4e44fe0987e7b03113435c6eed7ea3b5", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 160, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532434.83-257002357929903/source", "state": "file", "uid": 0} >changed: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -s -n'}, 'key': u'/var/lib/kolla/config_files/logrotate-crond.json'}) => {"changed": true, "checksum": "4c92019f9e75a1d5fd8ed0c534a1e2e37545fd52", "dest": "/var/lib/kolla/config_files/logrotate-crond.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/logrotate-crond.json", "value": {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "4e44fe0987e7b03113435c6eed7ea3b5", "mode": "0600", "owner": "root", "secontext": 
"system_u:object_r:var_lib_t:s0", "size": 160, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532434.82-114677916320344/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -s -n'}, 'key': u'/var/lib/kolla/config_files/logrotate-crond.json'}) => {"changed": true, "checksum": "4c92019f9e75a1d5fd8ed0c534a1e2e37545fd52", "dest": "/var/lib/kolla/config_files/logrotate-crond.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/logrotate-crond.json", "value": {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "4e44fe0987e7b03113435c6eed7ea3b5", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 160, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532434.96-168119418760164/source", "state": "file", "uid": 0} >changed: [compute-0] => (item={'value': {'config_files': [{'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/sbin/iscsid -f'}, 'key': u'/var/lib/kolla/config_files/iscsid.json'}) => {"changed": true, "checksum": "40f9ceb4dd2fc8e9c51bf5152a0fa8e1d16d9137", "dest": "/var/lib/kolla/config_files/iscsid.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/iscsid.json", "value": {"command": "/usr/sbin/iscsid -f", "config_files": [{"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}]}}, "md5sum": "9cd3c2dc0153b127d70141dadfabd12c", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 175, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532435.36-4001950467728/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': u'/var/lib/kolla/config_files/keystone.json'}) => {"changed": true, "checksum": "8dec7e00a25c01fc0483b06f5e3d31c64b93ec3e", "dest": "/var/lib/kolla/config_files/keystone.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/keystone.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "1af9170c02e7b1819b37b8d71e67dff0", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 167, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532435.47-167253362077681/source", "state": "file", "uid": 0} >changed: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/sbin/libvirtd', 'permissions': [{'owner': u'nova:nova', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': 
u'/var/lib/kolla/config_files/nova_libvirt.json'}) => {"changed": true, "checksum": "b50cbe1f8b020aa49249248b57310f45005813b3", "dest": "/var/lib/kolla/config_files/nova_libvirt.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_libvirt.json", "value": {"command": "/usr/sbin/libvirtd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "nova:nova", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "md5sum": "8356787bbcfcb5674a0bf2570719654a", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 512, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532435.89-18208752403013/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}, {'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/bin/cinder-backup --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/lib/cinder', 'recurse': True}, {'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/cinder_backup.json'}) => {"changed": true, "checksum": "0e697e31bdc439b99552bac9ffe0bab07f2af4a4", "dest": "/var/lib/kolla/config_files/cinder_backup.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/cinder_backup.json", "value": {"command": "/usr/bin/cinder-backup --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/lib/cinder", "recurse": true}, {"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "md5sum": "8e107eb8f6989be8375a0ff2dd5b4d57", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 651, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532435.97-227006068110216/source", "state": "file", "uid": 0} >changed: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ssh/', 'owner': u'root', 'perm': u'0600', 'source': u'/host-ssh/ssh_host_*_key'}], 'command': u'/usr/sbin/sshd -D -p 2022'}, 'key': u'/var/lib/kolla/config_files/nova-migration-target.json'}) => {"changed": true, "checksum": "6a0a936a324363cd605e22c2327c17deb6dfbec2", "dest": "/var/lib/kolla/config_files/nova-migration-target.json", "gid": 0, "group": "root", "item": {"key": 
"/var/lib/kolla/config_files/nova-migration-target.json", "value": {"command": "/usr/sbin/sshd -D -p 2022", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ssh/", "owner": "root", "perm": "0600", "source": "/host-ssh/ssh_host_*_key"}]}}, "md5sum": "161558d57b182ca70c6f9bbd7fcbda8a", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 258, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532436.42-247628702407454/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': u'/var/lib/kolla/config_files/swift_proxy_tls_proxy.json'}) => {"changed": true, "checksum": "8dec7e00a25c01fc0483b06f5e3d31c64b93ec3e", "dest": "/var/lib/kolla/config_files/swift_proxy_tls_proxy.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_proxy_tls_proxy.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "1af9170c02e7b1819b37b8d71e67dff0", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 167, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532436.48-133384552628387/source", "state": "file", "uid": 0} >changed: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/virtlogd --config /etc/libvirt/virtlogd.conf'}, 'key': u'/var/lib/kolla/config_files/nova_virtlogd.json'}) => {"changed": true, "checksum": "8bbfe195e54ddfe481aaad9744174f7344d49681", "dest": "/var/lib/kolla/config_files/nova_virtlogd.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_virtlogd.json", "value": {"command": "/usr/sbin/virtlogd --config /etc/libvirt/virtlogd.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "786b962e2df778e3ce02b185ef93deac", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 193, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532436.95-12449427052266/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-auditor /etc/swift/account-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_account_auditor.json'}) => {"changed": true, "checksum": "413730fbf3f7935085cfda60cbc1535d8bce0caf", "dest": "/var/lib/kolla/config_files/swift_account_auditor.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_account_auditor.json", "value": {"command": "/usr/bin/swift-account-auditor /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "dfccd947a56ceb6fa2b71c400281a365", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 200, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532437.03-236928217534955/source", "state": 
"file", "uid": 0} >changed: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/neutron_ovs_agent_launcher.sh', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/neutron_ovs_agent.json'}) => {"changed": true, "checksum": "bd1c4f0459f65e7f67a969a89c74a8b8cdcfd9f8", "dest": "/var/lib/kolla/config_files/neutron_ovs_agent.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/neutron_ovs_agent.json", "value": {"command": "/neutron_ovs_agent_launcher.sh", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}}, "md5sum": "3599cf6b814b7c628c2887996ca46138", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 261, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532437.47-196673975594022/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-replicator /etc/swift/account-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_account_replicator.json'}) => {"changed": true, "checksum": "2bf5ca66cb377c9fa3e6880f8b078d1312470cde", "dest": "/var/lib/kolla/config_files/swift_account_replicator.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_account_replicator.json", "value": {"command": "/usr/bin/swift-account-replicator /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "d4a857b7e18f40f1cc1e6fd265c89770", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 203, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532437.56-136326782732201/source", "state": "file", "uid": 0} >changed: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/nova-compute ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}, {'owner': u'nova:nova', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/nova_compute.json'}) => {"changed": true, "checksum": "36b137044b0d21045af74db4b85d6847bbd5cdf7", "dest": "/var/lib/kolla/config_files/nova_compute.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_compute.json", "value": {"command": "/usr/bin/nova-compute ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": 
[{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}, {"owner": "nova:nova", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "md5sum": "da9ad479a10bc1d72f762413824e6639", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 577, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532437.99-185979034951727/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/aodh-notifier', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/aodh_notifier.json'}) => {"changed": true, "checksum": "e01d19d7f7cff24dfcc0d132b7d8ceabba199142", "dest": "/var/lib/kolla/config_files/aodh_notifier.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/aodh_notifier.json", "value": {"command": "/usr/bin/aodh-notifier", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "md5sum": "5d4a748030a9a7476ccbd8902fb654fc", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 244, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532438.08-78535758368407/source", "state": "file", "uid": 0} >changed: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /var/log/ceilometer/compute.log'}, 'key': u'/var/lib/kolla/config_files/ceilometer_agent_compute.json'}) => {"changed": true, "checksum": "4b3e97fcd87fd70b35934d1ef908747f302a4d11", "dest": "/var/lib/kolla/config_files/ceilometer_agent_compute.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/ceilometer_agent_compute.json", "value": {"command": "/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /var/log/ceilometer/compute.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "d91832a36a0ad3616a4e78c1af7d0db5", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 237, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532438.5-70817892991166/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-scheduler ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/nova_scheduler.json'}) => {"changed": true, "checksum": "23416bae23a2c08d2c534f76d19f8c4bad40ee92", "dest": "/var/lib/kolla/config_files/nova_scheduler.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_scheduler.json", "value": {"command": "/usr/bin/nova-scheduler ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "md5sum": "d00e4198d95dede3f0b6ac351d57a982", "mode": "0600", "owner": "root", 
"secontext": "system_u:object_r:var_lib_t:s0", "size": 246, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532438.59-172969567714476/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/heat_api_cron.json'}) => {"changed": true, "checksum": "a13a92b47f931e2e89d7e4bf5057a4307ab9cd45", "dest": "/var/lib/kolla/config_files/heat_api_cron.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/heat_api_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "md5sum": "e671c4783cc86fb2ad300fcd11b2f99b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 240, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532439.11-194735534229951/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-dhcp-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/dhcp_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-dhcp-agent --log-file=/var/log/neutron/dhcp-agent.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}, {'owner': u'neutron:neutron', 'path': u'/var/lib/neutron', 'recurse': True}, {'owner': u'neutron:neutron', 'path': u'/etc/pki/tls/certs/neutron.crt'}, {'owner': u'neutron:neutron', 'path': u'/etc/pki/tls/private/neutron.key'}]}, 'key': u'/var/lib/kolla/config_files/neutron_dhcp.json'}) => {"changed": true, "checksum": "da289f102f641cdd0a02df41c443d7d8387741a5", "dest": "/var/lib/kolla/config_files/neutron_dhcp.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/neutron_dhcp.json", "value": {"command": "/usr/bin/neutron-dhcp-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/dhcp_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-dhcp-agent --log-file=/var/log/neutron/dhcp-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/etc/pki/tls/certs/neutron.crt"}, {"owner": "neutron:neutron", "path": "/etc/pki/tls/private/neutron.key"}]}}, "md5sum": "c5975567082648a9da814c433c49f2d6", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 875, "src": 
"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532439.6-274941885182416/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg', 'permissions': [{'owner': u'haproxy:haproxy', 'path': u'/var/lib/haproxy', 'recurse': True}, {'owner': u'haproxy:haproxy', 'path': u'/etc/pki/tls/certs/haproxy/*', 'optional': True, 'perm': u'0600'}, {'owner': u'haproxy:haproxy', 'path': u'/etc/pki/tls/private/haproxy/*', 'optional': True, 'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/haproxy.json'}) => {"changed": true, "checksum": "0801385cb9292b3b6eb8440166435242bd90e288", "dest": "/var/lib/kolla/config_files/haproxy.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/haproxy.json", "value": {"command": "/usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg", "config_files": [{"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "haproxy:haproxy", "path": "/var/lib/haproxy", "recurse": true}, {"optional": true, "owner": "haproxy:haproxy", "path": "/etc/pki/tls/certs/haproxy/*", "perm": "0600"}, {"optional": true, "owner": "haproxy:haproxy", "path": "/etc/pki/tls/private/haproxy/*", "perm": "0600"}]}}, "md5sum": "a2742f7abd50bb0af0a4ba55b2f1f4ff", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 648, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532440.09-171438503430634/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/nova_api_cron.json'}) => {"changed": true, "checksum": "c1a1552a71f4daefebff5234f9d8ba71f4c64d76", "dest": "/var/lib/kolla/config_files/nova_api_cron.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_api_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "md5sum": "6b8ef057a2e5539eacd9f29fc4b94036", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 240, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532440.6-34603896706504/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/bootstrap_host_exec gnocchi_api /usr/bin/gnocchi-upgrade --sacks-number=128', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': 
u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/gnocchi_db_sync.json'}) => {"changed": true, "checksum": "a6d2eb62af2f11437c704d13adf72d498324ce2a", "dest": "/var/lib/kolla/config_files/gnocchi_db_sync.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/gnocchi_db_sync.json", "value": {"command": "/usr/bin/bootstrap_host_exec gnocchi_api /usr/bin/gnocchi-upgrade --sacks-number=128", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "md5sum": "d586f0c2ff043bece10efff986d635a3", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 531, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532441.09-187482990649080/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-reaper /etc/swift/account-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_account_reaper.json'}) => {"changed": true, "checksum": "b061cf7478060add5d079aafaeae81b445251a8f", "dest": "/var/lib/kolla/config_files/swift_account_reaper.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_account_reaper.json", "value": {"command": "/usr/bin/swift-account-reaper /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "0f3bbe74ca95c8cca321ee32e2aff7d1", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 199, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532441.59-45745028811052/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/sahara-engine --config-file /etc/sahara/sahara.conf', 'permissions': [{'owner': u'sahara:sahara', 'path': u'/var/lib/sahara', 'recurse': True}, {'owner': u'sahara:sahara', 'path': u'/var/log/sahara', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/sahara-engine.json'}) => {"changed": true, "checksum": "b7397fff831b47db0b6111663d816a64a389cb25", "dest": "/var/lib/kolla/config_files/sahara-engine.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/sahara-engine.json", "value": {"command": "/usr/bin/sahara-engine --config-file /etc/sahara/sahara.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "sahara:sahara", "path": "/var/lib/sahara", "recurse": true}, {"owner": "sahara:sahara", "path": "/var/log/sahara", "recurse": true}]}}, "md5sum": "ac2c7a84fc46a1f1d128201ce5b67c2d", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 360, "src": 
"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532442.07-44526562744065/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/libqb/force-filesystem-sockets', 'owner': u'root', 'perm': u'0644', 'source': u'/dev/null'}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/pacemaker_remoted', 'permissions': [{'owner': u'redis:redis', 'path': u'/var/run/redis', 'recurse': True}, {'owner': u'redis:redis', 'path': u'/var/lib/redis', 'recurse': True}, {'owner': u'redis:redis', 'path': u'/var/log/redis', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/redis.json'}) => {"changed": true, "checksum": "f55df7b69a0f931e05451a529786b5e88a601055", "dest": "/var/lib/kolla/config_files/redis.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/redis.json", "value": {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "redis:redis", "path": "/var/run/redis", "recurse": true}, {"owner": "redis:redis", "path": "/var/lib/redis", "recurse": true}, {"owner": "redis:redis", "path": "/var/log/redis", "recurse": true}]}}, "md5sum": "492ea3593b2e7be5feb0367a9e76513d", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 513, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532442.56-265069263882649/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-novncproxy --web /usr/share/novnc/ ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}, {'owner': u'root:nova', 'path': u'/etc/pki/tls/private/novnc_proxy.key'}]}, 'key': u'/var/lib/kolla/config_files/nova_vnc_proxy.json'}) => {"changed": true, "checksum": "1f78aded214a64080a8a0f8cd7c9467cbae8b727", "dest": "/var/lib/kolla/config_files/nova_vnc_proxy.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_vnc_proxy.json", "value": {"command": "/usr/bin/nova-novncproxy --web /usr/share/novnc/ ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}, {"owner": "root:nova", "path": "/etc/pki/tls/private/novnc_proxy.key"}]}}, "md5sum": "86658122739f46ddeed8b58f4f89a67d", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 471, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532443.04-270880927152799/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 
'preserve_properties': True}], 'command': u'/usr/bin/glance-api --config-file /usr/share/glance/glance-api-dist.conf --config-file /etc/glance/glance-api.conf', 'permissions': [{'owner': u'glance:glance', 'path': u'/var/lib/glance', 'recurse': True}, {'owner': u'glance:glance', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/glance_api.json'}) => {"changed": true, "checksum": "2a93405ac579e31c6e5732983f3d7dd8bed55b33", "dest": "/var/lib/kolla/config_files/glance_api.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/glance_api.json", "value": {"command": "/usr/bin/glance-api --config-file /usr/share/glance/glance-api-dist.conf --config-file /etc/glance/glance-api.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "glance:glance", "path": "/var/lib/glance", "recurse": true}, {"owner": "glance:glance", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "md5sum": "30c5fe40dffc304e7edeab4019e96e92", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 556, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532443.53-39032587772659/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-auditor /etc/swift/container-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_container_auditor.json'}) => {"changed": true, "checksum": "739f6562d3ea24561c6d8bcf37041a9eac928257", "dest": "/var/lib/kolla/config_files/swift_container_auditor.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_container_auditor.json", "value": {"command": "/usr/bin/swift-container-auditor /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "b63816c7c08aef58249d13b65b387da6", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 204, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532444.02-168053767678778/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-panko/*', 'preserve_properties': True}], 'command': u'/usr/bin/ceilometer-agent-notification --logfile /var/log/ceilometer/agent-notification.log', 'permissions': [{'owner': u'root:ceilometer', 'path': u'/etc/panko', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/ceilometer_agent_notification.json'}) => {"changed": true, "checksum": "98adef088b2ae2648ac88b812890957ec54eff13", "dest": "/var/lib/kolla/config_files/ceilometer_agent_notification.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/ceilometer_agent_notification.json", "value": {"command": "/usr/bin/ceilometer-agent-notification --logfile /var/log/ceilometer/agent-notification.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, 
{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-panko/*"}], "permissions": [{"owner": "root:ceilometer", "path": "/etc/panko", "recurse": true}]}}, "md5sum": "4a38c9578181c292891f5f7bdb9f791b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 428, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532444.51-39729297658772/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-expirer /etc/swift/object-expirer.conf'}, 'key': u'/var/lib/kolla/config_files/swift_object_expirer.json'}) => {"changed": true, "checksum": "ebbb7ee6895cea2b9278f33e888881d3d3f1a68a", "dest": "/var/lib/kolla/config_files/swift_object_expirer.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_object_expirer.json", "value": {"command": "/usr/bin/swift-object-expirer /etc/swift/object-expirer.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "e4bf891d8ffc9a015be201a6ef0d5abc", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 199, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532445.01-17100899307223/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/ceilometer-polling --polling-namespaces central --logfile /var/log/ceilometer/central.log'}, 'key': u'/var/lib/kolla/config_files/ceilometer_agent_central.json'}) => {"changed": true, "checksum": "53d52f7d52f0fb3da33de2c20414eb3248593fdd", "dest": "/var/lib/kolla/config_files/ceilometer_agent_central.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/ceilometer_agent_central.json", "value": {"command": "/usr/bin/ceilometer-polling --polling-namespaces central --logfile /var/log/ceilometer/central.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "2863f917d7ada51e9570fb53bb363eed", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 237, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532445.49-7739619850159/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/heat_api.json'}) => {"changed": true, "checksum": "454582321236a137f78205f328bae190c02f06b0", "dest": "/var/lib/kolla/config_files/heat_api.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/heat_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "md5sum": "c04ac0476ee6639fadf252b0e9d9649b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", 
"size": 250, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532445.99-80062970752669/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/rsync --daemon --no-detach --config=/etc/rsyncd.conf'}, 'key': u'/var/lib/kolla/config_files/swift_rsync.json'}) => {"changed": true, "checksum": "44a8f1a58092190d553d3f589cab9ae566f8dc81", "dest": "/var/lib/kolla/config_files/swift_rsync.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_rsync.json", "value": {"command": "/usr/bin/rsync --daemon --no-detach --config=/etc/rsyncd.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "886febadf691905adf0c129f3aa0197a", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 200, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532446.48-33764591666153/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-server /etc/swift/account-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_account_server.json'}) => {"changed": true, "checksum": "279b64a7d6914d2a03c86c703f53e3d71b1daef1", "dest": "/var/lib/kolla/config_files/swift_account_server.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_account_server.json", "value": {"command": "/usr/bin/swift-account-server /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "b41d67c146c800142c5405fe5a0b332e", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 199, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532446.97-43720159379955/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/cinder_api_cron.json'}) => {"changed": true, "checksum": "06055a69fec2bc513b4c86ceb654a5fc29bd0866", "dest": "/var/lib/kolla/config_files/cinder_api_cron.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/cinder_api_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "md5sum": "801aba1299d99bfd7e63f66ca7a4ba40", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 246, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532447.45-263353867714703/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-proxy-server /etc/swift/proxy-server.conf'}, 'key': 
u'/var/lib/kolla/config_files/swift_proxy.json'}) => {"changed": true, "checksum": "a0874b803c5238a4eeb12b1265d5d1db93c0d3d4", "dest": "/var/lib/kolla/config_files/swift_proxy.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_proxy.json", "value": {"command": "/usr/bin/swift-proxy-server /etc/swift/proxy-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "a38e4e3ae519b3b0824e19184e521b36", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 195, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532447.93-113280398830448/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-updater /etc/swift/container-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_container_updater.json'}) => {"changed": true, "checksum": "8dbfc3669a6d79fb30702be502ced7501500480a", "dest": "/var/lib/kolla/config_files/swift_container_updater.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_container_updater.json", "value": {"command": "/usr/bin/swift-container-updater /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "a697319d04392dc572dff6236144571f", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 204, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532448.43-220524987509043/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/xinetd -dontfork'}, 'key': u'/var/lib/kolla/config_files/clustercheck.json'}) => {"changed": true, "checksum": "3c87335a28b992f90769aea9ea62fb610f8236f1", "dest": "/var/lib/kolla/config_files/clustercheck.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/clustercheck.json", "value": {"command": "/usr/sbin/xinetd -dontfork", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "d74434e7b8bcaca0b227152346c13db8", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 165, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532448.93-278222431296504/source", "state": "file", "uid": 0} > >changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/libqb/force-filesystem-sockets', 'owner': u'root', 'perm': u'0644', 'source': u'/dev/null'}, {'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/sbin/pacemaker_remoted', 'permissions': [{'owner': u'mysql:mysql', 'path': u'/var/log/mysql', 'recurse': True}, {'owner': u'mysql:mysql', 'path': u'/etc/pki/tls/certs/mysql.crt', 'optional': True, 'perm': u'0600'}, {'owner': u'mysql:mysql', 'path': u'/etc/pki/tls/private/mysql.key', 'optional': True, 'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/mysql.json'}) => {"changed": true, "checksum": 
"b52f0d28ed1ac134c64994c08b3f2378e8dff494", "dest": "/var/lib/kolla/config_files/mysql.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/mysql.json", "value": {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "mysql:mysql", "path": "/var/log/mysql", "recurse": true}, {"optional": true, "owner": "mysql:mysql", "path": "/etc/pki/tls/certs/mysql.crt", "perm": "0600"}, {"optional": true, "owner": "mysql:mysql", "path": "/etc/pki/tls/private/mysql.key", "perm": "0600"}]}}, "md5sum": "4d15ed291dbe96e88b9a128b0e5c99e9", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 687, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532449.46-185647688078682/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/nova_placement.json'}) => {"changed": true, "checksum": "d061b71e9106733354c297cbb7b327a22e476de5", "dest": "/var/lib/kolla/config_files/nova_placement.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_placement.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "md5sum": "941db485b7079f2f0e008e1bdff8e45f", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 250, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532449.95-146458558580271/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/sahara-api --config-file /etc/sahara/sahara.conf', 'permissions': [{'owner': u'sahara:sahara', 'path': u'/var/lib/sahara', 'recurse': True}, {'owner': u'sahara:sahara', 'path': u'/var/log/sahara', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/sahara-api.json'}) => {"changed": true, "checksum": "fd070eb1bdc97442fddc24f503fe5e3251b89e28", "dest": "/var/lib/kolla/config_files/sahara-api.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/sahara-api.json", "value": {"command": "/usr/bin/sahara-api --config-file /etc/sahara/sahara.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "sahara:sahara", "path": "/var/lib/sahara", "recurse": true}, {"owner": "sahara:sahara", "path": "/var/log/sahara", "recurse": true}]}}, "md5sum": "bd52668d37c227cc00c418bbe889ab90", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 357, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532450.44-234648132140043/source", "state": "file", 
"uid": 0} >changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/aodh_api.json'}) => {"changed": true, "checksum": "f4177197cb07127689ae10a60020efa3a5e0d457", "dest": "/var/lib/kolla/config_files/aodh_api.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/aodh_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "md5sum": "582326e52a94260e71a4a19dc4d75191", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 250, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532450.92-161337242547406/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'keystone:keystone', 'path': u'/var/log/keystone', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/keystone_cron.json'}) => {"changed": true, "checksum": "815ba71e0584cb12e7d40f794603c6bfb1800626", "dest": "/var/lib/kolla/config_files/keystone_cron.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/keystone_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "keystone:keystone", "path": "/var/log/keystone", "recurse": true}]}}, "md5sum": "b3b3bbd6499e09c424665311a5e66136", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 252, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532451.41-57303515367986/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': u'/var/lib/kolla/config_files/neutron_server_tls_proxy.json'}) => {"changed": true, "checksum": "8dec7e00a25c01fc0483b06f5e3d31c64b93ec3e", "dest": "/var/lib/kolla/config_files/neutron_server_tls_proxy.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/neutron_server_tls_proxy.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "1af9170c02e7b1819b37b8d71e67dff0", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 167, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532451.88-222518124299550/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-replicator /etc/swift/object-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_object_replicator.json'}) => {"changed": true, "checksum": 
"659d25615392d81b2f6bc001067232495de4d6ac", "dest": "/var/lib/kolla/config_files/swift_object_replicator.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_object_replicator.json", "value": {"command": "/usr/bin/swift-object-replicator /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "cdea8a372a87263d5fc44b482867a705", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 201, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532452.37-40047019496213/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-conductor ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/nova_conductor.json'}) => {"changed": true, "checksum": "01a54792c74d0ebd057e8d0f44e6e8e619283e62", "dest": "/var/lib/kolla/config_files/nova_conductor.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_conductor.json", "value": {"command": "/usr/bin/nova-conductor ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "md5sum": "ccbba0ad7a926ceca2bf858b8a9cc376", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 246, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532452.86-264381885578245/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/heat_api_cfn.json'}) => {"changed": true, "checksum": "454582321236a137f78205f328bae190c02f06b0", "dest": "/var/lib/kolla/config_files/heat_api_cfn.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/heat_api_cfn.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "md5sum": "c04ac0476ee6639fadf252b0e9d9649b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 250, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532453.36-37388929886670/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-api-metadata ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/nova_metadata.json'}) => {"changed": true, "checksum": "edb529183cc509ea82818edf4d88e3650b5ffc57", "dest": "/var/lib/kolla/config_files/nova_metadata.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_metadata.json", "value": {"command": "/usr/bin/nova-api-metadata ", "config_files": 
[{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "md5sum": "45129bd8b5b9aef067edb558a9fb2c68", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 249, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532453.85-213545144632144/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/neutron_ovs_agent_launcher.sh', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/neutron_ovs_agent.json'}) => {"changed": true, "checksum": "bd1c4f0459f65e7f67a969a89c74a8b8cdcfd9f8", "dest": "/var/lib/kolla/config_files/neutron_ovs_agent.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/neutron_ovs_agent.json", "value": {"command": "/neutron_ovs_agent_launcher.sh", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}}, "md5sum": "3599cf6b814b7c628c2887996ca46138", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 261, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532454.32-174206185554490/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/libqb/force-filesystem-sockets', 'owner': u'root', 'perm': u'0644', 'source': u'/dev/null'}, {'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/sbin/pacemaker_remoted', 'permissions': [{'owner': u'rabbitmq:rabbitmq', 'path': u'/var/lib/rabbitmq', 'recurse': True}, {'owner': u'rabbitmq:rabbitmq', 'path': u'/var/log/rabbitmq', 'recurse': True}, {'owner': u'rabbitmq:rabbitmq', 'path': u'/etc/pki/tls/certs/rabbitmq.crt', 'optional': True, 'perm': u'0600'}, {'owner': u'rabbitmq:rabbitmq', 'path': u'/etc/pki/tls/private/rabbitmq.key', 'optional': True, 'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/rabbitmq.json'}) => {"changed": true, "checksum": "205ddacf194881a04c54779e3049b3c59ef6c4af", "dest": "/var/lib/kolla/config_files/rabbitmq.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/rabbitmq.json", "value": {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "rabbitmq:rabbitmq", "path": "/var/lib/rabbitmq", "recurse": true}, {"owner": "rabbitmq:rabbitmq", "path": "/var/log/rabbitmq", "recurse": true}, {"optional": true, "owner": "rabbitmq:rabbitmq", "path": "/etc/pki/tls/certs/rabbitmq.crt", "perm": "0600"}, {"optional": true, "owner": "rabbitmq:rabbitmq", "path": "/etc/pki/tls/private/rabbitmq.key", "perm": "0600"}]}}, "md5sum": 
"1097dade2a2355fd51207668004d093d", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 792, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532454.86-55275296373073/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-consoleauth ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/nova_consoleauth.json'}) => {"changed": true, "checksum": "a960878859377dfae6334d9b7eaa9f554ab31798", "dest": "/var/lib/kolla/config_files/nova_consoleauth.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_consoleauth.json", "value": {"command": "/usr/bin/nova-consoleauth ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "md5sum": "2a66fc646aae3e5913e0598ccef3881f", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 248, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532455.33-14986333176035/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-updater /etc/swift/object-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_object_updater.json'}) => {"changed": true, "checksum": "4f7a34f38afe301f885e25eb10225c461ab1d0b1", "dest": "/var/lib/kolla/config_files/swift_object_updater.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_object_updater.json", "value": {"command": "/usr/bin/swift-object-updater /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "71a7e788486d505cfec645da0ac337cd", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 198, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532455.83-11670353982825/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-server --log-file=/var/log/neutron/server.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/neutron_api.json'}) => {"changed": true, "checksum": "5a73d3b7ef652341120c9298683d3a26f3fb668b", "dest": "/var/lib/kolla/config_files/neutron_api.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/neutron_api.json", "value": {"command": "/usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini --config-dir /etc/neutron/conf.d/common --config-dir 
/etc/neutron/conf.d/neutron-server --log-file=/var/log/neutron/server.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}}, "md5sum": "c48346aa3f8c096826ebab378db9dfb9", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 549, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532456.29-134517380636126/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/cinder-scheduler --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/cinder_scheduler.json'}) => {"changed": true, "checksum": "9ec49193a63036ecf32a1479eabdac05dcab06e0", "dest": "/var/lib/kolla/config_files/cinder_scheduler.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/cinder_scheduler.json", "value": {"command": "/usr/bin/cinder-scheduler --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "md5sum": "93e9da0d08550be0ed30576cefdfbfbb", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 340, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532456.75-109108542169566/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/gnocchi-metricd', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/gnocchi_metricd.json'}) => {"changed": true, "checksum": "c8763a8c16702042afe553b54212340d800e1509", "dest": "/var/lib/kolla/config_files/gnocchi_metricd.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/gnocchi_metricd.json", "value": {"command": "/usr/bin/gnocchi-metricd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "md5sum": "db9bd25aa2fcd2845d442869e986e7d8", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 471, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532457.2-193489475213268/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 
'preserve_properties': True}], 'command': u'/usr/bin/neutron-metadata-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/metadata_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-metadata-agent --log-file=/var/log/neutron/metadata-agent.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}, {'owner': u'neutron:neutron', 'path': u'/var/lib/neutron', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/neutron_metadata_agent.json'}) => {"changed": true, "checksum": "fe01b9d48d08f239bbf9acf7e2a1492397180c8e", "dest": "/var/lib/kolla/config_files/neutron_metadata_agent.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/neutron_metadata_agent.json", "value": {"command": "/usr/bin/neutron-metadata-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/metadata_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-metadata-agent --log-file=/var/log/neutron/metadata-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}]}}, "md5sum": "a26f6acfc823d6e2e5b34367b859c8fa", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 617, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532457.67-12688562000948/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-replicator /etc/swift/container-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_container_replicator.json'}) => {"changed": true, "checksum": "a418eddca731078cfd8fe2fda7ee64d9ffaf7dda", "dest": "/var/lib/kolla/config_files/swift_container_replicator.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_container_replicator.json", "value": {"command": "/usr/bin/swift-container-replicator /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "930bbe0f8c13b55f664fb3a89dfa1613", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 207, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532458.12-76321055448516/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/heat-engine --config-file /usr/share/heat/heat-dist.conf --config-file /etc/heat/heat.conf ', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/heat_engine.json'}) => {"changed": true, "checksum": "fe3989178a2ea434bae6dfd64b04423e3ea005bc", "dest": "/var/lib/kolla/config_files/heat_engine.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/heat_engine.json", "value": {"command": "/usr/bin/heat-engine --config-file /usr/share/heat/heat-dist.conf --config-file 
/etc/heat/heat.conf ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "md5sum": "aee05ebc54399dde3dfc3577c3431a92", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 322, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532458.58-100672316172527/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/nova_api.json'}) => {"changed": true, "checksum": "d061b71e9106733354c297cbb7b327a22e476de5", "dest": "/var/lib/kolla/config_files/nova_api.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "md5sum": "941db485b7079f2f0e008e1bdff8e45f", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 250, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532459.04-110924715235960/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-server /etc/swift/object-server.conf', 'permissions': [{'owner': u'swift:swift', 'path': u'/var/cache/swift', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/swift_object_server.json'}) => {"changed": true, "checksum": "460cdcfbcfac45a30b03df89ac84d2f34db64d72", "dest": "/var/lib/kolla/config_files/swift_object_server.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_object_server.json", "value": {"command": "/usr/bin/swift-object-server /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "swift:swift", "path": "/var/cache/swift", "recurse": true}]}}, "md5sum": "b00c233fd2cd32c68e429e42918b8245", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 285, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532459.51-192566930687404/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'stunnel /etc/stunnel/stunnel.conf', 'permissions': [{'owner': u'root:root', 'path': u'/etc/pki/tls/certs/redis.crt', 'optional': True, 'perm': u'0600'}, {'owner': u'root:root', 'path': u'/etc/pki/tls/private/redis.key', 'optional': True, 'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/redis_tls_proxy.json'}) => {"changed": true, "checksum": "18f1fbea70cdce00454c810ec3ea8fd4b3a0067f", "dest": "/var/lib/kolla/config_files/redis_tls_proxy.json", 
"gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/redis_tls_proxy.json", "value": {"command": "stunnel /etc/stunnel/stunnel.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"optional": true, "owner": "root:root", "path": "/etc/pki/tls/certs/redis.crt", "perm": "0600"}, {"optional": true, "owner": "root:root", "path": "/etc/pki/tls/private/redis.key", "perm": "0600"}]}}, "md5sum": "32dd4fdbe1a9dd2debf43f33dfc36d08", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 515, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532459.99-114334448514411/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/gnocchi_api.json'}) => {"changed": true, "checksum": "39f33531116fbcba7a5d9c1cbbc32f4af5e6b981", "dest": "/var/lib/kolla/config_files/gnocchi_api.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/gnocchi_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "md5sum": "5e924ffe736d942bf904a791bf5b5af2", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 475, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532460.49-223654457161631/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/cinder_api.json'}) => {"changed": true, "checksum": "7f36445e4c6eb403ce919ca3adee771d4cb3bcce", "dest": "/var/lib/kolla/config_files/cinder_api.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/cinder_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "md5sum": "bb3e2e5741eb3e5b6c53da835e66d00d", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 256, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532460.95-9850093031402/source", "state": "file", "uid": 0} 
>changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}, {'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/cinder_volume.json'}) => {"changed": true, "checksum": "e800a0e1c86f8fa7a41efbf24ce38f48a458ba51", "dest": "/var/lib/kolla/config_files/cinder_volume.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/cinder_volume.json", "value": {"command": "/usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "md5sum": "a85ec43ba623807ac022c04663fa68f5", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 579, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532461.42-111061316753545/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'panko:panko', 'path': u'/var/log/panko', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/panko_api.json'}) => {"changed": true, "checksum": "2db8f01174b9c2aa3a180add472b54891aed5cd6", "dest": "/var/lib/kolla/config_files/panko_api.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/panko_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "panko:panko", "path": "/var/log/panko", "recurse": true}]}}, "md5sum": "7d9530934c938a4c96f71797957f7ca8", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 253, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532461.88-171957381841829/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-auditor /etc/swift/object-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_object_auditor.json'}) => {"changed": true, "checksum": "fbcdad9219733b81ad969426553906c1a8648897", "dest": "/var/lib/kolla/config_files/swift_object_auditor.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_object_auditor.json", "value": {"command": "/usr/bin/swift-object-auditor /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": 
true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "45f7348541b64a76aec07477ea1d7358", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 198, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532462.32-185161786287394/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-l3-agent --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/l3_agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/l3_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-l3-agent --log-file=/var/log/neutron/l3-agent.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}, {'owner': u'neutron:neutron', 'path': u'/var/lib/neutron', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/neutron_l3_agent.json'}) => {"changed": true, "checksum": "cd233477dc9defd8028ac1a8fe736b8c9fcde9f8", "dest": "/var/lib/kolla/config_files/neutron_l3_agent.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/neutron_l3_agent.json", "value": {"command": "/usr/bin/neutron-l3-agent --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/l3_agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/l3_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-l3-agent --log-file=/var/log/neutron/l3-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}]}}, "md5sum": "b47a8dc2601f0e1c404b9009d1c99c32", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 634, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532462.78-203153842942786/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/aodh-listener', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/aodh_listener.json'}) => {"changed": true, "checksum": "a7135286aba5eb111dc77c913fc1f7dc0977e783", "dest": "/var/lib/kolla/config_files/aodh_listener.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/aodh_listener.json", "value": {"command": "/usr/bin/aodh-listener", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "md5sum": "ff2b7ae2bb8061a36a8223f5c34a970b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 244, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532463.3-53404328909877/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-server 
/etc/swift/container-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_container_server.json'}) => {"changed": true, "checksum": "1f5cc060becbca7be3515f39537993b91e109a6d", "dest": "/var/lib/kolla/config_files/swift_container_server.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_container_server.json", "value": {"command": "/usr/bin/swift-container-server /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "59a9944c2c3c07fec0293d2efd7d8082", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 203, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532463.79-138613907911164/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/aodh-evaluator', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/aodh_evaluator.json'}) => {"changed": true, "checksum": "596ee1b7f45471d04a0bc3d985f82ad722631b98", "dest": "/var/lib/kolla/config_files/aodh_evaluator.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/aodh_evaluator.json", "value": {"command": "/usr/bin/aodh-evaluator", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "md5sum": "94c5432632bf2acca69de0063414183b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 245, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532464.27-99610599381868/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': u'/var/lib/kolla/config_files/glance_api_tls_proxy.json'}) => {"changed": true, "checksum": "8dec7e00a25c01fc0483b06f5e3d31c64b93ec3e", "dest": "/var/lib/kolla/config_files/glance_api_tls_proxy.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/glance_api_tls_proxy.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "1af9170c02e7b1819b37b8d71e67dff0", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 167, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532464.76-162635837351387/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/sbin/iscsid -f'}, 'key': u'/var/lib/kolla/config_files/iscsid.json'}) => {"changed": true, "checksum": "40f9ceb4dd2fc8e9c51bf5152a0fa8e1d16d9137", "dest": "/var/lib/kolla/config_files/iscsid.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/iscsid.json", "value": {"command": "/usr/sbin/iscsid -f", "config_files": [{"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": 
"/var/lib/kolla/config_files/src-iscsid/*"}]}}, "md5sum": "9cd3c2dc0153b127d70141dadfabd12c", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 175, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532465.24-167229308382403/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/gnocchi-statsd', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/gnocchi_statsd.json'}) => {"changed": true, "checksum": "1a38774f0fed561a8f1ad8c7f0a976a71a7f7008", "dest": "/var/lib/kolla/config_files/gnocchi_statsd.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/gnocchi_statsd.json", "value": {"command": "/usr/bin/gnocchi-statsd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "md5sum": "b98425b2f26d4e30448a72685b1f89ad", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 470, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532465.73-221436527966377/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'apache:apache', 'path': u'/var/log/horizon/', 'recurse': True}, {'owner': u'apache:apache', 'path': u'/etc/openstack-dashboard/', 'recurse': True}, {'owner': u'apache:apache', 'path': u'/usr/share/openstack-dashboard/openstack_dashboard/local/', 'recurse': False}, {'owner': u'apache:apache', 'path': u'/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.d/', 'recurse': False}]}, 'key': u'/var/lib/kolla/config_files/horizon.json'}) => {"changed": true, "checksum": "fc55910103403d0bb92e62e940dbd536aff43f84", "dest": "/var/lib/kolla/config_files/horizon.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/horizon.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "apache:apache", "path": "/var/log/horizon/", "recurse": true}, {"owner": "apache:apache", "path": "/etc/openstack-dashboard/", "recurse": true}, {"owner": "apache:apache", "path": "/usr/share/openstack-dashboard/openstack_dashboard/local/", "recurse": false}, {"owner": "apache:apache", "path": "/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.d/", "recurse": false}]}}, "md5sum": "77504b6ea1f544f3c70dbc4115bfc354", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 587, "src": 
"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532466.23-140924817204702/source", "state": "file", "uid": 0} > >TASK [Clean /var/lib/docker-puppet/docker-puppet-tasks*.json files] ************ >Friday 21 September 2018 08:21:06 -0400 (0:00:32.113) 0:04:29.311 ****** > [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > >TASK [Write docker-puppet-tasks json files] ************************************ >Friday 21 September 2018 08:21:06 -0400 (0:00:00.102) 0:04:29.414 ****** >changed: [controller-0] => (item={'value': [{'puppet_tags': u'keystone_config,keystone_domain_config,keystone_endpoint,keystone_identity_provider,keystone_paste_ini,keystone_role,keystone_service,keystone_tenant,keystone_user,keystone_user_role,keystone_domain', 'config_volume': u'keystone_init_tasks', 'step_config': u'include ::tripleo::profile::base::keystone', 'config_image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-20.1'}], 'key': u'step_3'}) => {"changed": true, "checksum": "7f972a464871ecaf99a8c646963e44a31a095a8a", "dest": "/var/lib/docker-puppet/docker-puppet-tasks3.json", "gid": 0, "group": "root", "item": {"key": "step_3", "value": [{"config_image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-20.1", "config_volume": "keystone_init_tasks", "puppet_tags": "keystone_config,keystone_domain_config,keystone_endpoint,keystone_identity_provider,keystone_paste_ini,keystone_role,keystone_service,keystone_tenant,keystone_user,keystone_user_role,keystone_domain", "step_config": "include ::tripleo::profile::base::keystone"}]}, "md5sum": "338f52192c3011cb4c8808f8ac45517d", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 397, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532466.99-211645456830457/source", "state": "file", "uid": 0} >changed: [controller-0] => (item={'value': [{'puppet_tags': u'cinder_config,cinder_type,file,concat,file_line', 'config_volume': u'cinder_init_tasks', 'step_config': u'include ::tripleo::profile::base::cinder::api', 'config_image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-20.1', 'volumes': [u'/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro']}], 'key': u'step_4'}) => {"changed": true, "checksum": "fc092b06b8d6fdc6d18320ab604ab9a6ebc1e1ae", "dest": "/var/lib/docker-puppet/docker-puppet-tasks4.json", "gid": 0, "group": "root", "item": {"key": "step_4", "value": [{"config_image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-20.1", "config_volume": "cinder_init_tasks", "puppet_tags": "cinder_config,cinder_type,file,concat,file_line", "step_config": "include ::tripleo::profile::base::cinder::api", "volumes": ["/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro"]}]}, "md5sum": "3040fd99d20b3b080800665727028f0c", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 321, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532467.49-169619612963171/source", "state": "file", "uid": 0} > >TASK [Set host puppet debugging fact string] *********************************** >Friday 21 September 2018 08:21:07 -0400 (0:00:01.061) 0:04:30.476 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was 
False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Write the config_step hieradata] ***************************************** >Friday 21 September 2018 08:21:08 -0400 (0:00:00.096) 0:04:30.573 ****** >changed: [controller-0] => {"changed": true, "checksum": "dfdcc7695edd230e7a2c06fc7b739bfa56506d8f", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "f0ef53dcc6eb8440334b1ebaa90bfd63", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532468.2-187732312450159/source", "state": "file", "uid": 0} >changed: [compute-0] => {"changed": true, "checksum": "dfdcc7695edd230e7a2c06fc7b739bfa56506d8f", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "f0ef53dcc6eb8440334b1ebaa90bfd63", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532468.24-13375660670658/source", "state": "file", "uid": 0} >changed: [ceph-0] => {"changed": true, "checksum": "dfdcc7695edd230e7a2c06fc7b739bfa56506d8f", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "f0ef53dcc6eb8440334b1ebaa90bfd63", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537532468.26-113593972149994/source", "state": "file", "uid": 0} > >TASK [Run puppet host configuration for step 1] ******************************** >Friday 21 September 2018 08:21:08 -0400 (0:00:00.685) 0:04:31.258 ****** >changed: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} > >changed: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} >changed: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} > > >TASK [Debug output for task which failed: Run puppet host configuration for step 1] *** >Friday 21 September 2018 08:22:37 -0400 (0:01:28.271) 0:05:59.530 ****** >ok: [controller-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 3.29 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_Controller1]/ensure: created", > "Notice: /Stage[main]/Certmonger/Service[certmonger]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Tripleo::Certmonger::Ca::Local/Exec[extract-and-trust-ca]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Certmonger::Ca::Local/Exec[extract-and-trust-ca]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Time::Ntp/Service[chronyd]/ensure: ensure changed 'running' to 'stopped'", > "Notice: 
/Stage[main]/Ntp::Config/File[/etc/ntp.conf]/content: content changed '{md5}913c85f0fde85f83c2d6c030ecf259e9' to '{md5}c1d92fa159fef3afd721be5f86af886d'", > "Notice: /Stage[main]/Ntp::Service/Service[ntp]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Pacemaker/File[/etc/systemd/system/resource-agents-deps.target.wants]/ensure: created", > "Notice: /Stage[main]/Timezone/File[/etc/localtime]/content: content changed '{md5}e4ca381035a34b7a852184cc0dd89baa' to '{md5}c79354b8dbee09e62bbc3fb544853283'", > "Notice: /Stage[main]/Firewall::Linux::Redhat/Service[iptables]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Firewall::Linux::Redhat/Service[ip6tables]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Tripleo::Trusted_cas/Tripleo::Trusted_ca[undercloud-ca]/File[/etc/pki/ca-trust/source/anchors/undercloud-ca.pem]/ensure: defined content as '{md5}57d1d42823e0349b20bf232bc8b28408'", > "Notice: /Stage[main]/Tripleo::Trusted_cas/Tripleo::Trusted_ca[undercloud-ca]/Exec[trust-ca-undercloud-ca]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/File[/etc/sysconfig/modules/nf_conntrack.modules]/ensure: defined content as '{md5}69dc79067bb7ee8d7a8a12176ceddb02'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/File[/etc/sysconfig/modules/nf_conntrack_proto_sctp.modules]/ensure: defined content as '{md5}7dfc614157ed326e9943593a7aca37c9'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl[fs.inotify.max_user_instances]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl_runtime[fs.inotify.max_user_instances]/val: val changed '128' to '1024'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.suid_dumpable]/Sysctl[fs.suid_dumpable]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl[kernel.dmesg_restrict]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl_runtime[kernel.dmesg_restrict]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl[kernel.pid_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl_runtime[kernel.pid_max]/val: val changed '32768' to '1048576'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl[net.core.netdev_max_backlog]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl_runtime[net.core.netdev_max_backlog]/val: val changed '1000' to '10000'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl[net.ipv4.conf.all.arp_accept]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl_runtime[net.ipv4.conf.all.arp_accept]/val: val changed '0' to '1'", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl[net.ipv4.conf.all.log_martians]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl_runtime[net.ipv4.conf.all.log_martians]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl[net.ipv4.conf.all.secure_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl_runtime[net.ipv4.conf.all.secure_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl[net.ipv4.conf.all.send_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl_runtime[net.ipv4.conf.all.send_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl[net.ipv4.conf.default.accept_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl_runtime[net.ipv4.conf.default.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl[net.ipv4.conf.default.log_martians]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl_runtime[net.ipv4.conf.default.log_martians]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl[net.ipv4.conf.default.secure_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl_runtime[net.ipv4.conf.default.secure_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl[net.ipv4.conf.default.send_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl_runtime[net.ipv4.conf.default.send_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.ip_nonlocal_bind]/Sysctl[net.ipv4.ip_nonlocal_bind]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl[net.ipv4.neigh.default.gc_thresh1]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh1]/val: val changed '128' to '1024'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl[net.ipv4.neigh.default.gc_thresh2]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh2]/val: val changed '512' to '2048'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl[net.ipv4.neigh.default.gc_thresh3]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh3]/val: val changed '1024' to '4096'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl[net.ipv4.tcp_keepalive_intvl]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl_runtime[net.ipv4.tcp_keepalive_intvl]/val: val changed '75' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl[net.ipv4.tcp_keepalive_probes]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl_runtime[net.ipv4.tcp_keepalive_probes]/val: val changed '9' to '5'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl[net.ipv4.tcp_keepalive_time]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl_runtime[net.ipv4.tcp_keepalive_time]/val: val changed '7200' to '5'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl[net.ipv6.conf.all.accept_ra]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl_runtime[net.ipv6.conf.all.accept_ra]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl[net.ipv6.conf.all.accept_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl_runtime[net.ipv6.conf.all.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl[net.ipv6.conf.all.autoconf]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl_runtime[net.ipv6.conf.all.autoconf]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.disable_ipv6]/Sysctl[net.ipv6.conf.all.disable_ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl[net.ipv6.conf.default.accept_ra]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl_runtime[net.ipv6.conf.default.accept_ra]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl[net.ipv6.conf.default.accept_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl_runtime[net.ipv6.conf.default.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl[net.ipv6.conf.default.autoconf]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl_runtime[net.ipv6.conf.default.autoconf]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.disable_ipv6]/Sysctl[net.ipv6.conf.default.disable_ipv6]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.lo.disable_ipv6]/Sysctl[net.ipv6.conf.lo.disable_ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.ip_nonlocal_bind]/Sysctl[net.ipv6.ip_nonlocal_bind]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl[net.netfilter.nf_conntrack_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl_runtime[net.netfilter.nf_conntrack_max]/val: val changed '262144' to '500000'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl[net.nf_conntrack_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl_runtime[net.nf_conntrack_max]/val: val changed '262144' to '500000'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]/ensure: created", > "Notice: /Stage[main]/Pacemaker::Service/Service[pcsd]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Pacemaker::Corosync/User[hacluster]/password: changed password", > "Notice: /Stage[main]/Pacemaker::Corosync/User[hacluster]/groups: groups changed '' to ['haclient']", > "Notice: /Stage[main]/Pacemaker::Corosync/Exec[reauthenticate-across-all-nodes]: Triggered 'refresh' from 2 events", > "Notice: /Stage[main]/Pacemaker::Corosync/Exec[Create Cluster tripleo_cluster]/returns: executed successfully", > "Notice: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker]/owner: owner changed 'root' to 'hacluster'", > "Notice: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker]/group: group changed 'root' to 'haclient'", > "Notice: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker]/mode: mode changed '0755' to '0750'", > "Notice: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker-authkey]/content: content changed '{md5}3cf7e9433931ea22b46d65cde0922d16' to '{md5}c3147d7557e35ca11703708df4a6bdfa'", > "Notice: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker-authkey]/mode: mode changed '0400' to '0640'", > "Notice: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster tripleo_cluster]/returns: executed successfully", > "Notice: /Stage[main]/Pacemaker::Service/Service[corosync]/enable: enable changed 'false' to 'true'", > "Notice: /Stage[main]/Pacemaker::Service/Service[pacemaker]/enable: enable changed 'false' to 'true'", > "Notice: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/returns: executed successfully", > "Notice: /Stage[main]/Systemd::Systemctl::Daemon_reload/Exec[systemctl-daemon-reload]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Pacemaker::Stonith/Pacemaker::Property[Disable STONITH]/Pcmk_property[property--stonith-enabled]/ensure: created", > "Notice: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]/content: content changed '{md5}e9fa538db4f9b8222a5de59841d0dcf7' to '{md5}3534841fdb8db5b58d66600a60bf3759'", > "Notice: /Stage[main]/Ssh::Server::Service/Service[sshd]: Triggered 'refresh' from 2 events", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established 
rules]/Firewall[000 accept related established rules ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]/Firewall[004 accept ipv6 dhcpv6 ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv6]/ensure: created", > 
"Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[memcached]/Tripleo::Firewall::Rule[121 memcached]/Firewall[121 memcached ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output 
ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv6]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv4]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv6]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[oslo_messaging_rpc]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv4]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[oslo_messaging_rpc]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv6]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv4]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv6]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[sahara_api]/Tripleo::Firewall::Rule[132 sahara]/Firewall[132 sahara ipv4]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[sahara_api]/Tripleo::Firewall::Rule[132 sahara]/Firewall[132 sahara ipv6]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv4]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv6]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv4]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv6]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv4]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv6]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv4]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv6]/ensure: created",
> "Notice: /Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/iptables]/seluser: seluser changed 'unconfined_u' to 'system_u'",
> "Notice: /Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/ip6tables]/seluser: seluser changed 'unconfined_u' to 'system_u'",
> "Notice: Applied catalog in 76.61 seconds",
> "Changes:",
> " Total: 169",
> "Events:",
> " Success: 169",
> "Resources:",
> " Changed: 165",
> " Out of sync: 165",
> " Total: 215",
> " Restarted: 5",
> "Time:",
> " Concat file: 0.00",
> " Schedule: 0.00",
> " Cron: 0.00",
> " Anchor: 0.00",
> " File line: 0.00",
> " Package manifest: 0.00",
> " Augeas: 0.02",
> " User: 0.04",
> " Sysctl: 0.07",
> " Sysctl runtime: 0.21",
> " File: 0.22",
> " Package: 0.43",
> " Pcmk property: 1.12",
> " Firewall: 14.97",
> " Last run: 1537532556",
> " Service: 2.73",
> " Config retrieval: 3.90",
> " Exec: 53.70",
> " Filebucket: 0.00",
> " Total: 77.40",
> " Concat fragment: 0.00",
> "Version:",
> " Config: 1537532475",
> " Puppet: 4.8.2",
> "Warning: Undefined variable '::deploy_config_name'; ",
> " (file & line not available)",
> "Warning: Undefined variable 'deploy_config_name'; ",
> "Warning: This method is deprecated, please use the stdlib validate_legacy function,",
> " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]",
> " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')",
> " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]",
> " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]",
> " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]",
> " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]",
> " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]",
> "Warning: tag is a metaparam; this value will inherit to all contained resources in the tripleo::firewall::rule definition",
> " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README.
at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 148]:" > ] >} >ok: [compute-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for compute-0.localdomain in environment production in 1.99 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_Compute1]/ensure: created", > "Notice: /Stage[main]/Certmonger/Service[certmonger]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Tripleo::Certmonger::Ca::Local/Exec[extract-and-trust-ca]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Certmonger::Ca::Local/Exec[extract-and-trust-ca]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Time::Ntp/Service[chronyd]/ensure: ensure changed 'running' to 'stopped'", > "Notice: /Stage[main]/Ntp::Config/File[/etc/ntp.conf]/content: content changed '{md5}913c85f0fde85f83c2d6c030ecf259e9' to '{md5}c1d92fa159fef3afd721be5f86af886d'", > "Notice: /Stage[main]/Ntp::Service/Service[ntp]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Timezone/File[/etc/localtime]/content: content changed '{md5}e4ca381035a34b7a852184cc0dd89baa' to '{md5}c79354b8dbee09e62bbc3fb544853283'", > "Notice: /Stage[main]/Firewall::Linux::Redhat/Service[iptables]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Firewall::Linux::Redhat/Service[ip6tables]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Tuned/Exec[tuned-adm]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Trusted_cas/Tripleo::Trusted_ca[undercloud-ca]/File[/etc/pki/ca-trust/source/anchors/undercloud-ca.pem]/ensure: defined content as '{md5}57d1d42823e0349b20bf232bc8b28408'", > "Notice: /Stage[main]/Tripleo::Trusted_cas/Tripleo::Trusted_ca[undercloud-ca]/Exec[trust-ca-undercloud-ca]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/File[/etc/sysconfig/modules/nf_conntrack.modules]/ensure: defined content as '{md5}69dc79067bb7ee8d7a8a12176ceddb02'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/File[/etc/sysconfig/modules/nf_conntrack_proto_sctp.modules]/ensure: defined content as '{md5}7dfc614157ed326e9943593a7aca37c9'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl[fs.inotify.max_user_instances]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl_runtime[fs.inotify.max_user_instances]/val: val changed '128' to '1024'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.suid_dumpable]/Sysctl[fs.suid_dumpable]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl[kernel.dmesg_restrict]/ensure: created", > 
"Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl_runtime[kernel.dmesg_restrict]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl[kernel.pid_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl_runtime[kernel.pid_max]/val: val changed '32768' to '1048576'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl[net.core.netdev_max_backlog]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl_runtime[net.core.netdev_max_backlog]/val: val changed '1000' to '10000'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl[net.ipv4.conf.all.arp_accept]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl_runtime[net.ipv4.conf.all.arp_accept]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl[net.ipv4.conf.all.log_martians]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl_runtime[net.ipv4.conf.all.log_martians]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl[net.ipv4.conf.all.secure_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl_runtime[net.ipv4.conf.all.secure_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl[net.ipv4.conf.all.send_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl_runtime[net.ipv4.conf.all.send_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl[net.ipv4.conf.default.accept_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl_runtime[net.ipv4.conf.default.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl[net.ipv4.conf.default.log_martians]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl_runtime[net.ipv4.conf.default.log_martians]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl[net.ipv4.conf.default.secure_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl_runtime[net.ipv4.conf.default.secure_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl[net.ipv4.conf.default.send_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl_runtime[net.ipv4.conf.default.send_redirects]/val: 
val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.ip_nonlocal_bind]/Sysctl[net.ipv4.ip_nonlocal_bind]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl[net.ipv4.neigh.default.gc_thresh1]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh1]/val: val changed '128' to '1024'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl[net.ipv4.neigh.default.gc_thresh2]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh2]/val: val changed '512' to '2048'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl[net.ipv4.neigh.default.gc_thresh3]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh3]/val: val changed '1024' to '4096'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl[net.ipv4.tcp_keepalive_intvl]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl_runtime[net.ipv4.tcp_keepalive_intvl]/val: val changed '75' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl[net.ipv4.tcp_keepalive_probes]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl_runtime[net.ipv4.tcp_keepalive_probes]/val: val changed '9' to '5'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl[net.ipv4.tcp_keepalive_time]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl_runtime[net.ipv4.tcp_keepalive_time]/val: val changed '7200' to '5'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl[net.ipv6.conf.all.accept_ra]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl_runtime[net.ipv6.conf.all.accept_ra]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl[net.ipv6.conf.all.accept_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl_runtime[net.ipv6.conf.all.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl[net.ipv6.conf.all.autoconf]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl_runtime[net.ipv6.conf.all.autoconf]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.disable_ipv6]/Sysctl[net.ipv6.conf.all.disable_ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl[net.ipv6.conf.default.accept_ra]/ensure: created", > "Notice: 
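
Each Sysctl/Sysctl_runtime pair above is Puppet making the same tuning change twice: once persistently and once against the running kernel. A minimal sketch of the equivalent by hand, using net.ipv4.tcp_keepalive_time as the example key (the drop-in file name is an assumption; any /etc/sysctl.d/ file works):

#!/bin/bash
# Persist the setting across reboots (hypothetical drop-in name).
echo 'net.ipv4.tcp_keepalive_time = 5' > /etc/sysctl.d/99-tripleo-tuning.conf
# Apply it to the running kernel immediately, mirroring Sysctl_runtime.
sysctl -w net.ipv4.tcp_keepalive_time=5
# Verify the live value.
sysctl -n net.ipv4.tcp_keepalive_time
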
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl_runtime[net.ipv6.conf.default.accept_ra]/val: val changed '1' to '0'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl[net.ipv6.conf.default.accept_redirects]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl_runtime[net.ipv6.conf.default.accept_redirects]/val: val changed '1' to '0'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl[net.ipv6.conf.default.autoconf]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl_runtime[net.ipv6.conf.default.autoconf]/val: val changed '1' to '0'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.disable_ipv6]/Sysctl[net.ipv6.conf.default.disable_ipv6]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.lo.disable_ipv6]/Sysctl[net.ipv6.conf.lo.disable_ipv6]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.ip_nonlocal_bind]/Sysctl[net.ipv6.ip_nonlocal_bind]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl[net.netfilter.nf_conntrack_max]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl_runtime[net.netfilter.nf_conntrack_max]/val: val changed '262144' to '500000'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl[net.nf_conntrack_max]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl_runtime[net.nf_conntrack_max]/val: val changed '262144' to '500000'",
> "Notice: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]/content: content changed '{md5}e9fa538db4f9b8222a5de59841d0dcf7' to '{md5}3534841fdb8db5b58d66600a60bf3759'",
> "Notice: /Stage[main]/Ssh::Server::Service/Service[sshd]: Triggered 'refresh' from 2 events",
> "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv4]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv6]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv4]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv6]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv4]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv6]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv4]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv6]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]/Firewall[004 accept ipv6 dhcpv6 ipv6]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv4]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv6]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv4]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv6]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_libvirt]/Tripleo::Firewall::Rule[200 nova_libvirt]/Firewall[200 nova_libvirt ipv4]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_libvirt]/Tripleo::Firewall::Rule[200 nova_libvirt]/Firewall[200 nova_libvirt ipv6]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_migration_target]/Tripleo::Firewall::Rule[113 nova_migration_target]/Firewall[113 nova_migration_target ipv4]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_migration_target]/Tripleo::Firewall::Rule[113 nova_migration_target]/Firewall[113 nova_migration_target ipv6]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv4]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv6]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv4]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv6]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv4]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv6]/ensure: created",
> "Notice: /Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/iptables]/seluser: seluser changed 'unconfined_u' to 'system_u'",
> "Notice: /Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/ip6tables]/seluser: seluser changed 'unconfined_u' to 'system_u'",
> "Notice: Applied catalog in 7.86 seconds",
> "Changes:",
> "              Total: 99",
> "Events:",
> "            Success: 99",
> "Resources:",
> "              Total: 140",
> "          Restarted: 3",
> "        Out of sync: 99",
> "            Changed: 99",
> "Time:",
> "        Concat file: 0.00",
> "               Cron: 0.00",
> "           Schedule: 0.00",
> "             Anchor: 0.00",
> "   Package manifest: 0.00",
> "             Augeas: 0.02",
> "             Sysctl: 0.06",
> "               File: 0.13",
> "     Sysctl runtime: 0.18",
> "            Package: 0.22",
> "            Service: 1.24",
> "           Last run: 1537532485",
> "               Exec: 2.23",
> "   Config retrieval: 2.26",
> "           Firewall: 2.42",
> "         Filebucket: 0.00",
> "              Total: 8.77",
> "    Concat fragment: 0.00",
> "Version:",
> "             Config: 1537532475",
> "             Puppet: 4.8.2",
> "Warning: Undefined variable '::deploy_config_name'; ",
> "   (file & line not available)",
> "Warning: Undefined variable 'deploy_config_name'; ",
> "Warning: This method is deprecated, please use the stdlib validate_legacy function,",
> "                   with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]",
> "   (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')",
> "                   with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]",
> "                   with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]",
> "                   with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]",
> "                   with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]",
> "                   with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]",
> "Warning: tag is a metaparam; this value will inherit to all contained resources in the tripleo::firewall::rule definition",
> "                   with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 148]:"
> ]
>}
>ok: [ceph-0] => {
>    "failed_when_result": false,
>    "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [
> "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend",
> "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.",
> "Notice: Compiled catalog for ceph-0.localdomain in environment production in 2.13 seconds",
> "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_CephStorage1]/ensure: created",
> "Notice: /Stage[main]/Certmonger/Service[certmonger]/ensure: ensure changed 'stopped' to 'running'",
> "Notice: /Stage[main]/Tripleo::Certmonger::Ca::Local/Exec[extract-and-trust-ca]/returns: executed successfully",
> "Notice: /Stage[main]/Tripleo::Certmonger::Ca::Local/Exec[extract-and-trust-ca]: Triggered 'refresh' from 1 events",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Time::Ntp/Service[chronyd]/ensure: ensure changed 'running' to 'stopped'",
> "Notice: /Stage[main]/Ntp::Config/File[/etc/ntp.conf]/content: content changed '{md5}913c85f0fde85f83c2d6c030ecf259e9' to '{md5}c1d92fa159fef3afd721be5f86af886d'",
> "Notice: /Stage[main]/Ntp::Service/Service[ntp]/ensure: ensure changed 'stopped' to 'running'",
> "Notice: /Stage[main]/Timezone/File[/etc/localtime]/content: content changed '{md5}e4ca381035a34b7a852184cc0dd89baa' to '{md5}c79354b8dbee09e62bbc3fb544853283'",
> "Notice: /Stage[main]/Firewall::Linux::Redhat/Service[iptables]/ensure: ensure changed 'stopped' to 'running'",
> "Notice: /Stage[main]/Firewall::Linux::Redhat/Service[ip6tables]/ensure: ensure changed 'stopped' to 'running'",
> "Notice: /Stage[main]/Tripleo::Trusted_cas/Tripleo::Trusted_ca[undercloud-ca]/File[/etc/pki/ca-trust/source/anchors/undercloud-ca.pem]/ensure: defined content as '{md5}57d1d42823e0349b20bf232bc8b28408'",
> "Notice: /Stage[main]/Tripleo::Trusted_cas/Tripleo::Trusted_ca[undercloud-ca]/Exec[trust-ca-undercloud-ca]: Triggered 'refresh' from 1 events",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/File[/etc/sysconfig/modules/nf_conntrack.modules]/ensure: defined content as '{md5}69dc79067bb7ee8d7a8a12176ceddb02'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/File[/etc/sysconfig/modules/nf_conntrack_proto_sctp.modules]/ensure: defined content as '{md5}7dfc614157ed326e9943593a7aca37c9'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl[fs.inotify.max_user_instances]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl_runtime[fs.inotify.max_user_instances]/val: val changed '128' to '1024'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.suid_dumpable]/Sysctl[fs.suid_dumpable]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl[kernel.dmesg_restrict]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl_runtime[kernel.dmesg_restrict]/val: val changed '0' to '1'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl[kernel.pid_max]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl_runtime[kernel.pid_max]/val: val changed '32768' to '1048576'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl[net.core.netdev_max_backlog]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl_runtime[net.core.netdev_max_backlog]/val: val changed '1000' to '10000'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl[net.ipv4.conf.all.arp_accept]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl_runtime[net.ipv4.conf.all.arp_accept]/val: val changed '0' to '1'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl[net.ipv4.conf.all.log_martians]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl_runtime[net.ipv4.conf.all.log_martians]/val: val changed '0' to '1'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl[net.ipv4.conf.all.secure_redirects]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl_runtime[net.ipv4.conf.all.secure_redirects]/val: val changed '1' to '0'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl[net.ipv4.conf.all.send_redirects]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl_runtime[net.ipv4.conf.all.send_redirects]/val: val changed '1' to '0'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl[net.ipv4.conf.default.accept_redirects]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl_runtime[net.ipv4.conf.default.accept_redirects]/val: val changed '1' to '0'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl[net.ipv4.conf.default.log_martians]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl_runtime[net.ipv4.conf.default.log_martians]/val: val changed '0' to '1'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl[net.ipv4.conf.default.secure_redirects]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl_runtime[net.ipv4.conf.default.secure_redirects]/val: val changed '1' to '0'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl[net.ipv4.conf.default.send_redirects]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl_runtime[net.ipv4.conf.default.send_redirects]/val: val changed '1' to '0'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.ip_nonlocal_bind]/Sysctl[net.ipv4.ip_nonlocal_bind]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl[net.ipv4.neigh.default.gc_thresh1]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh1]/val: val changed '128' to '1024'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl[net.ipv4.neigh.default.gc_thresh2]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh2]/val: val changed '512' to '2048'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl[net.ipv4.neigh.default.gc_thresh3]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh3]/val: val changed '1024' to '4096'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl[net.ipv4.tcp_keepalive_intvl]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl_runtime[net.ipv4.tcp_keepalive_intvl]/val: val changed '75' to '1'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl[net.ipv4.tcp_keepalive_probes]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl_runtime[net.ipv4.tcp_keepalive_probes]/val: val changed '9' to '5'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl[net.ipv4.tcp_keepalive_time]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl_runtime[net.ipv4.tcp_keepalive_time]/val: val changed '7200' to '5'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl[net.ipv6.conf.all.accept_ra]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl_runtime[net.ipv6.conf.all.accept_ra]/val: val changed '1' to '0'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl[net.ipv6.conf.all.accept_redirects]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl_runtime[net.ipv6.conf.all.accept_redirects]/val: val changed '1' to '0'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl[net.ipv6.conf.all.autoconf]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl_runtime[net.ipv6.conf.all.autoconf]/val: val changed '1' to '0'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.disable_ipv6]/Sysctl[net.ipv6.conf.all.disable_ipv6]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl[net.ipv6.conf.default.accept_ra]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl_runtime[net.ipv6.conf.default.accept_ra]/val: val changed '1' to '0'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl[net.ipv6.conf.default.accept_redirects]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl_runtime[net.ipv6.conf.default.accept_redirects]/val: val changed '1' to '0'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl[net.ipv6.conf.default.autoconf]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl_runtime[net.ipv6.conf.default.autoconf]/val: val changed '1' to '0'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.disable_ipv6]/Sysctl[net.ipv6.conf.default.disable_ipv6]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.lo.disable_ipv6]/Sysctl[net.ipv6.conf.lo.disable_ipv6]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.ip_nonlocal_bind]/Sysctl[net.ipv6.ip_nonlocal_bind]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl[net.netfilter.nf_conntrack_max]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl_runtime[net.netfilter.nf_conntrack_max]/val: val changed '262144' to '500000'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl[net.nf_conntrack_max]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl_runtime[net.nf_conntrack_max]/val: val changed '262144' to '500000'",
> "Notice: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]/content: content changed '{md5}e9fa538db4f9b8222a5de59841d0dcf7' to '{md5}3534841fdb8db5b58d66600a60bf3759'",
> "Notice: /Stage[main]/Ssh::Server::Service/Service[sshd]: Triggered 'refresh' from 2 events",
> "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv4]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv6]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv4]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv6]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv4]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv6]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv4]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv6]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]/Firewall[004 accept ipv6 dhcpv6 ipv6]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_osd]/Tripleo::Firewall::Rule[111 ceph_osd]/Firewall[111 ceph_osd ipv4]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_osd]/Tripleo::Firewall::Rule[111 ceph_osd]/Firewall[111 ceph_osd ipv6]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv4]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv6]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv4]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv6]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv4]/ensure: created",
> "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv6]/ensure: created",
> "Notice: /Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/iptables]/seluser: seluser changed 'unconfined_u' to 'system_u'",
> "Notice: /Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/ip6tables]/seluser: seluser changed 'unconfined_u' to 'system_u'",
> "Notice: Applied catalog in 7.12 seconds",
> "Changes:",
> "              Total: 92",
> "Events:",
> "            Success: 92",
> "Resources:",
> "              Total: 134",
> "          Restarted: 3",
> "        Out of sync: 92",
> "            Changed: 92",
> "Time:",
> "    Concat fragment: 0.00",
> "        Concat file: 0.00",
> "           Schedule: 0.00",
> "             Anchor: 0.00",
> "               Cron: 0.00",
> "   Package manifest: 0.01",
> "             Augeas: 0.02",
> "             Sysctl: 0.06",
> "               File: 0.15",
> "     Sysctl runtime: 0.18",
> "            Package: 0.24",
> "            Service: 1.40",
> "           Firewall: 1.66",
> "               Exec: 1.98",
> "           Last run: 1537532485",
> "   Config retrieval: 2.51",
> "              Total: 8.21",
> "         Filebucket: 0.00",
> "Version:",
> "             Config: 1537532475",
> "             Puppet: 4.8.2",
> "Warning: Undefined variable '::deploy_config_name'; ",
> "   (file & line not available)",
> "Warning: Undefined variable 'deploy_config_name'; ",
> "Warning: This method is deprecated, please use the stdlib validate_legacy function,",
> "                   with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]",
> "   (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')",
> "                   with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]",
> "                   with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]",
> "                   with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]",
> "                   with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]",
> "                   with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]",
> "Warning: tag is a metaparam; this value will inherit to all contained resources in the tripleo::firewall::rule definition",
> "                   with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 148]:"
> ]
>}
>
>TASK [Run docker-puppet tasks (generate config) during step 1] *****************
>Friday 21 September 2018 08:22:38 -0400 (0:00:01.016) 0:06:00.547 ******
>ok: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>ok: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>
>ok: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>
>
>TASK [Debug output for task which failed: Run docker-puppet tasks (generate config) during step 1] ***
>Friday 21 September 2018 08:25:40 -0400 (0:03:01.992) 0:09:02.539 ******
>ok: [ceph-0] => {
>    "failed_when_result": false,
>    "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [
> "2018-09-21 12:22:38,421 INFO: 16436 -- Running docker-puppet",
> "2018-09-21 12:22:38,421 DEBUG: 16436 -- CONFIG: /var/lib/docker-puppet/docker-puppet.json",
> "2018-09-21 12:22:38,421 DEBUG: 16436 -- config_volume crond",
> "2018-09-21 12:22:38,422 DEBUG: 16436 -- puppet_tags ",
> "2018-09-21 12:22:38,422 DEBUG: 16436 -- manifest include ::tripleo::profile::base::logging::logrotate",
> "2018-09-21 12:22:38,422 DEBUG: 16436 -- config_image 192.168.24.1:8787/rhosp14/openstack-cron:2018-09-20.1",
> "2018-09-21 12:22:38,422 DEBUG: 16436 -- volumes []",
> "2018-09-21 12:22:38,422 DEBUG: 16436 -- Adding new service",
> "2018-09-21 12:22:38,422 INFO: 16436 -- Service compilation completed.",
> "2018-09-21 12:22:38,423 DEBUG: 16436 -- - [u'crond', 'file,file_line,concat,augeas,cron', u'include ::tripleo::profile::base::logging::logrotate', u'192.168.24.1:8787/rhosp14/openstack-cron:2018-09-20.1', []]",
> "2018-09-21 12:22:38,423 INFO: 16436 -- Starting multiprocess configuration steps. Using 3 processes.",
> "2018-09-21 12:22:38,435 INFO: 16437 -- Starting configuration of crond using image 192.168.24.1:8787/rhosp14/openstack-cron:2018-09-20.1",
> "2018-09-21 12:22:38,435 DEBUG: 16437 -- config_volume crond",
> "2018-09-21 12:22:38,435 DEBUG: 16437 -- puppet_tags file,file_line,concat,augeas,cron",
> "2018-09-21 12:22:38,435 DEBUG: 16437 -- manifest include ::tripleo::profile::base::logging::logrotate",
> "2018-09-21 12:22:38,436 DEBUG: 16437 -- config_image 192.168.24.1:8787/rhosp14/openstack-cron:2018-09-20.1",
> "2018-09-21 12:22:38,436 DEBUG: 16437 -- volumes []",
> "2018-09-21 12:22:38,437 INFO: 16437 -- Removing container: docker-puppet-crond",
> "2018-09-21 12:22:38,524 INFO: 16437 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-cron:2018-09-20.1",
> "2018-09-21 12:22:53,214 DEBUG: 16437 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-cron ... ",
> "2018-09-20.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-cron",
> "378837c0e24a: Pulling fs layer",
> "e17262bc2341: Pulling fs layer",
> "86a0e618a180: Pulling fs layer",
> "6e472b601f7f: Pulling fs layer",
> "6e472b601f7f: Waiting",
> "e17262bc2341: Verifying Checksum",
> "e17262bc2341: Download complete",
> "6e472b601f7f: Verifying Checksum",
> "6e472b601f7f: Download complete",
> "378837c0e24a: Verifying Checksum",
> "378837c0e24a: Download complete",
> "86a0e618a180: Download complete",
> "378837c0e24a: Pull complete",
> "e17262bc2341: Pull complete",
> "86a0e618a180: Pull complete",
> "6e472b601f7f: Pull complete",
> "Digest: sha256:a274bd4ffbb72a4ab1b5f789df588f93c632040cf5a8c84ccc51059d05f38637",
> "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-cron:2018-09-20.1",
> "",
> "2018-09-21 12:22:53,218 DEBUG: 16437 -- NET_HOST enabled",
> "2018-09-21 12:22:53,218 DEBUG: 16437 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-crond --env PUPPET_TAGS=file,file_line,concat,augeas,cron --env NAME=crond --env HOSTNAME=ceph-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmptx_nEr:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-cron:2018-09-20.1",
> "2018-09-21 12:23:02,807 DEBUG: 16437 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend",
> "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend",
> "Notice: Compiled catalog for ceph-0.localdomain in environment production in 0.60 seconds",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/File[/etc/logrotate-crond.conf]/ensure: defined content as '{md5}f121ac457cb6e71964450c8cbc0a2431'",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/Cron[logrotate-crond]/ensure: created",
> "Notice: Applied catalog in 0.04 seconds",
> "Changes:",
> "              Total: 2",
> "Events:",
> "            Success: 2",
> "Resources:",
> "            Changed: 2",
> "        Out of sync: 2",
> "            Skipped: 7",
> "              Total: 9",
> "Time:",
> "               File: 0.00",
> "               Cron: 0.01",
> "   Config retrieval: 0.70",
> "              Total: 0.71",
> "           Last run: 1537532581",
> "Version:",
> "             Config: 1537532580",
> "             Puppet: 4.8.2",
> "Gathering files modified after 2018-09-21 12:22:53.475557709 +0000",
> "2018-09-21 12:23:02,807 DEBUG: 16437 -- + mkdir -p /etc/puppet",
> "+ cp -a /tmp/puppet-etc/auth.conf /tmp/puppet-etc/hiera.yaml /tmp/puppet-etc/hieradata /tmp/puppet-etc/modules /tmp/puppet-etc/puppet.conf /tmp/puppet-etc/ssl /etc/puppet",
> "+ rm -Rf /etc/puppet/ssl",
> "+ echo '{\"step\": 6}'",
> "+ TAGS=",
> "+ '[' -n file,file_line,concat,augeas,cron ']'",
> "+ TAGS='--tags file,file_line,concat,augeas,cron'",
> "+ origin_of_time=/var/lib/config-data/crond.origin_of_time",
> "+ touch /var/lib/config-data/crond.origin_of_time",
> "+ sync",
> "+ set +e",
> "+ FACTER_hostname=ceph-0",
> "+ FACTER_uuid=docker",
> "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron /etc/config.pp",
> "Warning: Facter: Could not retrieve fact='nic_alias', resolution='<anonymous>': Could not execute '/usr/bin/os-net-config -i': command not found",
> "Warning: Undefined variable 'deploy_config_name'; ",
> "   (file & line not available)",
> "+ rc=2",
> "+ set -e",
> "+ '[' 2 -ne 2 -a 2 -ne 0 ']'",
> "+ '[' -z '' ']'",
> "+ archivedirs=(\"/etc\" \"/root\" \"/opt\" \"/var/lib/ironic/tftpboot\" \"/var/lib/ironic/httpboot\" \"/var/www\" \"/var/spool/cron\" \"/var/lib/nova/.ssh\")",
> "+ rsync_srcs=",
> "+ for d in '\"${archivedirs[@]}\"'",
> "+ '[' -d /etc ']'",
> "+ rsync_srcs+=' /etc'",
> "+ '[' -d /root ']'",
> "+ rsync_srcs+=' /root'",
> "+ '[' -d /opt ']'",
> "+ rsync_srcs+=' /opt'",
> "+ '[' -d /var/lib/ironic/tftpboot ']'",
> "+ '[' -d /var/lib/ironic/httpboot ']'",
> "+ '[' -d /var/www ']'",
> "+ '[' -d /var/spool/cron ']'",
> "+ rsync_srcs+=' /var/spool/cron'",
> "+ '[' -d /var/lib/nova/.ssh ']'",
> "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/crond",
> "++ stat -c %y /var/lib/config-data/crond.origin_of_time",
> "+ echo 'Gathering files modified after 2018-09-21 12:22:53.475557709 +0000'",
> "+ mkdir -p /var/lib/config-data/puppet-generated/crond",
> "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/crond",
> "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/crond.origin_of_time -not -path '/etc/puppet*' -print0",
> "+ EXCLUDE='--exclude=*/etc/swift/backups/* --exclude=*/etc/swift/*.ring.gz --exclude=*/etc/swift/*.builder --exclude=*/etc/libvirt/passwd.db'",
> "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/crond",
> "+ sed '/^#.*HEADER.*/d'",
> "+ tar xO",
> "+ md5sum",
> "tar: Removing leading `/' from member names",
> "+ awk '{print $1}'",
> "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/crond --mtime=1970-01-01",
> "2018-09-21 12:23:02,807 INFO: 16437 -- Removing container: docker-puppet-crond",
> "2018-09-21 12:23:02,848 DEBUG: 16437 -- docker-puppet-crond",
> "2018-09-21 12:23:02,849 INFO: 16437 -- Finished processing puppet configs for crond",
> "2018-09-21 12:23:02,850 DEBUG: 16436 -- CONFIG_VOLUME_PREFIX: /var/lib/config-data",
> "2018-09-21 12:23:02,850 DEBUG: 16436 -- STARTUP_CONFIG_PATTERN: /var/lib/tripleo-config/docker-container-startup-config-step_*.json",
> "2018-09-21 12:23:02,853 DEBUG: 16436 -- Looking for hashfile /var/lib/config-data/puppet-generated/crond.md5sum for config_volume /var/lib/config-data/puppet-generated/crond",
> "2018-09-21 12:23:02,853 DEBUG: 16436 -- Got hashfile /var/lib/config-data/puppet-generated/crond.md5sum for config_volume /var/lib/config-data/puppet-generated/crond",
> "2018-09-21 12:23:02,853 DEBUG: 16436 -- Updating config hash for logrotate_crond, config_volume=crond hash=6f2a5e23a896d70ebbc2c66d87cd9266"
> ]
>}
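
The "Updating config hash ..." line above is the tail of the md5 pipeline from the earlier trace: the generated config tree is tarred with a fixed mtime, generated header comments are stripped, and the digest is stored next to the config volume so that later steps can tell when a service's configuration actually changed. A condensed sketch of that checksum step, assuming a config tree under /var/lib/config-data/puppet-generated/crond:

#!/bin/bash
# Checksum a config tree deterministically: a fixed mtime keeps the tar
# stream stable, sed drops generated header comments, md5sum digests the rest.
tar -c --mtime=1970-01-01 -f - /var/lib/config-data/puppet-generated/crond \
  | tar -xOf - \
  | sed '/^#.*HEADER.*/d' \
  | md5sum | awk '{print $1}' \
  > /var/lib/config-data/puppet-generated/crond.md5sum
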
>ok: [compute-0] => {
>    "failed_when_result": false,
>    "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [
> "2018-09-21 12:22:38,376 INFO: 18570 -- Running docker-puppet",
> "2018-09-21 12:22:38,376 DEBUG: 18570 -- CONFIG: /var/lib/docker-puppet/docker-puppet.json",
> "2018-09-21 12:22:38,376 DEBUG: 18570 -- config_volume ceilometer",
> "2018-09-21 12:22:38,377 DEBUG: 18570 -- puppet_tags ceilometer_config",
> "2018-09-21 12:22:38,377 DEBUG: 18570 -- manifest include ::tripleo::profile::base::ceilometer::agent::polling",
> "",
> "2018-09-21 12:22:38,377 DEBUG: 18570 -- config_image 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-20.1",
> "2018-09-21 12:22:38,377 DEBUG: 18570 -- volumes []",
> "2018-09-21 12:22:38,377 DEBUG: 18570 -- Adding new service",
> "2018-09-21 12:22:38,377 DEBUG: 18570 -- config_volume neutron",
> "2018-09-21 12:22:38,377 DEBUG: 18570 -- puppet_tags neutron_plugin_ml2",
> "2018-09-21 12:22:38,377 DEBUG: 18570 -- manifest include ::tripleo::profile::base::neutron::plugins::ml2",
> "2018-09-21 12:22:38,377 DEBUG: 18570 -- config_image 192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-20.1",
> "2018-09-21 12:22:38,377 DEBUG: 18570 -- puppet_tags neutron_config,neutron_agent_ovs,neutron_plugin_ml2",
> "2018-09-21 12:22:38,377 DEBUG: 18570 -- manifest include ::tripleo::profile::base::neutron::ovs",
> "2018-09-21 12:22:38,377 DEBUG: 18570 -- volumes [u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch']",
> "2018-09-21 12:22:38,377 DEBUG: 18570 -- Existing service, appending puppet tags and manifest",
> "2018-09-21 12:22:38,378 DEBUG: 18570 -- config_volume iscsid",
> "2018-09-21 12:22:38,378 DEBUG: 18570 -- puppet_tags iscsid_config",
> "2018-09-21 12:22:38,378 DEBUG: 18570 -- manifest include ::tripleo::profile::base::iscsid",
> "2018-09-21 12:22:38,378 DEBUG: 18570 -- config_image 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-20.1",
> "2018-09-21 12:22:38,378 DEBUG: 18570 -- volumes [u'/etc/iscsi:/etc/iscsi']",
> "2018-09-21 12:22:38,378 DEBUG: 18570 -- Adding new service",
> "2018-09-21 12:22:38,378 DEBUG: 18570 -- config_volume nova_libvirt",
> "2018-09-21 12:22:38,378 DEBUG: 18570 -- puppet_tags nova_config,nova_paste_api_ini",
> "2018-09-21 12:22:38,378 DEBUG: 18570 -- manifest # TODO(emilien): figure how to deal with libvirt profile.",
> "# We'll probably treat it like we do with Neutron plugins.",
> "# Until then, just include it in the default nova-compute role.",
> "include tripleo::profile::base::nova::compute::libvirt",
> "include ::tripleo::profile::base::database::mysql::client",
> "2018-09-21 12:22:38,378 DEBUG: 18570 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-09-20.1",
> "2018-09-21 12:22:38,378 DEBUG: 18570 -- volumes []",
> "2018-09-21 12:22:38,378 DEBUG: 18570 -- puppet_tags libvirtd_config,nova_config,file,libvirt_tls_password",
> "2018-09-21 12:22:38,378 DEBUG: 18570 -- manifest include tripleo::profile::base::nova::libvirt",
> "2018-09-21 12:22:38,378 DEBUG: 18570 -- Existing service, appending puppet tags and manifest",
> "2018-09-21 12:22:38,379 DEBUG: 18570 -- puppet_tags ",
> "2018-09-21 12:22:38,379 DEBUG: 18570 -- manifest include ::tripleo::profile::base::sshd",
> "include tripleo::profile::base::nova::migration::target",
> "2018-09-21 12:22:38,379 DEBUG: 18570 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-09-20.1",
> "2018-09-21 12:22:38,379 DEBUG: 18570 -- volumes []",
> "2018-09-21 12:22:38,379 DEBUG: 18570 -- Existing service, appending puppet tags and manifest",
> "2018-09-21 12:22:38,379 DEBUG: 18570 -- config_volume crond",
> "2018-09-21 12:22:38,379 DEBUG: 18570 -- manifest include ::tripleo::profile::base::logging::logrotate",
> "2018-09-21 12:22:38,379 DEBUG: 18570 -- config_image 192.168.24.1:8787/rhosp14/openstack-cron:2018-09-20.1",
> "2018-09-21 12:22:38,379 DEBUG: 18570 -- Adding new service",
> "2018-09-21 12:22:38,379 INFO: 18570 -- Service compilation completed.",
> "2018-09-21 12:22:38,380 DEBUG: 18570 -- - [u'ceilometer', u'file,file_line,concat,augeas,cron,ceilometer_config', u'include ::tripleo::profile::base::ceilometer::agent::polling\\n', u'192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-20.1', []]",
> "2018-09-21 12:22:38,380 DEBUG: 18570 -- - [u'nova_libvirt', u'file,file_line,concat,augeas,cron,nova_config,nova_paste_api_ini,libvirtd_config,nova_config,file,libvirt_tls_password', u\"# TODO(emilien): figure how to deal with libvirt profile.\\n# We'll probably treat it like we do with Neutron plugins.\\n# Until then, just include it in the default nova-compute role.\\ninclude tripleo::profile::base::nova::compute::libvirt\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::nova::libvirt\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude ::tripleo::profile::base::sshd\\ninclude tripleo::profile::base::nova::migration::target\", u'192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-09-20.1', []]",
> "2018-09-21 12:22:38,380 DEBUG: 18570 -- - [u'crond', 'file,file_line,concat,augeas,cron', u'include ::tripleo::profile::base::logging::logrotate', u'192.168.24.1:8787/rhosp14/openstack-cron:2018-09-20.1', []]",
> "2018-09-21 12:22:38,380 DEBUG: 18570 -- - [u'neutron', u'file,file_line,concat,augeas,cron,neutron_plugin_ml2,neutron_config,neutron_agent_ovs,neutron_plugin_ml2', u'include ::tripleo::profile::base::neutron::plugins::ml2\\n\\ninclude ::tripleo::profile::base::neutron::ovs\\n', u'192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-20.1', [u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch']]",
> "2018-09-21 12:22:38,380 DEBUG: 18570 -- - [u'iscsid', u'file,file_line,concat,augeas,cron,iscsid_config', u'include ::tripleo::profile::base::iscsid', u'192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-20.1', [u'/etc/iscsi:/etc/iscsi']]",
> "2018-09-21 12:22:38,380 INFO: 18570 -- Starting multiprocess configuration steps. Using 3 processes.",
> "2018-09-21 12:22:38,391 INFO: 18571 -- Starting configuration of ceilometer using image 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-20.1",
> "2018-09-21 12:22:38,392 DEBUG: 18571 -- config_volume ceilometer",
> "2018-09-21 12:22:38,392 DEBUG: 18571 -- puppet_tags file,file_line,concat,augeas,cron,ceilometer_config",
> "2018-09-21 12:22:38,391 INFO: 18572 -- Starting configuration of nova_libvirt using image 192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-09-20.1",
> "2018-09-21 12:22:38,392 DEBUG: 18571 -- manifest include ::tripleo::profile::base::ceilometer::agent::polling",
> "2018-09-21 12:22:38,392 DEBUG: 18571 -- config_image 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-20.1",
> "2018-09-21 12:22:38,392 DEBUG: 18572 -- config_volume nova_libvirt",
> "2018-09-21 12:22:38,392 DEBUG: 18571 -- volumes []",
> "2018-09-21 12:22:38,392 DEBUG: 18572 -- puppet_tags file,file_line,concat,augeas,cron,nova_config,nova_paste_api_ini,libvirtd_config,nova_config,file,libvirt_tls_password",
> "2018-09-21 12:22:38,392 DEBUG: 18572 -- manifest # TODO(emilien): figure how to deal with libvirt profile.",
> "include tripleo::profile::base::nova::libvirt",
> "include ::tripleo::profile::base::sshd",
> "2018-09-21 12:22:38,392 DEBUG: 18572 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-09-20.1",
> "2018-09-21 12:22:38,392 DEBUG: 18572 -- volumes []",
> "2018-09-21 12:22:38,393 INFO: 18571 -- Removing container: docker-puppet-ceilometer",
> "2018-09-21 12:22:38,393 INFO: 18572 -- Removing container: docker-puppet-nova_libvirt",
> "2018-09-21 12:22:38,394 INFO: 18573 -- Starting configuration of crond using image 192.168.24.1:8787/rhosp14/openstack-cron:2018-09-20.1",
> "2018-09-21 12:22:38,394 DEBUG: 18573 -- config_volume crond",
> "2018-09-21 12:22:38,394 DEBUG: 18573 -- puppet_tags file,file_line,concat,augeas,cron",
> "2018-09-21 12:22:38,394 DEBUG: 18573 -- manifest include ::tripleo::profile::base::logging::logrotate",
> "2018-09-21 12:22:38,394 DEBUG: 18573 -- config_image 192.168.24.1:8787/rhosp14/openstack-cron:2018-09-20.1",
> "2018-09-21 12:22:38,394 DEBUG: 18573 -- volumes []",
> "2018-09-21 12:22:38,395 INFO: 18573 -- Removing container: docker-puppet-crond",
> "2018-09-21 12:22:38,477 INFO: 18573 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-cron:2018-09-20.1",
> "2018-09-21 12:22:38,502 INFO: 18571 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-20.1",
> "2018-09-21 12:22:38,508 INFO: 18572 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-09-20.1",
> "2018-09-21 12:22:53,303 DEBUG: 18573 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-cron ... ",
> "2018-09-20.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-cron",
> "378837c0e24a: Pulling fs layer",
> "e17262bc2341: Pulling fs layer",
> "86a0e618a180: Pulling fs layer",
> "6e472b601f7f: Pulling fs layer",
> "e17262bc2341: Verifying Checksum",
> "e17262bc2341: Download complete",
> "6e472b601f7f: Verifying Checksum",
> "6e472b601f7f: Download complete",
> "378837c0e24a: Verifying Checksum",
> "378837c0e24a: Download complete",
> "86a0e618a180: Verifying Checksum",
> "86a0e618a180: Download complete",
> "378837c0e24a: Pull complete",
> "e17262bc2341: Pull complete",
> "86a0e618a180: Pull complete",
> "6e472b601f7f: Pull complete",
> "Digest: sha256:a274bd4ffbb72a4ab1b5f789df588f93c632040cf5a8c84ccc51059d05f38637",
> "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-cron:2018-09-20.1",
> "2018-09-21 12:22:53,306 DEBUG: 18573 -- NET_HOST enabled",
> "2018-09-21 12:22:53,306 DEBUG: 18573 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-crond --env PUPPET_TAGS=file,file_line,concat,augeas,cron --env NAME=crond --env HOSTNAME=compute-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpSmS1EM:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-cron:2018-09-20.1",
", > "2018-09-20.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-ceilometer-central", > "dfa58d50e0a3: Pulling fs layer", > "39327dc96373: Pulling fs layer", > "7c3fd050f245: Pulling fs layer", > "dfa58d50e0a3: Waiting", > "39327dc96373: Waiting", > "7c3fd050f245: Waiting", > "dfa58d50e0a3: Verifying Checksum", > "dfa58d50e0a3: Download complete", > "39327dc96373: Verifying Checksum", > "39327dc96373: Download complete", > "7c3fd050f245: Verifying Checksum", > "7c3fd050f245: Download complete", > "dfa58d50e0a3: Pull complete", > "39327dc96373: Pull complete", > "7c3fd050f245: Pull complete", > "Digest: sha256:a31d498553693f09b6d3e9a981237ed4c4a1f12a50cbb87cfadb74c9b99a5f63", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-20.1", > "2018-09-21 12:22:59,304 DEBUG: 18571 -- NET_HOST enabled", > "2018-09-21 12:22:59,304 DEBUG: 18571 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-ceilometer --env PUPPET_TAGS=file,file_line,concat,augeas,cron,ceilometer_config --env NAME=ceilometer --env HOSTNAME=compute-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpHwEiBJ:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-20.1", > "2018-09-21 12:23:03,060 DEBUG: 18573 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for compute-0.localdomain in environment production in 0.65 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/File[/etc/logrotate-crond.conf]/ensure: defined content as '{md5}f121ac457cb6e71964450c8cbc0a2431'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/Cron[logrotate-crond]/ensure: created", > "Notice: Applied catalog in 0.04 seconds", > "Changes:", > " Total: 2", > "Events:", > " Success: 2", > "Resources:", > " Changed: 2", > " Out of sync: 2", > " Skipped: 7", > " Total: 9", > "Time:", > " File: 0.01", > " Cron: 0.01", > " Config retrieval: 0.77", > " Total: 0.78", > " Last run: 1537532581", > "Version:", > " Config: 1537532580", > " Puppet: 4.8.2", > "Gathering files modified after 2018-09-21 12:22:53.658292870 +0000", > "2018-09-21 12:23:03,060 DEBUG: 18573 -- + mkdir -p /etc/puppet", > "+ cp -a /tmp/puppet-etc/auth.conf /tmp/puppet-etc/hiera.yaml /tmp/puppet-etc/hieradata /tmp/puppet-etc/modules /tmp/puppet-etc/puppet.conf /tmp/puppet-etc/ssl /etc/puppet", > "+ rm -Rf /etc/puppet/ssl", > "+ echo '{\"step\": 6}'", > "+ TAGS=", > "+ '[' -n file,file_line,concat,augeas,cron ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron'", > "+ 
origin_of_time=/var/lib/config-data/crond.origin_of_time", > "+ touch /var/lib/config-data/crond.origin_of_time", > "+ sync", > "+ set +e", > "+ FACTER_hostname=compute-0", > "+ FACTER_uuid=docker", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron /etc/config.pp", > "Warning: Facter: Could not retrieve fact='nic_alias', resolution='<anonymous>': Could not execute '/usr/bin/os-net-config -i': command not found", > "Warning: Undefined variable 'deploy_config_name'; ", > " (file & line not available)", > "+ rc=2", > "+ set -e", > "+ '[' 2 -ne 2 -a 2 -ne 0 ']'", > "+ '[' -z '' ']'", > "+ archivedirs=(\"/etc\" \"/root\" \"/opt\" \"/var/lib/ironic/tftpboot\" \"/var/lib/ironic/httpboot\" \"/var/www\" \"/var/spool/cron\" \"/var/lib/nova/.ssh\")", > "+ rsync_srcs=", > "+ for d in '\"${archivedirs[@]}\"'", > "+ '[' -d /etc ']'", > "+ rsync_srcs+=' /etc'", > "+ '[' -d /root ']'", > "+ rsync_srcs+=' /root'", > "+ '[' -d /opt ']'", > "+ rsync_srcs+=' /opt'", > "+ '[' -d /var/lib/ironic/tftpboot ']'", > "+ '[' -d /var/lib/ironic/httpboot ']'", > "+ '[' -d /var/www ']'", > "+ '[' -d /var/spool/cron ']'", > "+ rsync_srcs+=' /var/spool/cron'", > "+ '[' -d /var/lib/nova/.ssh ']'", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/crond", > "++ stat -c %y /var/lib/config-data/crond.origin_of_time", > "+ echo 'Gathering files modified after 2018-09-21 12:22:53.658292870 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/crond", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/crond", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/crond.origin_of_time -not -path '/etc/puppet*' -print0", > "+ EXCLUDE='--exclude=*/etc/swift/backups/* --exclude=*/etc/swift/*.ring.gz --exclude=*/etc/swift/*.builder --exclude=*/etc/libvirt/passwd.db'", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/crond", > "+ md5sum", > "+ tar xO", > "tar: Removing leading `/' from member names", > "+ awk '{print $1}'", > "+ sed '/^#.*HEADER.*/d'", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/crond --mtime=1970-01-01", > "2018-09-21 12:23:03,060 INFO: 18573 -- Removing container: docker-puppet-crond", > "2018-09-21 12:23:03,119 DEBUG: 18573 -- docker-puppet-crond", > "2018-09-21 12:23:03,119 INFO: 18573 -- Finished processing puppet configs for crond", > "2018-09-21 12:23:03,121 INFO: 18573 -- Starting configuration of neutron using image 192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-20.1", > "2018-09-21 12:23:03,121 DEBUG: 18573 -- config_volume neutron", > "2018-09-21 12:23:03,121 DEBUG: 18573 -- puppet_tags file,file_line,concat,augeas,cron,neutron_plugin_ml2,neutron_config,neutron_agent_ovs,neutron_plugin_ml2", > "2018-09-21 12:23:03,121 DEBUG: 18573 -- manifest include ::tripleo::profile::base::neutron::plugins::ml2", > "include ::tripleo::profile::base::neutron::ovs", > "2018-09-21 12:23:03,121 DEBUG: 18573 -- config_image 
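
The origin_of_time dance in the trace above is how the script works out which files puppet touched: it stamps a marker file before the run, then collects everything newer than the marker into the puppet-generated tree. The same pattern in isolation (directory names as in the log):

#!/bin/bash
marker=/var/lib/config-data/crond.origin_of_time
touch "$marker"; sync
# ... run the configuration step here ...
mkdir -p /var/lib/config-data/puppet-generated/crond
# Copy only files modified after the marker, NUL-separated for safety.
find /etc /root /opt /var/spool/cron -newer "$marker" \
     -not -path '/etc/puppet*' -print0 \
  | rsync -a -R -0 --delay-updates --delete-after --files-from=- \
      / /var/lib/config-data/puppet-generated/crond
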
192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-20.1", > "2018-09-21 12:23:03,121 DEBUG: 18573 -- volumes [u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch']", > "2018-09-21 12:23:03,123 INFO: 18573 -- Removing container: docker-puppet-neutron", > "2018-09-21 12:23:03,224 INFO: 18573 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-20.1", > "2018-09-21 12:23:09,710 DEBUG: 18571 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for compute-0.localdomain in environment production in 1.11 seconds", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/http_timeout]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/host]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[publisher/telemetry_secret]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_password]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/auth_url]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/region_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/username]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/password]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/project_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/auth_type]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/interface]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[DEFAULT/polling_namespaces]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[coordination/backend_url]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Rabbit[ceilometer_config]/Ceilometer_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Rabbit[ceilometer_config]/Ceilometer_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/topics]/ensure: created", > "Notice: 
/Stage[main]/Ceilometer/Oslo::Messaging::Default[ceilometer_config]/Ceilometer_config[DEFAULT/transport_url]/ensure: created", > "Notice: Applied catalog in 0.66 seconds", > " Total: 24", > " Success: 24", > " Total: 139", > " Skipped: 22", > " Out of sync: 24", > " Changed: 24", > " Ceilometer config: 0.55", > " Config retrieval: 1.28", > " Total: 1.84", > " Last run: 1537532588", > " Resources: 0.00", > " Config: 1537532586", > "Gathering files modified after 2018-09-21 12:22:59.523405757 +0000", > "2018-09-21 12:23:09,710 DEBUG: 18571 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,ceilometer_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,ceilometer_config'", > "+ origin_of_time=/var/lib/config-data/ceilometer.origin_of_time", > "+ touch /var/lib/config-data/ceilometer.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,ceilometer_config /etc/config.pp", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ceilometer/manifests/config.pp\", 35]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/ceilometer.pp\", 111]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > "Warning: Scope(Class[Ceilometer::Dispatcher::Gnocchi]): The class ceilometer::dispatcher::gnocchi is deprecated. All its", > " options must be set as url parameters in", > " ceilometer::agent::notification::pipeline_publishers. Depending of the used", > " Gnocchi version their might be ignored.", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/ceilometer", > "++ stat -c %y /var/lib/config-data/ceilometer.origin_of_time", > "+ echo 'Gathering files modified after 2018-09-21 12:22:59.523405757 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/ceilometer", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/ceilometer", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/ceilometer.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/ceilometer", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/ceilometer --mtime=1970-01-01", > "2018-09-21 12:23:09,710 INFO: 18571 -- Removing container: docker-puppet-ceilometer", > "2018-09-21 12:23:09,762 DEBUG: 18571 -- docker-puppet-ceilometer", > "2018-09-21 12:23:09,762 INFO: 18571 -- Finished processing puppet configs for ceilometer", > "2018-09-21 12:23:09,762 INFO: 18571 -- Starting configuration of iscsid using image 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-20.1", > "2018-09-21 12:23:09,762 DEBUG: 18571 -- config_volume iscsid", > "2018-09-21 12:23:09,762 DEBUG: 18571 -- puppet_tags file,file_line,concat,augeas,cron,iscsid_config", > "2018-09-21 12:23:09,762 DEBUG: 18571 -- manifest include ::tripleo::profile::base::iscsid", > "2018-09-21 
12:23:09,762 DEBUG: 18571 -- config_image 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-20.1", > "2018-09-21 12:23:09,763 DEBUG: 18571 -- volumes [u'/etc/iscsi:/etc/iscsi']", > "2018-09-21 12:23:09,764 INFO: 18571 -- Removing container: docker-puppet-iscsid", > "2018-09-21 12:23:09,857 INFO: 18571 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-20.1", > "2018-09-21 12:23:10,770 DEBUG: 18571 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-iscsid ... ", > "2018-09-20.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-iscsid", > "378837c0e24a: Already exists", > "e17262bc2341: Already exists", > "86a0e618a180: Already exists", > "a1f7d1c27dcc: Pulling fs layer", > "a1f7d1c27dcc: Verifying Checksum", > "a1f7d1c27dcc: Download complete", > "a1f7d1c27dcc: Pull complete", > "Digest: sha256:b011eb80fb9a37540d4134254671d70667eedc22278e2d3d9d0e5bd1c8c9316f", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-20.1", > "2018-09-21 12:23:10,773 DEBUG: 18571 -- NET_HOST enabled", > "2018-09-21 12:23:10,773 DEBUG: 18571 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-iscsid --env PUPPET_TAGS=file,file_line,concat,augeas,cron,iscsid_config --env NAME=iscsid --env HOSTNAME=compute-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpiWPgC_:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --volume /etc/iscsi:/etc/iscsi --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-20.1", > "2018-09-21 12:23:11,268 DEBUG: 18573 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight ... 
", > "2018-09-20.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight", > "dfa58d50e0a3: Already exists", > "763394b9c1e7: Pulling fs layer", > "055e8a682563: Pulling fs layer", > "5fd420ae7fff: Pulling fs layer", > "5fd420ae7fff: Verifying Checksum", > "5fd420ae7fff: Download complete", > "055e8a682563: Verifying Checksum", > "055e8a682563: Download complete", > "763394b9c1e7: Verifying Checksum", > "763394b9c1e7: Download complete", > "763394b9c1e7: Pull complete", > "055e8a682563: Pull complete", > "5fd420ae7fff: Pull complete", > "Digest: sha256:bc9fc3e332047433fa698e663155925985c4c0834382d5fcd1ec6adffc77277c", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-20.1", > "2018-09-21 12:23:11,271 DEBUG: 18573 -- NET_HOST enabled", > "2018-09-21 12:23:11,271 DEBUG: 18573 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-neutron --env PUPPET_TAGS=file,file_line,concat,augeas,cron,neutron_plugin_ml2,neutron_config,neutron_agent_ovs,neutron_plugin_ml2 --env NAME=neutron --env HOSTNAME=compute-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpWcIC5T:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-20.1", > "2018-09-21 12:23:15,917 DEBUG: 18572 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-nova-compute ... 
", > "2018-09-20.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-nova-compute", > "2d54cceaa5bd: Pulling fs layer", > "43cc12582395: Pulling fs layer", > "2d54cceaa5bd: Waiting", > "43cc12582395: Waiting", > "2d54cceaa5bd: Verifying Checksum", > "2d54cceaa5bd: Download complete", > "43cc12582395: Verifying Checksum", > "43cc12582395: Download complete", > "2d54cceaa5bd: Pull complete", > "43cc12582395: Pull complete", > "Digest: sha256:c49ed2eab44a15c9d51f297ff1ff118a16bc0843b2066cd82d2d34f560dc605c", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-09-20.1", > "2018-09-21 12:23:15,920 DEBUG: 18572 -- NET_HOST enabled", > "2018-09-21 12:23:15,920 DEBUG: 18572 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-nova_libvirt --env PUPPET_TAGS=file,file_line,concat,augeas,cron,nova_config,nova_paste_api_ini,libvirtd_config,nova_config,file,libvirt_tls_password --env NAME=nova_libvirt --env HOSTNAME=compute-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmp1ReeaX:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-09-20.1", > "2018-09-21 12:23:19,252 DEBUG: 18571 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for compute-0.localdomain in environment production in 0.50 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Exec[reset-iscsi-initiator-name]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/File[/etc/iscsi/.initiator_reset]/ensure: created", > "Notice: Applied catalog in 0.08 seconds", > " Total: 10", > " Skipped: 8", > " File: 0.00", > " Exec: 0.02", > " Config retrieval: 0.57", > " Total: 0.60", > " Last run: 1537532597", > " Config: 1537532597", > "Gathering files modified after 2018-09-21 12:23:10.986619075 +0000", > "2018-09-21 12:23:19,252 DEBUG: 18571 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,iscsid_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,iscsid_config'", > "+ origin_of_time=/var/lib/config-data/iscsid.origin_of_time", > "+ touch /var/lib/config-data/iscsid.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,iscsid_config /etc/config.pp", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/iscsid", > "++ stat -c %y /var/lib/config-data/iscsid.origin_of_time", > "+ echo 'Gathering files modified after 2018-09-21 12:23:10.986619075 +0000'", > "+ mkdir -p 
/var/lib/config-data/puppet-generated/iscsid", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/iscsid", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/iscsid.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/iscsid", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/iscsid --mtime=1970-01-01", > "2018-09-21 12:23:19,252 INFO: 18571 -- Removing container: docker-puppet-iscsid", > "2018-09-21 12:23:19,336 DEBUG: 18571 -- docker-puppet-iscsid", > "2018-09-21 12:23:19,336 INFO: 18571 -- Finished processing puppet configs for iscsid", > "2018-09-21 12:23:22,960 DEBUG: 18573 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for compute-0.localdomain in environment production in 2.49 seconds", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/auth_strategy]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/core_plugin]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/host]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dns_domain]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agents_per_network]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agent_notification]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/allow_overlapping_ips]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/global_physnet_mtu]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[agent/root_helper]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/service_plugins]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/File[/etc/neutron/plugin.ini]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/File[/etc/default/neutron-server]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/type_drivers]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/tenant_network_types]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/mechanism_drivers]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/path_mtu]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/extension_drivers]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/overlay_ip_version]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[securitygroup/firewall_driver]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/bridge_mappings]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/l2_population]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/arp_responder]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/enable_distributed_routing]/ensure: created", > "Notice: 
/Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/drop_flows_on_start]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/extensions]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/integration_bridge]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[securitygroup/firewall_driver]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/tunnel_bridge]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/local_ip]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/tunnel_types]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/vxlan_udp_port]/ensure: created", > "Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/control_exchange]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Concurrency[neutron_config]/Neutron_config[oslo_concurrency/lock_path]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vxlan]/Neutron_plugin_ml2[ml2_type_vxlan/vxlan_group]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vxlan]/Neutron_plugin_ml2[ml2_type_vxlan/vni_ranges]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vlan]/Neutron_plugin_ml2[ml2_type_vlan/network_vlan_ranges]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[flat]/Neutron_plugin_ml2[ml2_type_flat/flat_networks]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[gre]/Neutron_plugin_ml2[ml2_type_gre/tunnel_id_ranges]/ensure: created", > "Notice: Applied catalog in 0.81 seconds", > " Total: 45", > " Success: 45", > " Total: 174", > " Skipped: 27", > " Out of sync: 45", > " Changed: 45", > " Neutron agent ovs: 0.02", > " Neutron plugin ml2: 0.03", > " Neutron config: 0.60", > " Last run: 1537532601", > " Config retrieval: 2.73", > " Total: 3.39", > "Gathering files modified after 2018-09-21 12:23:11.537629081 +0000", > "2018-09-21 12:23:22,961 DEBUG: 18573 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,neutron_plugin_ml2,neutron_config,neutron_agent_ovs,neutron_plugin_ml2 ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,neutron_plugin_ml2,neutron_config,neutron_agent_ovs,neutron_plugin_ml2'", > "+ 
origin_of_time=/var/lib/config-data/neutron.origin_of_time",
> "+ touch /var/lib/config-data/neutron.origin_of_time",
> "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,neutron_plugin_ml2,neutron_config,neutron_agent_ovs,neutron_plugin_ml2 /etc/config.pp",
> "PMD: net_mlx5: cannot load glue library: libibverbs.so.1: cannot open shared object file: No such file or directory",
> "PMD: net_mlx5: cannot initialize PMD due to missing run-time dependency on rdma-core libraries (libibverbs, libmlx5)",
> "PMD: net_mlx4: cannot load glue library: libibverbs.so.1: cannot open shared object file: No such file or directory",
> "PMD: net_mlx4: cannot initialize PMD due to missing run-time dependency on rdma-core libraries (libibverbs, libmlx4)",
> "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Array instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/neutron/manifests/init.pp\", 486]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/neutron/plugins/ml2.pp\", 53]",
> " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/neutron/manifests/config.pp\", 136]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/neutron.pp\", 141]",
> " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/neutron/manifests/agents/ml2/ovs.pp\", 208]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/neutron/ovs.pp\", 59]",
> "+ rsync_srcs+=' /var/www'",
> "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/neutron",
> "++ stat -c %y /var/lib/config-data/neutron.origin_of_time",
> "+ echo 'Gathering files modified after 2018-09-21 12:23:11.537629081 +0000'",
> "+ mkdir -p /var/lib/config-data/puppet-generated/neutron",
> "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/neutron",
> "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/neutron.origin_of_time -not -path '/etc/puppet*' -print0",
> "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/neutron",
> "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/neutron --mtime=1970-01-01",
> "2018-09-21 12:23:22,961 INFO: 18573 -- Removing container: docker-puppet-neutron",
> "2018-09-21 12:23:23,008 DEBUG: 18573 -- docker-puppet-neutron",
> "2018-09-21 12:23:23,009 INFO: 18573 -- Finished processing puppet configs for neutron",
> "2018-09-21 12:23:36,924 DEBUG: 18572 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend",
> "Notice: Compiled catalog for compute-0.localdomain in environment production in 2.63 seconds",
> "Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Client/File[/etc/nova/migration/identity]/content: content changed '{md5}056b96e7e8124e1bc55f77cba4e68ce7' to '{md5}7124a0c99ee79207caf2cb6411fe901d'",
> 
"Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Client/File_line[nova_ssh_port]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Libvirt/File[/etc/sasl2/libvirt.conf]/content: content changed '{md5}09c4fa846e8e27bfa3ab3325900d63ea' to '{md5}2f138c0278e1b666ec77a6d8ba3054a1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Libvirt/Exec[set libvirt sasl credentials]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Target/File[/etc/nova/migration/authorized_keys]/content: content changed '{md5}dff145cb4e519333c0096aae8de2e77c' to '{md5}d6220e899390dec5a66a43f99bde4ee2'", > "Notice: /Stage[main]/Nova::Db/Nova_config[api_database/connection]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Nova_config[placement_database/connection]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[glance/api_servers]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/ssl_only]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/my_ip]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[api/auth_strategy]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/image_service]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/host]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[cinder/catalog_info]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[os_vif_linux_bridge/use_ipv6]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[notifications/notify_on_api_faults]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[notifications/notification_format]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/state_path]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/service_down_time]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/rootwrap_config]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/report_interval]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[notifications/notify_on_state_change]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_type]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_url]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/password]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/username]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/region_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/os_interface]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/reserved_host_memory_mb]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/heal_instance_info_cache_interval]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/resume_guests_state_on_host_boot]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[key_manager/backend]/ensure: created", > "Notice: 
/Stage[main]/Nova::Compute/Nova_config[DEFAULT/sync_power_state_interval]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[compute/consecutive_build_service_disable_threshold]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/allow_resize_to_same_host]/ensure: created", > "Notice: /Stage[main]/Nova::Vncproxy::Common/Nova_config[vnc/novncproxy_base_url]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[vnc/vncserver_proxyclient_address]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[vnc/keymap]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[vnc/enabled]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[spice/enabled]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/instance_usage_audit]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/instance_usage_audit_period]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/force_raw_images]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[glance/verify_glance_signatures]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/dhcp_domain]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/firewall_driver]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_is_fatal]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_timeout]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/default_floating_pool]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/url]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/timeout]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/region_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/username]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/password]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_url]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/ovs_bridge]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/extension_sync_interval]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_type]/ensure: created", > "Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_uri]/ensure: created", > "Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_inbound_addr]/ensure: created", > "Notice: /Stage[main]/Nova::Migration::Libvirt/Libvirtd_config[listen_tls]/ensure: created", > "Notice: /Stage[main]/Nova::Migration::Libvirt/Libvirtd_config[listen_tcp]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/rbd_user]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/rbd_secret_uuid]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Rbd/File[/etc/nova/secret.xml]/ensure: defined 
content as '{md5}14daf834c34933cae42026eaaa6c3976'", > "Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_type]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_pool]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_ceph_conf]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[DEFAULT/compute_driver]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[vnc/vncserver_listen]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/virt_type]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/cpu_mode]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_password]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_key]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_partition]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/hw_disk_discard]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/enabled_perf_events]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/rx_queue_size]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/tx_queue_size]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/disk_cachemodes]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Libvirtd_config[unix_sock_group]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Libvirtd_config[auth_unix_ro]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Libvirtd_config[auth_unix_rw]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Libvirtd_config[unix_sock_ro_perms]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Libvirtd_config[unix_sock_rw_perms]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt::Qemu/Augeas[qemu-conf-limits]/returns: executed successfully", > "Notice: /Stage[main]/Nova::Migration::Qemu/Augeas[qemu-conf-migration-ports]/returns: executed successfully", > "Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/backend]/ensure: created", > "Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/enabled]/ensure: created", > "Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/memcache_servers]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Oslo::Db[nova_config]/Nova_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Oslo::Db[nova_config]/Nova_config[database/max_retries]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Oslo::Db[nova_config]/Nova_config[database/db_max_retries]/ensure: created", > "Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Default[nova_config]/Nova_config[DEFAULT/transport_url]/ensure: 
created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Concurrency[nova_config]/Nova_config[oslo_concurrency/lock_path]/ensure: created", > "Notice: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]/content: content changed '{md5}40d961cd3154f0439fcac1a50bd77b96' to '{md5}3cd0eede37c506c8fc9deb3d490657e1'", > "Notice: Applied catalog in 9.10 seconds", > " Total: 109", > " Success: 109", > " Changed: 109", > " Out of sync: 109", > " Total: 325", > " Skipped: 48", > " Concat file: 0.00", > " Concat fragment: 0.00", > " File line: 0.00", > " Exec: 0.01", > " Libvirtd config: 0.02", > " File: 0.08", > " Package: 0.09", > " Augeas: 1.08", > " Total: 11.72", > " Last run: 1537532614", > " Config retrieval: 3.03", > " Nova config: 7.40", > " Config: 1537532602", > "Gathering files modified after 2018-09-21 12:23:16.125711645 +0000", > "2018-09-21 12:23:36,924 DEBUG: 18572 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,nova_config,nova_paste_api_ini,libvirtd_config,nova_config,file,libvirt_tls_password ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,nova_config,nova_paste_api_ini,libvirtd_config,nova_config,file,libvirt_tls_password'", > "+ origin_of_time=/var/lib/config-data/nova_libvirt.origin_of_time", > "+ touch /var/lib/config-data/nova_libvirt.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,nova_config,nova_paste_api_ini,libvirtd_config,nova_config,file,libvirt_tls_password /etc/config.pp", > "ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Ipv6 instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/tripleo/manifests/profile/base/nova.pp\", 105]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/compute.pp\", 59]", > "Warning: ModuleLoader: module 'nova' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/config.pp\", 37]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova.pp\", 114]", > "Warning: Scope(Class[Nova::Db]): placement_database_connection has no effect as of pike, and may be removed in a future release", > "Warning: Scope(Class[Nova::Db]): placement_slave_connection has no effect as of pike, and may be removed in a future release", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/db.pp\", 126]:[\"/etc/puppet/modules/nova/manifests/init.pp\", 551]", > "Warning: Scope(Class[Nova]): nova::use_syslog, nova::use_stderr, nova::log_facility, nova::log_dir \\", > "and nova::debug is deprecated and has been moved to nova::logging class, please set them there.", > " with Stdlib::Compat::Array. 
There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/init.pp\", 561]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/compute.pp\", 59]", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/placement.pp\", 101]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova.pp\", 138]", > "Warning: Scope(Class[Nova::Placement]): The os_region_name parameter is deprecated and will be removed \\", > "in a future release. Please use region_name instead.", > "Warning: Unknown variable: '::nova::vncproxy::host'. at /etc/puppet/modules/nova/manifests/vncproxy/common.pp:31:5", > "Warning: Unknown variable: '::nova::vncproxy::vncproxy_protocol'. at /etc/puppet/modules/nova/manifests/vncproxy/common.pp:36:5", > "Warning: Unknown variable: '::nova::vncproxy::port'. at /etc/puppet/modules/nova/manifests/vncproxy/common.pp:41:5", > "Warning: Unknown variable: '::nova::vncproxy::vncproxy_path'. at /etc/puppet/modules/nova/manifests/vncproxy/common.pp:46:5", > "Warning: Unknown variable: '::nova::compute::pci_passthrough'. at /etc/puppet/modules/nova/manifests/compute/pci.pp:19:38", > "Warning: Unknown variable: '::nova::api::default_floating_pool'. at /etc/puppet/modules/nova/manifests/network/neutron.pp:112:38", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/compute/libvirt.pp\", 278]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/compute/libvirt.pp\", 33]", > " with Stdlib::Compat::Ip_Address. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/migration/target.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/migration/target.pp\", 56]", > "Warning: ModuleLoader: module 'mysql' has unresolved dependencies - it will only see those that are resolved. 
Use 'puppet module list --tree' to see information about modules",
> "Warning: Exec[set libvirt sasl credentials](provider=posix): Cannot understand environment setting \"TLS_PASSWORD=\"",
> "+ rsync_srcs+=' /var/lib/nova/.ssh'",
> "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/nova/.ssh /var/lib/config-data/nova_libvirt",
> "++ stat -c %y /var/lib/config-data/nova_libvirt.origin_of_time",
> "+ echo 'Gathering files modified after 2018-09-21 12:23:16.125711645 +0000'",
> "+ mkdir -p /var/lib/config-data/puppet-generated/nova_libvirt",
> "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/nova_libvirt",
> "++ find /etc /root /opt /var/spool/cron /var/lib/nova/.ssh -newer /var/lib/config-data/nova_libvirt.origin_of_time -not -path '/etc/puppet*' -print0",
> "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/nova_libvirt",
> "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/nova_libvirt --mtime=1970-01-01",
> "2018-09-21 12:23:36,924 INFO: 18572 -- Removing container: docker-puppet-nova_libvirt",
> "2018-09-21 12:23:36,959 DEBUG: 18572 -- docker-puppet-nova_libvirt",
> "2018-09-21 12:23:36,959 INFO: 18572 -- Finished processing puppet configs for nova_libvirt",
> "2018-09-21 12:23:36,960 DEBUG: 18570 -- CONFIG_VOLUME_PREFIX: /var/lib/config-data",
> "2018-09-21 12:23:36,960 DEBUG: 18570 -- STARTUP_CONFIG_PATTERN: /var/lib/tripleo-config/docker-container-startup-config-step_*.json",
> "2018-09-21 12:23:36,962 DEBUG: 18570 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron",
> "2018-09-21 12:23:36,962 DEBUG: 18570 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron",
> "2018-09-21 12:23:36,963 DEBUG: 18570 -- Updating config hash for neutron_ovs_bridge, config_volume=iscsid hash=6fa87e3d42e4a1e426d64283a8a53d4f",
> "2018-09-21 12:23:36,963 DEBUG: 18570 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova_libvirt.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_libvirt",
> "2018-09-21 12:23:36,963 DEBUG: 18570 -- Got hashfile /var/lib/config-data/puppet-generated/nova_libvirt.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_libvirt",
> "2018-09-21 12:23:36,963 DEBUG: 18570 -- Updating config hash for nova_libvirt, config_volume=iscsid hash=c664cd3d1cf02bec826ce2c6dff37a56",
> "2018-09-21 12:23:36,963 DEBUG: 18570 -- Updating config hash for nova_virtlogd, config_volume=iscsid hash=c664cd3d1cf02bec826ce2c6dff37a56",
> "2018-09-21 12:23:36,964 DEBUG: 18570 -- Looking for hashfile /var/lib/config-data/puppet-generated/ceilometer.md5sum for config_volume /var/lib/config-data/puppet-generated/ceilometer",
> "2018-09-21 12:23:36,965 DEBUG: 18570 -- Got hashfile /var/lib/config-data/puppet-generated/ceilometer.md5sum for config_volume /var/lib/config-data/puppet-generated/ceilometer",
> "2018-09-21 12:23:36,965 DEBUG: 18570 -- Updating config hash for ceilometer_agent_compute, config_volume=iscsid hash=7bbf3e5c4791465ca3a5fc8a170e25fb",
> "2018-09-21 12:23:36,965 DEBUG: 18570 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova_libvirt/etc.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_libvirt/etc",
> "2018-09-21 12:23:36,965 DEBUG: 18570 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron",
> "2018-09-21 12:23:36,965 DEBUG: 18570 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron",
> "2018-09-21 12:23:36,965 DEBUG: 18570 -- Updating config hash for neutron_ovs_agent, config_volume=iscsid hash=6fa87e3d42e4a1e426d64283a8a53d4f",
> "2018-09-21 12:23:36,965 DEBUG: 18570 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova_libvirt.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_libvirt",
> "2018-09-21 12:23:36,965 DEBUG: 18570 -- Got hashfile /var/lib/config-data/puppet-generated/nova_libvirt.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_libvirt",
> "2018-09-21 12:23:36,965 DEBUG: 18570 -- Updating config hash for nova_migration_target, config_volume=iscsid hash=c664cd3d1cf02bec826ce2c6dff37a56",
> "2018-09-21 12:23:36,966 DEBUG: 18570 -- Updating config hash for nova_compute, config_volume=iscsid hash=c664cd3d1cf02bec826ce2c6dff37a56",
> "2018-09-21 12:23:36,966 DEBUG: 18570 -- Looking for hashfile /var/lib/config-data/puppet-generated/crond.md5sum for config_volume /var/lib/config-data/puppet-generated/crond",
> "2018-09-21 12:23:36,966 DEBUG: 18570 -- Got hashfile /var/lib/config-data/puppet-generated/crond.md5sum for config_volume /var/lib/config-data/puppet-generated/crond",
> "2018-09-21 12:23:36,966 DEBUG: 18570 -- Updating config hash for logrotate_crond, config_volume=iscsid hash=6f2a5e23a896d70ebbc2c66d87cd9266"
> ]
>}
>ok: [controller-0] => {
> "failed_when_result": false,
> "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [
> "2018-09-21 12:22:38,344 INFO: 28579 -- Running docker-puppet",
> "2018-09-21 12:22:38,344 DEBUG: 28579 -- CONFIG: /var/lib/docker-puppet/docker-puppet.json",
> "2018-09-21 12:22:38,344 DEBUG: 28579 -- config_volume aodh",
> "2018-09-21 12:22:38,345 DEBUG: 28579 -- puppet_tags aodh_api_paste_ini,aodh_config",
> "2018-09-21 12:22:38,345 DEBUG: 28579 -- manifest include tripleo::profile::base::aodh::api",
> "",
> "include ::tripleo::profile::base::database::mysql::client",
> "2018-09-21 12:22:38,345 DEBUG: 28579 -- config_image 192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-09-20.1",
> "2018-09-21 12:22:38,345 DEBUG: 28579 -- volumes []",
> "2018-09-21 12:22:38,345 DEBUG: 28579 -- Adding new service",
> "2018-09-21 12:22:38,345 DEBUG: 28579 -- config_volume aodh",
> "2018-09-21 12:22:38,345 DEBUG: 28579 -- puppet_tags aodh_config",
> "2018-09-21 12:22:38,345 DEBUG: 28579 -- manifest include tripleo::profile::base::aodh::evaluator",
> "2018-09-21 12:22:38,345 DEBUG: 28579 -- Existing service, appending puppet tags and manifest",
> "2018-09-21 12:22:38,345 DEBUG: 28579 -- manifest include tripleo::profile::base::aodh::listener",
> "2018-09-21 12:22:38,346 DEBUG: 28579 -- config_volume aodh",
> "2018-09-21 12:22:38,346 DEBUG: 28579 -- puppet_tags aodh_config",
> "2018-09-21 12:22:38,346 DEBUG: 28579 -- manifest include tripleo::profile::base::aodh::notifier",
> "2018-09-21 12:22:38,346 DEBUG: 28579 -- config_image 192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-09-20.1",
> "2018-09-21 12:22:38,346 DEBUG: 28579 -- volumes []",
> "2018-09-21 12:22:38,346 
DEBUG: 28579 -- Existing service, appending puppet tags and manifest", > "2018-09-21 12:22:38,346 DEBUG: 28579 -- config_volume ceilometer", > "2018-09-21 12:22:38,346 DEBUG: 28579 -- puppet_tags ceilometer_config", > "2018-09-21 12:22:38,346 DEBUG: 28579 -- manifest include ::tripleo::profile::base::ceilometer::agent::polling", > "2018-09-21 12:22:38,346 DEBUG: 28579 -- config_image 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-20.1", > "2018-09-21 12:22:38,346 DEBUG: 28579 -- Adding new service", > "2018-09-21 12:22:38,346 DEBUG: 28579 -- manifest include ::tripleo::profile::base::ceilometer::agent::notification", > "2018-09-21 12:22:38,346 DEBUG: 28579 -- config_volume cinder", > "2018-09-21 12:22:38,346 DEBUG: 28579 -- puppet_tags cinder_config,cinder_type,file,concat,file_line", > "2018-09-21 12:22:38,347 DEBUG: 28579 -- manifest include ::tripleo::profile::base::cinder::api", > "2018-09-21 12:22:38,347 DEBUG: 28579 -- config_image 192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-20.1", > "2018-09-21 12:22:38,347 DEBUG: 28579 -- volumes []", > "2018-09-21 12:22:38,347 DEBUG: 28579 -- Adding new service", > "2018-09-21 12:22:38,347 DEBUG: 28579 -- config_volume cinder", > "2018-09-21 12:22:38,347 DEBUG: 28579 -- puppet_tags cinder_config,file,concat,file_line", > "2018-09-21 12:22:38,347 DEBUG: 28579 -- manifest include ::tripleo::profile::base::cinder::backup::ceph", > "2018-09-21 12:22:38,347 DEBUG: 28579 -- Existing service, appending puppet tags and manifest", > "2018-09-21 12:22:38,347 DEBUG: 28579 -- manifest include ::tripleo::profile::base::cinder::scheduler", > "2018-09-21 12:22:38,347 DEBUG: 28579 -- manifest include ::tripleo::profile::base::lvm", > "include ::tripleo::profile::base::cinder::volume", > "2018-09-21 12:22:38,348 DEBUG: 28579 -- config_image 192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-20.1", > "2018-09-21 12:22:38,348 DEBUG: 28579 -- volumes []", > "2018-09-21 12:22:38,348 DEBUG: 28579 -- Existing service, appending puppet tags and manifest", > "2018-09-21 12:22:38,348 DEBUG: 28579 -- config_volume clustercheck", > "2018-09-21 12:22:38,348 DEBUG: 28579 -- puppet_tags file", > "2018-09-21 12:22:38,348 DEBUG: 28579 -- manifest include ::tripleo::profile::pacemaker::clustercheck", > "2018-09-21 12:22:38,348 DEBUG: 28579 -- config_image 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1", > "2018-09-21 12:22:38,348 DEBUG: 28579 -- Adding new service", > "2018-09-21 12:22:38,348 DEBUG: 28579 -- config_volume glance_api", > "2018-09-21 12:22:38,348 DEBUG: 28579 -- puppet_tags glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config", > "2018-09-21 12:22:38,348 DEBUG: 28579 -- manifest include ::tripleo::profile::base::glance::api", > "2018-09-21 12:22:38,348 DEBUG: 28579 -- config_image 192.168.24.1:8787/rhosp14/openstack-glance-api:2018-09-20.1", > "2018-09-21 12:22:38,348 DEBUG: 28579 -- config_volume gnocchi", > "2018-09-21 12:22:38,348 DEBUG: 28579 -- puppet_tags gnocchi_api_paste_ini,gnocchi_config", > "2018-09-21 12:22:38,348 DEBUG: 28579 -- manifest include ::tripleo::profile::base::gnocchi::api", > "2018-09-21 12:22:38,348 DEBUG: 28579 -- config_image 192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-20.1", > "2018-09-21 12:22:38,349 DEBUG: 28579 -- config_volume gnocchi", > "2018-09-21 12:22:38,349 DEBUG: 28579 -- puppet_tags gnocchi_config", > "2018-09-21 12:22:38,349 DEBUG: 28579 -- manifest include ::tripleo::profile::base::gnocchi::metricd", > "2018-09-21 12:22:38,349 DEBUG: 28579 -- 
config_image 192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-20.1", > "2018-09-21 12:22:38,349 DEBUG: 28579 -- volumes []", > "2018-09-21 12:22:38,349 DEBUG: 28579 -- Existing service, appending puppet tags and manifest", > "2018-09-21 12:22:38,349 DEBUG: 28579 -- manifest include ::tripleo::profile::base::gnocchi::statsd", > "2018-09-21 12:22:38,349 DEBUG: 28579 -- config_volume haproxy", > "2018-09-21 12:22:38,349 DEBUG: 28579 -- puppet_tags haproxy_config", > "2018-09-21 12:22:38,349 DEBUG: 28579 -- manifest exec {'wait-for-settle': command => '/bin/true' }", > "class tripleo::firewall(){}; define tripleo::firewall::rule( $port = undef, $dport = undef, $sport = undef, $proto = undef, $action = undef, $state = undef, $source = undef, $iniface = undef, $chain = undef, $destination = undef, $extras = undef){}", > "['pcmk_bundle', 'pcmk_resource', 'pcmk_property', 'pcmk_constraint', 'pcmk_resource_default'].each |String $val| { noop_resource($val) }", > "include ::tripleo::profile::pacemaker::haproxy_bundle", > "2018-09-21 12:22:38,349 DEBUG: 28579 -- config_image 192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-20.1", > "2018-09-21 12:22:38,349 DEBUG: 28579 -- volumes [u'/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro', u'/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro', u'/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro', u'/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro']", > "2018-09-21 12:22:38,349 DEBUG: 28579 -- Adding new service", > "2018-09-21 12:22:38,349 DEBUG: 28579 -- config_volume heat_api", > "2018-09-21 12:22:38,349 DEBUG: 28579 -- puppet_tags heat_config,file,concat,file_line", > "2018-09-21 12:22:38,350 DEBUG: 28579 -- manifest include ::tripleo::profile::base::heat::api", > "2018-09-21 12:22:38,350 DEBUG: 28579 -- config_image 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-20.1", > "2018-09-21 12:22:38,350 DEBUG: 28579 -- volumes []", > "2018-09-21 12:22:38,350 DEBUG: 28579 -- Adding new service", > "2018-09-21 12:22:38,350 DEBUG: 28579 -- config_volume heat_api_cfn", > "2018-09-21 12:22:38,350 DEBUG: 28579 -- puppet_tags heat_config,file,concat,file_line", > "2018-09-21 12:22:38,350 DEBUG: 28579 -- manifest include ::tripleo::profile::base::heat::api_cfn", > "2018-09-21 12:22:38,350 DEBUG: 28579 -- config_image 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-09-20.1", > "2018-09-21 12:22:38,350 DEBUG: 28579 -- config_volume heat", > "2018-09-21 12:22:38,350 DEBUG: 28579 -- manifest include ::tripleo::profile::base::heat::engine", > "2018-09-21 12:22:38,350 DEBUG: 28579 -- config_volume horizon", > "2018-09-21 12:22:38,350 DEBUG: 28579 -- puppet_tags horizon_config", > "2018-09-21 12:22:38,350 DEBUG: 28579 -- manifest include ::tripleo::profile::base::horizon", > "2018-09-21 12:22:38,350 DEBUG: 28579 -- config_image 192.168.24.1:8787/rhosp14/openstack-horizon:2018-09-20.1", > "2018-09-21 12:22:38,351 DEBUG: 28579 -- config_volume iscsid", > "2018-09-21 12:22:38,351 DEBUG: 28579 -- puppet_tags iscsid_config", > "2018-09-21 12:22:38,351 DEBUG: 28579 -- manifest include ::tripleo::profile::base::iscsid", > "2018-09-21 12:22:38,351 DEBUG: 28579 -- config_image 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-20.1", > "2018-09-21 12:22:38,351 DEBUG: 28579 -- volumes [u'/etc/iscsi:/etc/iscsi']", > "2018-09-21 12:22:38,351 DEBUG: 28579 -- Adding new service", > "2018-09-21 12:22:38,351 DEBUG: 28579 -- config_volume keystone", > "2018-09-21 12:22:38,351 DEBUG: 28579 -- puppet_tags 
keystone_config,keystone_domain_config", > "2018-09-21 12:22:38,351 DEBUG: 28579 -- manifest ['Keystone_user', 'Keystone_endpoint', 'Keystone_domain', 'Keystone_tenant', 'Keystone_user_role', 'Keystone_role', 'Keystone_service'].each |String $val| { noop_resource($val) }", > "include ::tripleo::profile::base::keystone", > "2018-09-21 12:22:38,351 DEBUG: 28579 -- config_image 192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-20.1", > "2018-09-21 12:22:38,351 DEBUG: 28579 -- volumes []", > "2018-09-21 12:22:38,351 DEBUG: 28579 -- config_volume memcached", > "2018-09-21 12:22:38,351 DEBUG: 28579 -- puppet_tags file", > "2018-09-21 12:22:38,351 DEBUG: 28579 -- manifest include ::tripleo::profile::base::memcached", > "2018-09-21 12:22:38,351 DEBUG: 28579 -- config_image 192.168.24.1:8787/rhosp14/openstack-memcached:2018-09-20.1", > "2018-09-21 12:22:38,351 DEBUG: 28579 -- config_volume mysql", > "2018-09-21 12:22:38,351 DEBUG: 28579 -- manifest ['Mysql_datadir', 'Mysql_user', 'Mysql_database', 'Mysql_grant', 'Mysql_plugin'].each |String $val| { noop_resource($val) }", > "exec {'wait-for-settle': command => '/bin/true' }", > "include ::tripleo::profile::pacemaker::database::mysql_bundle", > "2018-09-21 12:22:38,352 DEBUG: 28579 -- config_image 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1", > "2018-09-21 12:22:38,352 DEBUG: 28579 -- volumes []", > "2018-09-21 12:22:38,352 DEBUG: 28579 -- Adding new service", > "2018-09-21 12:22:38,352 DEBUG: 28579 -- config_volume neutron", > "2018-09-21 12:22:38,352 DEBUG: 28579 -- puppet_tags neutron_config,neutron_api_config", > "2018-09-21 12:22:38,352 DEBUG: 28579 -- manifest include tripleo::profile::base::neutron::server", > "2018-09-21 12:22:38,352 DEBUG: 28579 -- config_image 192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-20.1", > "2018-09-21 12:22:38,352 DEBUG: 28579 -- puppet_tags neutron_plugin_ml2", > "2018-09-21 12:22:38,352 DEBUG: 28579 -- manifest include ::tripleo::profile::base::neutron::plugins::ml2", > "2018-09-21 12:22:38,352 DEBUG: 28579 -- Existing service, appending puppet tags and manifest", > "2018-09-21 12:22:38,352 DEBUG: 28579 -- puppet_tags neutron_config,neutron_dhcp_agent_config", > "2018-09-21 12:22:38,352 DEBUG: 28579 -- manifest include tripleo::profile::base::neutron::dhcp", > "2018-09-21 12:22:38,353 DEBUG: 28579 -- config_volume neutron", > "2018-09-21 12:22:38,353 DEBUG: 28579 -- puppet_tags neutron_config,neutron_l3_agent_config", > "2018-09-21 12:22:38,353 DEBUG: 28579 -- manifest include tripleo::profile::base::neutron::l3", > "2018-09-21 12:22:38,353 DEBUG: 28579 -- config_image 192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-20.1", > "2018-09-21 12:22:38,353 DEBUG: 28579 -- volumes []", > "2018-09-21 12:22:38,353 DEBUG: 28579 -- Existing service, appending puppet tags and manifest", > "2018-09-21 12:22:38,353 DEBUG: 28579 -- puppet_tags neutron_config,neutron_metadata_agent_config", > "2018-09-21 12:22:38,353 DEBUG: 28579 -- manifest include tripleo::profile::base::neutron::metadata", > "2018-09-21 12:22:38,353 DEBUG: 28579 -- puppet_tags neutron_config,neutron_agent_ovs,neutron_plugin_ml2", > "2018-09-21 12:22:38,353 DEBUG: 28579 -- manifest include ::tripleo::profile::base::neutron::ovs", > "2018-09-21 12:22:38,353 DEBUG: 28579 -- volumes [u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch']", > "2018-09-21 12:22:38,353 DEBUG: 28579 -- config_volume nova", > "2018-09-21 12:22:38,353 DEBUG: 28579 -- puppet_tags nova_config", > "2018-09-21 
12:22:38,353 DEBUG: 28579 -- manifest ['Nova_cell_v2'].each |String $val| { noop_resource($val) }", > "include tripleo::profile::base::nova::api", > "2018-09-21 12:22:38,354 DEBUG: 28579 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1", > "2018-09-21 12:22:38,354 DEBUG: 28579 -- volumes []", > "2018-09-21 12:22:38,354 DEBUG: 28579 -- Adding new service", > "2018-09-21 12:22:38,354 DEBUG: 28579 -- config_volume nova", > "2018-09-21 12:22:38,354 DEBUG: 28579 -- puppet_tags nova_config", > "2018-09-21 12:22:38,354 DEBUG: 28579 -- manifest include tripleo::profile::base::nova::conductor", > "2018-09-21 12:22:38,354 DEBUG: 28579 -- Existing service, appending puppet tags and manifest", > "2018-09-21 12:22:38,354 DEBUG: 28579 -- manifest include tripleo::profile::base::nova::consoleauth", > "2018-09-21 12:22:38,354 DEBUG: 28579 -- config_volume nova_placement", > "2018-09-21 12:22:38,354 DEBUG: 28579 -- manifest include tripleo::profile::base::nova::placement", > "2018-09-21 12:22:38,354 DEBUG: 28579 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-09-20.1", > "2018-09-21 12:22:38,355 DEBUG: 28579 -- config_volume nova", > "2018-09-21 12:22:38,355 DEBUG: 28579 -- puppet_tags nova_config", > "2018-09-21 12:22:38,355 DEBUG: 28579 -- manifest include tripleo::profile::base::nova::scheduler", > "2018-09-21 12:22:38,355 DEBUG: 28579 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1", > "2018-09-21 12:22:38,355 DEBUG: 28579 -- volumes []", > "2018-09-21 12:22:38,355 DEBUG: 28579 -- Existing service, appending puppet tags and manifest", > "2018-09-21 12:22:38,355 DEBUG: 28579 -- manifest include tripleo::profile::base::nova::vncproxy", > "2018-09-21 12:22:38,355 DEBUG: 28579 -- config_volume crond", > "2018-09-21 12:22:38,355 DEBUG: 28579 -- puppet_tags ", > "2018-09-21 12:22:38,355 DEBUG: 28579 -- manifest include ::tripleo::profile::base::logging::logrotate", > "2018-09-21 12:22:38,355 DEBUG: 28579 -- config_image 192.168.24.1:8787/rhosp14/openstack-cron:2018-09-20.1", > "2018-09-21 12:22:38,355 DEBUG: 28579 -- Adding new service", > "2018-09-21 12:22:38,355 DEBUG: 28579 -- config_volume panko", > "2018-09-21 12:22:38,355 DEBUG: 28579 -- puppet_tags panko_api_paste_ini,panko_config", > "2018-09-21 12:22:38,355 DEBUG: 28579 -- manifest include tripleo::profile::base::panko::api", > "2018-09-21 12:22:38,356 DEBUG: 28579 -- config_image 192.168.24.1:8787/rhosp14/openstack-panko-api:2018-09-20.1", > "2018-09-21 12:22:38,356 DEBUG: 28579 -- volumes []", > "2018-09-21 12:22:38,356 DEBUG: 28579 -- Adding new service", > "2018-09-21 12:22:38,356 DEBUG: 28579 -- config_volume rabbitmq", > "2018-09-21 12:22:38,356 DEBUG: 28579 -- puppet_tags file", > "2018-09-21 12:22:38,356 DEBUG: 28579 -- manifest ['Rabbitmq_policy', 'Rabbitmq_user'].each |String $val| { noop_resource($val) }", > "include ::tripleo::profile::base::rabbitmq", > "2018-09-21 12:22:38,356 DEBUG: 28579 -- config_image 192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-20.1", > "2018-09-21 12:22:38,356 DEBUG: 28579 -- config_volume redis", > "2018-09-21 12:22:38,356 DEBUG: 28579 -- puppet_tags exec", > "2018-09-21 12:22:38,356 DEBUG: 28579 -- manifest include ::tripleo::profile::pacemaker::database::redis_bundle", > "2018-09-21 12:22:38,356 DEBUG: 28579 -- config_image 192.168.24.1:8787/rhosp14/openstack-redis:2018-09-20.1", > "2018-09-21 12:22:38,356 DEBUG: 28579 -- config_volume sahara", > "2018-09-21 12:22:38,356 DEBUG: 28579 -- puppet_tags 
sahara_api_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template", > "2018-09-21 12:22:38,356 DEBUG: 28579 -- manifest include ::tripleo::profile::base::sahara::api", > "2018-09-21 12:22:38,356 DEBUG: 28579 -- config_image 192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-09-20.1", > "2018-09-21 12:22:38,357 DEBUG: 28579 -- puppet_tags sahara_engine_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template", > "2018-09-21 12:22:38,357 DEBUG: 28579 -- manifest include ::tripleo::profile::base::sahara::engine", > "2018-09-21 12:22:38,357 DEBUG: 28579 -- config_image 192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-09-20.1", > "2018-09-21 12:22:38,357 DEBUG: 28579 -- volumes []", > "2018-09-21 12:22:38,357 DEBUG: 28579 -- Existing service, appending puppet tags and manifest", > "2018-09-21 12:22:38,357 DEBUG: 28579 -- config_volume swift", > "2018-09-21 12:22:38,357 DEBUG: 28579 -- puppet_tags swift_config,swift_proxy_config,swift_keymaster_config", > "2018-09-21 12:22:38,357 DEBUG: 28579 -- manifest include ::tripleo::profile::base::swift::proxy", > "2018-09-21 12:22:38,357 DEBUG: 28579 -- config_image 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-20.1", > "2018-09-21 12:22:38,357 DEBUG: 28579 -- Adding new service", > "2018-09-21 12:22:38,357 DEBUG: 28579 -- config_volume swift_ringbuilder", > "2018-09-21 12:22:38,357 DEBUG: 28579 -- puppet_tags exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball", > "2018-09-21 12:22:38,357 DEBUG: 28579 -- manifest include ::tripleo::profile::base::swift::ringbuilder", > "2018-09-21 12:22:38,357 DEBUG: 28579 -- puppet_tags swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server", > "2018-09-21 12:22:38,357 DEBUG: 28579 -- manifest include ::tripleo::profile::base::swift::storage", > "class xinetd() {}", > "2018-09-21 12:22:38,358 DEBUG: 28579 -- volumes []", > "2018-09-21 12:22:38,358 DEBUG: 28579 -- Existing service, appending puppet tags and manifest", > "2018-09-21 12:22:38,358 INFO: 28579 -- Service compilation completed.", > "2018-09-21 12:22:38,358 DEBUG: 28579 -- - [u'nova_placement', u'file,file_line,concat,augeas,cron,nova_config', u'include tripleo::profile::base::nova::placement\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-09-20.1', []]", > "2018-09-21 12:22:38,358 DEBUG: 28579 -- - [u'aodh', u'file,file_line,concat,augeas,cron,aodh_api_paste_ini,aodh_config,aodh_config,aodh_config,aodh_config', u'include tripleo::profile::base::aodh::api\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::aodh::evaluator\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::aodh::listener\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::aodh::notifier\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-09-20.1', []]", > "2018-09-21 12:22:38,358 DEBUG: 28579 -- - [u'heat_api', u'file,file_line,concat,augeas,cron,heat_config,file,concat,file_line', u'include ::tripleo::profile::base::heat::api\\n', 
u'192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-20.1', []]", > "2018-09-21 12:22:38,359 DEBUG: 28579 -- - [u'swift_ringbuilder', u'file,file_line,concat,augeas,cron,exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball', u'include ::tripleo::profile::base::swift::ringbuilder', u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-20.1', []]", > "2018-09-21 12:22:38,359 DEBUG: 28579 -- - [u'sahara', u'file,file_line,concat,augeas,cron,sahara_api_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template,sahara_engine_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template', u'include ::tripleo::profile::base::sahara::api\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude ::tripleo::profile::base::sahara::engine\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-09-20.1', []]", > "2018-09-21 12:22:38,359 DEBUG: 28579 -- - [u'mysql', u'file,file_line,concat,augeas,cron,file', u\"['Mysql_datadir', 'Mysql_user', 'Mysql_database', 'Mysql_grant', 'Mysql_plugin'].each |String $val| { noop_resource($val) }\\nexec {'wait-for-settle': command => '/bin/true' }\\ninclude ::tripleo::profile::pacemaker::database::mysql_bundle\", u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1', []]", > "2018-09-21 12:22:38,359 DEBUG: 28579 -- - [u'gnocchi', u'file,file_line,concat,augeas,cron,gnocchi_api_paste_ini,gnocchi_config,gnocchi_config,gnocchi_config', u'include ::tripleo::profile::base::gnocchi::api\\n\\ninclude ::tripleo::profile::base::gnocchi::metricd\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude ::tripleo::profile::base::gnocchi::statsd\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-20.1', []]", > "2018-09-21 12:22:38,359 DEBUG: 28579 -- - [u'clustercheck', u'file,file_line,concat,augeas,cron,file', u'include ::tripleo::profile::pacemaker::clustercheck', u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1', []]", > "2018-09-21 12:22:38,359 DEBUG: 28579 -- - [u'redis', u'file,file_line,concat,augeas,cron,exec', u'include ::tripleo::profile::pacemaker::database::redis_bundle', u'192.168.24.1:8787/rhosp14/openstack-redis:2018-09-20.1', []]", > "2018-09-21 12:22:38,359 DEBUG: 28579 -- - [u'nova', u'file,file_line,concat,augeas,cron,nova_config,nova_config,nova_config,nova_config,nova_config', u\"['Nova_cell_v2'].each |String $val| { noop_resource($val) }\\ninclude tripleo::profile::base::nova::api\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::nova::conductor\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::nova::consoleauth\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::nova::scheduler\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::nova::vncproxy\\n\\ninclude ::tripleo::profile::base::database::mysql::client\", u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1', []]", > "2018-09-21 12:22:38,359 DEBUG: 28579 -- - [u'iscsid', u'file,file_line,concat,augeas,cron,iscsid_config', u'include ::tripleo::profile::base::iscsid', u'192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-20.1', 
[u'/etc/iscsi:/etc/iscsi']]", > "2018-09-21 12:22:38,359 DEBUG: 28579 -- - [u'glance_api', u'file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config', u'include ::tripleo::profile::base::glance::api\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-glance-api:2018-09-20.1', []]", > "2018-09-21 12:22:38,359 DEBUG: 28579 -- - [u'keystone', u'file,file_line,concat,augeas,cron,keystone_config,keystone_domain_config', u\"['Keystone_user', 'Keystone_endpoint', 'Keystone_domain', 'Keystone_tenant', 'Keystone_user_role', 'Keystone_role', 'Keystone_service'].each |String $val| { noop_resource($val) }\\ninclude ::tripleo::profile::base::keystone\\n\\ninclude ::tripleo::profile::base::database::mysql::client\", u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-20.1', []]", > "2018-09-21 12:22:38,359 DEBUG: 28579 -- - [u'memcached', u'file,file_line,concat,augeas,cron,file', u'include ::tripleo::profile::base::memcached\\n', u'192.168.24.1:8787/rhosp14/openstack-memcached:2018-09-20.1', []]", > "2018-09-21 12:22:38,359 DEBUG: 28579 -- - [u'panko', u'file,file_line,concat,augeas,cron,panko_api_paste_ini,panko_config', u'include tripleo::profile::base::panko::api\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-panko-api:2018-09-20.1', []]", > "2018-09-21 12:22:38,359 DEBUG: 28579 -- - [u'heat', u'file,file_line,concat,augeas,cron,heat_config,file,concat,file_line', u'include ::tripleo::profile::base::heat::engine\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-20.1', []]", > "2018-09-21 12:22:38,359 DEBUG: 28579 -- - [u'cinder', u'file,file_line,concat,augeas,cron,cinder_config,cinder_type,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line', u'include ::tripleo::profile::base::cinder::api\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude ::tripleo::profile::base::cinder::backup::ceph\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude ::tripleo::profile::base::cinder::scheduler\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude ::tripleo::profile::base::lvm\\ninclude ::tripleo::profile::base::cinder::volume\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-20.1', []]", > "2018-09-21 12:22:38,359 DEBUG: 28579 -- - [u'swift', u'file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server', u'include ::tripleo::profile::base::swift::proxy\\n\\ninclude ::tripleo::profile::base::swift::storage\\n\\nclass xinetd() {}', u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-20.1', []]", > "2018-09-21 12:22:38,359 DEBUG: 28579 -- - [u'crond', 'file,file_line,concat,augeas,cron', u'include ::tripleo::profile::base::logging::logrotate', u'192.168.24.1:8787/rhosp14/openstack-cron:2018-09-20.1', []]", > "2018-09-21 12:22:38,359 DEBUG: 28579 -- - [u'haproxy', u'file,file_line,concat,augeas,cron,haproxy_config', u\"exec {'wait-for-settle': command => '/bin/true' }\\nclass tripleo::firewall(){}; define tripleo::firewall::rule( $port = undef, $dport = undef, $sport = undef, $proto = 
undef, $action = undef, $state = undef, $source = undef, $iniface = undef, $chain = undef, $destination = undef, $extras = undef){}\\n['pcmk_bundle', 'pcmk_resource', 'pcmk_property', 'pcmk_constraint', 'pcmk_resource_default'].each |String $val| { noop_resource($val) }\\ninclude ::tripleo::profile::pacemaker::haproxy_bundle\", u'192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-20.1', [u'/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro', u'/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro', u'/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro', u'/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro']]", > "2018-09-21 12:22:38,359 DEBUG: 28579 -- - [u'ceilometer', u'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', u'include ::tripleo::profile::base::ceilometer::agent::polling\\n\\ninclude ::tripleo::profile::base::ceilometer::agent::notification\\n', u'192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-20.1', []]", > "2018-09-21 12:22:38,359 DEBUG: 28579 -- - [u'rabbitmq', u'file,file_line,concat,augeas,cron,file', u\"['Rabbitmq_policy', 'Rabbitmq_user'].each |String $val| { noop_resource($val) }\\ninclude ::tripleo::profile::base::rabbitmq\\n\", u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-20.1', []]", > "2018-09-21 12:22:38,360 DEBUG: 28579 -- - [u'neutron', u'file,file_line,concat,augeas,cron,neutron_config,neutron_api_config,neutron_plugin_ml2,neutron_config,neutron_dhcp_agent_config,neutron_config,neutron_l3_agent_config,neutron_config,neutron_metadata_agent_config,neutron_config,neutron_agent_ovs,neutron_plugin_ml2', u'include tripleo::profile::base::neutron::server\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude ::tripleo::profile::base::neutron::plugins::ml2\\n\\ninclude tripleo::profile::base::neutron::dhcp\\n\\ninclude tripleo::profile::base::neutron::l3\\n\\ninclude tripleo::profile::base::neutron::metadata\\n\\ninclude ::tripleo::profile::base::neutron::ovs\\n', u'192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-20.1', [u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch']]", > "2018-09-21 12:22:38,360 DEBUG: 28579 -- - [u'horizon', u'file,file_line,concat,augeas,cron,horizon_config', u'include ::tripleo::profile::base::horizon\\n', u'192.168.24.1:8787/rhosp14/openstack-horizon:2018-09-20.1', []]", > "2018-09-21 12:22:38,360 DEBUG: 28579 -- - [u'heat_api_cfn', u'file,file_line,concat,augeas,cron,heat_config,file,concat,file_line', u'include ::tripleo::profile::base::heat::api_cfn\\n', u'192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-09-20.1', []]", > "2018-09-21 12:22:38,360 INFO: 28579 -- Starting multiprocess configuration steps. 
Using 3 processes.", > "2018-09-21 12:22:38,371 INFO: 28580 -- Starting configuration of nova_placement using image 192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-09-20.1", > "2018-09-21 12:22:38,371 DEBUG: 28580 -- config_volume nova_placement", > "2018-09-21 12:22:38,371 INFO: 28581 -- Starting configuration of swift_ringbuilder using image 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-20.1", > "2018-09-21 12:22:38,371 DEBUG: 28580 -- puppet_tags file,file_line,concat,augeas,cron,nova_config", > "2018-09-21 12:22:38,371 DEBUG: 28581 -- config_volume swift_ringbuilder", > "2018-09-21 12:22:38,372 DEBUG: 28580 -- manifest include tripleo::profile::base::nova::placement", > "2018-09-21 12:22:38,372 DEBUG: 28581 -- puppet_tags file,file_line,concat,augeas,cron,exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball", > "2018-09-21 12:22:38,372 DEBUG: 28580 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-09-20.1", > "2018-09-21 12:22:38,372 DEBUG: 28581 -- manifest include ::tripleo::profile::base::swift::ringbuilder", > "2018-09-21 12:22:38,372 DEBUG: 28580 -- volumes []", > "2018-09-21 12:22:38,372 DEBUG: 28581 -- config_image 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-20.1", > "2018-09-21 12:22:38,372 DEBUG: 28581 -- volumes []", > "2018-09-21 12:22:38,372 INFO: 28582 -- Starting configuration of gnocchi using image 192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-20.1", > "2018-09-21 12:22:38,372 DEBUG: 28582 -- config_volume gnocchi", > "2018-09-21 12:22:38,372 DEBUG: 28582 -- puppet_tags file,file_line,concat,augeas,cron,gnocchi_api_paste_ini,gnocchi_config,gnocchi_config,gnocchi_config", > "2018-09-21 12:22:38,372 DEBUG: 28582 -- manifest include ::tripleo::profile::base::gnocchi::api", > "include ::tripleo::profile::base::gnocchi::metricd", > "include ::tripleo::profile::base::gnocchi::statsd", > "2018-09-21 12:22:38,372 DEBUG: 28582 -- config_image 192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-20.1", > "2018-09-21 12:22:38,372 DEBUG: 28582 -- volumes []", > "2018-09-21 12:22:38,373 INFO: 28580 -- Removing container: docker-puppet-nova_placement", > "2018-09-21 12:22:38,373 INFO: 28581 -- Removing container: docker-puppet-swift_ringbuilder", > "2018-09-21 12:22:38,373 INFO: 28582 -- Removing container: docker-puppet-gnocchi", > "2018-09-21 12:22:38,453 INFO: 28580 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-09-20.1", > "2018-09-21 12:22:38,453 INFO: 28582 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-20.1", > "2018-09-21 12:22:38,454 INFO: 28581 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-20.1", > "2018-09-21 12:22:59,251 DEBUG: 28581 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server ... 
", > "2018-09-20.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server", > "378837c0e24a: Pulling fs layer", > "e17262bc2341: Pulling fs layer", > "86a0e618a180: Pulling fs layer", > "dfa58d50e0a3: Pulling fs layer", > "d006a62af35a: Pulling fs layer", > "8ec68d25d80c: Pulling fs layer", > "dfa58d50e0a3: Waiting", > "8ec68d25d80c: Waiting", > "d006a62af35a: Waiting", > "e17262bc2341: Download complete", > "dfa58d50e0a3: Verifying Checksum", > "dfa58d50e0a3: Download complete", > "d006a62af35a: Verifying Checksum", > "d006a62af35a: Download complete", > "378837c0e24a: Download complete", > "8ec68d25d80c: Verifying Checksum", > "8ec68d25d80c: Download complete", > "86a0e618a180: Verifying Checksum", > "86a0e618a180: Download complete", > "378837c0e24a: Pull complete", > "e17262bc2341: Pull complete", > "86a0e618a180: Pull complete", > "dfa58d50e0a3: Pull complete", > "d006a62af35a: Pull complete", > "8ec68d25d80c: Pull complete", > "Digest: sha256:048f27371158fb6edec6f541f36dd4601a5ffd4f7f1a46a99e73ad2555a014a6", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-20.1", > "2018-09-21 12:22:59,255 DEBUG: 28581 -- NET_HOST enabled", > "2018-09-21 12:22:59,256 DEBUG: 28581 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-swift_ringbuilder --env PUPPET_TAGS=file,file_line,concat,augeas,cron,exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball --env NAME=swift_ringbuilder --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpvrxkut:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-20.1", > "2018-09-21 12:23:00,702 DEBUG: 28582 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-gnocchi-api ... 
", > "2018-09-20.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-gnocchi-api", > "4147c1389cf3: Pulling fs layer", > "a602fc58f1a4: Pulling fs layer", > "4147c1389cf3: Waiting", > "a602fc58f1a4: Waiting", > "a602fc58f1a4: Verifying Checksum", > "a602fc58f1a4: Download complete", > "4147c1389cf3: Verifying Checksum", > "4147c1389cf3: Download complete", > "4147c1389cf3: Pull complete", > "a602fc58f1a4: Pull complete", > "Digest: sha256:93e387fae1c2eeaf49aead4b87c640a19f2b2053a25de20a9fd51e37b7ac403c", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-20.1", > "2018-09-21 12:23:00,707 DEBUG: 28582 -- NET_HOST enabled", > "2018-09-21 12:23:00,707 DEBUG: 28582 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-gnocchi --env PUPPET_TAGS=file,file_line,concat,augeas,cron,gnocchi_api_paste_ini,gnocchi_config,gnocchi_config,gnocchi_config --env NAME=gnocchi --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmprfXDPl:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-20.1", > "2018-09-21 12:23:03,162 DEBUG: 28580 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-nova-placement-api ... 
", > "2018-09-20.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-nova-placement-api", > "2d54cceaa5bd: Pulling fs layer", > "74cf9e3625c8: Pulling fs layer", > "74cf9e3625c8: Waiting", > "2d54cceaa5bd: Waiting", > "74cf9e3625c8: Verifying Checksum", > "74cf9e3625c8: Download complete", > "2d54cceaa5bd: Verifying Checksum", > "2d54cceaa5bd: Download complete", > "2d54cceaa5bd: Pull complete", > "74cf9e3625c8: Pull complete", > "Digest: sha256:73170043b509d5dac0d426358c5132c330f4c12efdc1eba90be316655a5780ac", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-09-20.1", > "2018-09-21 12:23:03,165 DEBUG: 28580 -- NET_HOST enabled", > "2018-09-21 12:23:03,165 DEBUG: 28580 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-nova_placement --env PUPPET_TAGS=file,file_line,concat,augeas,cron,nova_config --env NAME=nova_placement --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpigWmoO:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-09-20.1", > "2018-09-21 12:23:15,437 DEBUG: 28581 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 1.13 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Exec[fetch_swift_ring_tarball]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Exec[extract_swift_ring_tarball]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Exec[extract_swift_ring_tarball]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Swift/File[/var/lib/swift]/group: group changed 'root' to 'swift'", > "Notice: /Stage[main]/Swift/File[/etc/swift/swift.conf]/owner: owner changed 'root' to 'swift'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Swift::Ringbuilder::Create[object]/Exec[create_object]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Swift::Ringbuilder::Create[account]/Exec[create_account]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Swift::Ringbuilder::Create[container]/Exec[create_container]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Tripleo::Profile::Base::Swift::Add_devices[r1z1-172.17.4.14:%PORT%/d1]/Ring_object_device[172.17.4.14:6000/d1]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Tripleo::Profile::Base::Swift::Add_devices[r1z1-172.17.4.14:%PORT%/d1]/Ring_container_device[172.17.4.14:6001/d1]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Tripleo::Profile::Base::Swift::Add_devices[r1z1-172.17.4.14:%PORT%/d1]/Ring_account_device[172.17.4.14:6002/d1]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Swift::Ringbuilder::Rebalance[object]/Exec[rebalance_object]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Swift::Ringbuilder::Rebalance[account]/Exec[rebalance_account]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Swift::Ringbuilder::Rebalance[container]/Exec[rebalance_container]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Exec[create_swift_ring_tarball]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Exec[create_swift_ring_tarball]: Triggered 'refresh' from 3 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Exec[upload_swift_ring_tarball]: Triggered 'refresh' from 2 events", > "Notice: Applied catalog in 4.76 seconds", > "Changes:", > " Total: 11", > "Events:", > " Success: 11", > "Resources:", > " Changed: 11", > " Out of sync: 11", > " Skipped: 19", > " Total: 36", > " Restarted: 6", > "Time:", > " File: 0.01", > " Ring object device: 0.58", > " Ring account device: 0.60", > " Ring container device: 0.63", > " Config retrieval: 1.27", > " Exec: 1.46", > " Last run: 1537532594", > " Total: 4.55", > "Version:", > " Config: 1537532588", > " Puppet: 4.8.2", > "Gathering files modified after 2018-09-21 12:22:59.599712740 +0000", > "2018-09-21 12:23:15,437 DEBUG: 28581 -- + mkdir -p /etc/puppet", > "+ cp -a /tmp/puppet-etc/auth.conf /tmp/puppet-etc/hiera.yaml /tmp/puppet-etc/hieradata /tmp/puppet-etc/modules /tmp/puppet-etc/puppet.conf /tmp/puppet-etc/ssl /etc/puppet", > "+ rm -Rf /etc/puppet/ssl", > "+ echo '{\"step\": 6}'", > "+ TAGS=", > "+ '[' -n file,file_line,concat,augeas,cron,exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball'", > "+ origin_of_time=/var/lib/config-data/swift_ringbuilder.origin_of_time", > "+ touch /var/lib/config-data/swift_ringbuilder.origin_of_time", > "+ sync", > "+ set +e", > "+ FACTER_hostname=controller-0", > "+ FACTER_uuid=docker", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball /etc/config.pp", > "Warning: Facter: Could not retrieve fact='nic_alias', resolution='<anonymous>': Could not execute '/usr/bin/os-net-config 
-i': command not found", > "Warning: Undefined variable 'deploy_config_name'; ", > " (file & line not available)", > "Warning: ModuleLoader: module 'swift' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/tripleo/manifests/profile/base/swift/ringbuilder.pp\", 113]:[\"/etc/config.pp\", 2]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/swift/manifests/ringbuilder/create.pp\", 44]:", > "Warning: Unexpected line: Ring file /etc/swift/object.ring.gz not found, probably it hasn't been written yet", > "Warning: Unexpected line: Devices: id region zone ip address:port replication ip:port name weight partitions balance flags meta", > "Warning: Unexpected line: There are no devices in this ring, or all devices have been deleted", > "Warning: Unexpected line: Ring file /etc/swift/container.ring.gz not found, probably it hasn't been written yet", > "Warning: Unexpected line: Ring file /etc/swift/account.ring.gz not found, probably it hasn't been written yet", > "+ rc=2", > "+ set -e", > "+ '[' 2 -ne 2 -a 2 -ne 0 ']'", > "+ '[' -z '' ']'", > "+ archivedirs=(\"/etc\" \"/root\" \"/opt\" \"/var/lib/ironic/tftpboot\" \"/var/lib/ironic/httpboot\" \"/var/www\" \"/var/spool/cron\" \"/var/lib/nova/.ssh\")", > "+ rsync_srcs=", > "+ for d in '\"${archivedirs[@]}\"'", > "+ '[' -d /etc ']'", > "+ rsync_srcs+=' /etc'", > "+ '[' -d /root ']'", > "+ rsync_srcs+=' /root'", > "+ '[' -d /opt ']'", > "+ rsync_srcs+=' /opt'", > "+ '[' -d /var/lib/ironic/tftpboot ']'", > "+ '[' -d /var/lib/ironic/httpboot ']'", > "+ '[' -d /var/www ']'", > "+ rsync_srcs+=' /var/www'", > "+ '[' -d /var/spool/cron ']'", > "+ rsync_srcs+=' /var/spool/cron'", > "+ '[' -d /var/lib/nova/.ssh ']'", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/swift_ringbuilder", > "++ stat -c %y /var/lib/config-data/swift_ringbuilder.origin_of_time", > "+ echo 'Gathering files modified after 2018-09-21 12:22:59.599712740 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/swift_ringbuilder", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/swift_ringbuilder", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/swift_ringbuilder.origin_of_time -not -path '/etc/puppet*' -print0", > "+ EXCLUDE='--exclude=*/etc/swift/backups/* --exclude=*/etc/swift/*.ring.gz --exclude=*/etc/swift/*.builder --exclude=*/etc/libvirt/passwd.db'", > "+ tar xO", > "+ sed '/^#.*HEADER.*/d'", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/swift_ringbuilder", > "+ md5sum", > "+ awk '{print $1}'", > "tar: Removing leading `/' from member names", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/swift_ringbuilder --mtime=1970-01-01", > "2018-09-21 
12:23:15,438 INFO: 28581 -- Removing container: docker-puppet-swift_ringbuilder", > "2018-09-21 12:23:15,511 DEBUG: 28581 -- docker-puppet-swift_ringbuilder", > "2018-09-21 12:23:15,511 INFO: 28581 -- Finished processing puppet configs for swift_ringbuilder", > "2018-09-21 12:23:15,512 INFO: 28581 -- Starting configuration of sahara using image 192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-09-20.1", > "2018-09-21 12:23:15,513 DEBUG: 28581 -- config_volume sahara", > "2018-09-21 12:23:15,513 DEBUG: 28581 -- puppet_tags file,file_line,concat,augeas,cron,sahara_api_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template,sahara_engine_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template", > "2018-09-21 12:23:15,513 DEBUG: 28581 -- manifest include ::tripleo::profile::base::sahara::api", > "include ::tripleo::profile::base::sahara::engine", > "2018-09-21 12:23:15,513 DEBUG: 28581 -- config_image 192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-09-20.1", > "2018-09-21 12:23:15,513 DEBUG: 28581 -- volumes []", > "2018-09-21 12:23:15,514 INFO: 28581 -- Removing container: docker-puppet-sahara", > "2018-09-21 12:23:15,586 INFO: 28581 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-09-20.1", > "2018-09-21 12:23:16,438 DEBUG: 28582 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 4.52 seconds", > "Notice: /Stage[main]/Apache::Mod::Mime/File[mime.conf]/ensure: defined content as '{md5}9da85e58f3bd6c780ce76db603b7f028'", > "Notice: /Stage[main]/Apache::Mod::Mime_magic/File[mime_magic.conf]/ensure: defined content as '{md5}b258529b332429e2ff8344f726a95457'", > "Notice: /Stage[main]/Apache::Mod::Alias/File[alias.conf]/ensure: defined content as '{md5}983e865be85f5e0daaed7433db82995e'", > "Notice: /Stage[main]/Apache::Mod::Autoindex/File[autoindex.conf]/ensure: defined content as '{md5}2421a3c6df32c7e38c2a7a22afdf5728'", > "Notice: /Stage[main]/Apache::Mod::Deflate/File[deflate.conf]/ensure: defined content as '{md5}a045d750d819b1e9dae3fbfb3f20edd5'", > "Notice: /Stage[main]/Apache::Mod::Dir/File[dir.conf]/ensure: defined content as '{md5}c741d8ea840e6eb999d739eed47c69d7'", > "Notice: /Stage[main]/Apache::Mod::Negotiation/File[negotiation.conf]/ensure: defined content as '{md5}47284b5580b986a6ba32580b6ffb9fd7'", > "Notice: /Stage[main]/Apache::Mod::Setenvif/File[setenvif.conf]/ensure: defined content as '{md5}c7ede4173da1915b7ec088201f030c28'", > "Notice: /Stage[main]/Apache::Mod::Prefork/File[/etc/httpd/conf.modules.d/prefork.conf]/ensure: defined content as '{md5}f58b0483b70b4e73b5f67ff37b8f24a0'", > "Notice: /Stage[main]/Apache::Mod::Status/File[status.conf]/ensure: defined content as '{md5}fa95c477a2085c1f7f17ee5f8eccfb90'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully", > "Notice: /Stage[main]/Gnocchi::Db/Gnocchi_config[indexer/url]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Api/Gnocchi_config[api/max_limit]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Api/Gnocchi_config[api/auth_mode]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage/Gnocchi_config[storage/coordination_url]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage::Incoming::Redis/Gnocchi_config[incoming/driver]/ensure: created", > "Notice: 
/Stage[main]/Gnocchi::Storage::Incoming::Redis/Gnocchi_config[incoming/redis_url]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage::Ceph/Gnocchi_config[storage/driver]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage::Ceph/Gnocchi_config[storage/ceph_username]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage::Ceph/Gnocchi_config[storage/ceph_keyring]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage::Ceph/Gnocchi_config[storage/ceph_pool]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage::Ceph/Gnocchi_config[storage/ceph_conffile]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Metricd/Gnocchi_config[metricd/workers]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Metricd/Gnocchi_config[metricd/metric_processing_delay]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Statsd/Gnocchi_config[statsd/resource_id]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Statsd/Gnocchi_config[statsd/archive_policy_name]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Statsd/Gnocchi_config[statsd/flush_delay]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Logging/Oslo::Log[gnocchi_config]/Gnocchi_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Logging/Oslo::Log[gnocchi_config]/Gnocchi_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Cors/Oslo::Cors[gnocchi_config]/Gnocchi_config[cors/expose_headers]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Cors/Oslo::Cors[gnocchi_config]/Gnocchi_config[cors/max_age]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Cors/Oslo::Cors[gnocchi_config]/Gnocchi_config[cors/allow_methods]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Cors/Oslo::Cors[gnocchi_config]/Gnocchi_config[cors/allow_headers]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Policy/Oslo::Policy[gnocchi_config]/Gnocchi_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Api/Oslo::Middleware[gnocchi_config]/Gnocchi_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/username]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/project_domain_name]/ensure: created", > 
"Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}6aa3780bbc1fc219fc58070c47cfc894'", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf/httpd.conf]/content: content changed '{md5}c6d1bc1fdbcb93bbd2596e4703f4108c' to '{md5}3bd0015a5b258bebc53d757643b45830'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[log_config]/File[log_config.load]/ensure: defined content as '{md5}785d35cb285e190d589163b45263ca89'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[systemd]/File[systemd.load]/ensure: defined content as '{md5}26e5d44aae258b3e9d821cbbbd3e2826'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[unixd]/File[unixd.load]/ensure: defined content as '{md5}0e8468ecc1265f8947b8725f4d1be9c0'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_host]/File[authz_host.load]/ensure: defined content as '{md5}d1045f54d2798499ca0f030ca0eef920'", > "Notice: /Stage[main]/Apache::Mod::Actions/Apache::Mod[actions]/File[actions.load]/ensure: defined content as '{md5}599866dfaf734f60f7e2d41ee8235515'", > "Notice: /Stage[main]/Apache::Mod::Authn_core/Apache::Mod[authn_core]/File[authn_core.load]/ensure: defined content as '{md5}704d6e8b02b0eca0eba4083960d16c52'", > "Notice: /Stage[main]/Apache::Mod::Cache/Apache::Mod[cache]/File[cache.load]/ensure: defined content as '{md5}01e4d392225b518a65b0f7d6c4e21d29'", > "Notice: /Stage[main]/Apache::Mod::Ext_filter/Apache::Mod[ext_filter]/File[ext_filter.load]/ensure: defined content as '{md5}76d5e0ac3411a4be57ac33ebe2e52ac8'", > "Notice: /Stage[main]/Apache::Mod::Mime/Apache::Mod[mime]/File[mime.load]/ensure: defined content as '{md5}e36257b9efab01459141d423cae57c7c'", > "Notice: /Stage[main]/Apache::Mod::Mime_magic/Apache::Mod[mime_magic]/File[mime_magic.load]/ensure: defined content as '{md5}cb8670bb2fb352aac7ebf3a85d52094c'", > "Notice: /Stage[main]/Apache::Mod::Rewrite/Apache::Mod[rewrite]/File[rewrite.load]/ensure: defined content as '{md5}26e2683352fc1599f29573ff0d934e79'", > "Notice: /Stage[main]/Apache::Mod::Speling/Apache::Mod[speling]/File[speling.load]/ensure: defined content as '{md5}f82e9e6b871a276c324c9eeffcec8a61'", > "Notice: /Stage[main]/Apache::Mod::Suexec/Apache::Mod[suexec]/File[suexec.load]/ensure: defined content as '{md5}c7d5c61c534ba423a79b0ae78ff9be35'", > "Notice: /Stage[main]/Apache::Mod::Version/Apache::Mod[version]/File[version.load]/ensure: defined content as '{md5}1c9243de22ace4dc8266442c48ae0c92'", > "Notice: /Stage[main]/Apache::Mod::Vhost_alias/Apache::Mod[vhost_alias]/File[vhost_alias.load]/ensure: defined content as '{md5}eca907865997d50d5130497665c3f82e'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[auth_digest]/File[auth_digest.load]/ensure: defined content as '{md5}df9e85f8da0b239fe8e698ae7ead4f60'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authn_anon]/File[authn_anon.load]/ensure: defined content as '{md5}bf57b94b5aec35476fc2a2dc3861f132'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authn_dbm]/File[authn_dbm.load]/ensure: defined content as '{md5}90ee8f8ef1a017cacadfda4225e10651'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_dbm]/File[authz_dbm.load]/ensure: defined content as '{md5}c1363277984d22f99b70f7dce8753b60'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_owner]/File[authz_owner.load]/ensure: defined content as '{md5}f30a9be1016df87f195449d9e02d1857'", > "Notice: 
/Stage[main]/Apache::Default_mods/Apache::Mod[expires]/File[expires.load]/ensure: defined content as '{md5}f0825bad1e470de86ffabeb86dcc5d95'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[include]/File[include.load]/ensure: defined content as '{md5}88095a914eedc3c2c184dd5d74c3954c'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[logio]/File[logio.load]/ensure: defined content as '{md5}084533c7a44e9129d0e6df952e2472b6'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[substitute]/File[substitute.load]/ensure: defined content as '{md5}8077c34a71afcf41c8fc644830935915'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[usertrack]/File[usertrack.load]/ensure: defined content as '{md5}e95fbbf030fabec98b948f8dc217775c'", > "Notice: /Stage[main]/Apache::Mod::Alias/Apache::Mod[alias]/File[alias.load]/ensure: defined content as '{md5}3cf2fa309ccae4c29a4b875d0894cd79'", > "Notice: /Stage[main]/Apache::Mod::Authn_file/Apache::Mod[authn_file]/File[authn_file.load]/ensure: defined content as '{md5}d41656680003d7b890267bb73621c60b'", > "Notice: /Stage[main]/Apache::Mod::Autoindex/Apache::Mod[autoindex]/File[autoindex.load]/ensure: defined content as '{md5}515cdf5b573e961a60d2931d39248648'", > "Notice: /Stage[main]/Apache::Mod::Dav/Apache::Mod[dav]/File[dav.load]/ensure: defined content as '{md5}588e496251838c4840c14b28b5aa7881'", > "Notice: /Stage[main]/Apache::Mod::Dav_fs/File[dav_fs.conf]/ensure: defined content as '{md5}899a57534f3d84efa81887ec93c90c9b'", > "Notice: /Stage[main]/Apache::Mod::Dav_fs/Apache::Mod[dav_fs]/File[dav_fs.load]/ensure: defined content as '{md5}2996277c73b1cd684a9a3111c355e0d3'", > "Notice: /Stage[main]/Apache::Mod::Deflate/Apache::Mod[deflate]/File[deflate.load]/ensure: defined content as '{md5}2d1a1afcae0c70557251829a8586eeaf'", > "Notice: /Stage[main]/Apache::Mod::Dir/Apache::Mod[dir]/File[dir.load]/ensure: defined content as '{md5}1bfb1c2a46d7351fc9eb47c659dee068'", > "Notice: /Stage[main]/Apache::Mod::Negotiation/Apache::Mod[negotiation]/File[negotiation.load]/ensure: defined content as '{md5}d262ee6a5f20d9dd7f87770638dc2ccd'", > "Notice: /Stage[main]/Apache::Mod::Setenvif/Apache::Mod[setenvif]/File[setenvif.load]/ensure: defined content as '{md5}ec6c99f7cc8e35bdbcf8028f652c9f6d'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[auth_basic]/File[auth_basic.load]/ensure: defined content as '{md5}494bcf4b843f7908675d663d8dc1bdc8'", > "Notice: /Stage[main]/Apache::Mod::Filter/Apache::Mod[filter]/File[filter.load]/ensure: defined content as '{md5}66a1e2064a140c3e7dca7ac33877700e'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_core]/File[authz_core.load]/ensure: defined content as '{md5}39942569bff2abdb259f9a347c7246bc'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[access_compat]/File[access_compat.load]/ensure: defined content as '{md5}d5feb88bec4570e2dbc41cce7e0de003'", > "Notice: /Stage[main]/Apache::Mod::Authz_user/Apache::Mod[authz_user]/File[authz_user.load]/ensure: defined content as '{md5}63594303ee808423679b1ea13dd5a784'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_groupfile]/File[authz_groupfile.load]/ensure: defined content as '{md5}ae005a36b3ac8c20af36c434561c8a75'", > "Notice: /Stage[main]/Apache::Mod::Env/Apache::Mod[env]/File[env.load]/ensure: defined content as '{md5}d74184d40d0ee24ba02626a188ee7e1a'", > "Notice: /Stage[main]/Apache::Mod::Prefork/Apache::Mpm[prefork]/File[/etc/httpd/conf.modules.d/prefork.load]/ensure: defined content as 
'{md5}157529aafcf03fa491bc924103e4608e'", > "Notice: /Stage[main]/Apache::Mod::Cgi/Apache::Mod[cgi]/File[cgi.load]/ensure: defined content as '{md5}ac20c5c5779b37ab06b480d6485a0881'", > "Notice: /Stage[main]/Apache::Mod::Status/Apache::Mod[status]/File[status.load]/ensure: defined content as '{md5}c7726ef20347ef9a06ef68eeaad79765'", > "Notice: /Stage[main]/Apache::Mod::Ssl/Apache::Mod[ssl]/File[ssl.load]/ensure: defined content as '{md5}e282ac9f82fe5538692a4de3616fb695'", > "Notice: /Stage[main]/Apache::Mod::Socache_shmcb/Apache::Mod[socache_shmcb]/File[socache_shmcb.load]/ensure: defined content as '{md5}ab31a6ea611785f74851b578572e4157'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Apache/Systemd::Dropin_file[httpd.conf]/File[/etc/systemd/system/httpd.service.d]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Apache/Systemd::Dropin_file[httpd.conf]/File[/etc/systemd/system/httpd.service.d/httpd.conf]/ensure: defined content as '{md5}c44e90292b030f86c3b82096b68fe9cc'", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/README]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/autoindex.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/userdir.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/welcome.conf]/ensure: removed", > "Notice: /Stage[main]/Apache::Mod::Ssl/File[ssl.conf]/content: content changed '{md5}9e163ce201541f8aa36fcc1a372ed34d' to '{md5}b6f6f2773db25c777f1db887e7a3f57d'", > "Notice: /Stage[main]/Apache::Mod::Wsgi/File[wsgi.conf]/ensure: defined content as '{md5}8b3feb3fc2563de439920bb2c52cbd11'", > "Notice: /Stage[main]/Apache::Mod::Wsgi/Apache::Mod[wsgi]/File[wsgi.load]/ensure: defined content as '{md5}e1795e051e7aae1f865fde0d3b86a507'", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-base.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-dav.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-lua.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-mpm.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-proxy.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-ssl.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-systemd.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/01-cgi.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/10-wsgi.conf]/ensure: removed", > "Notice: /Stage[main]/Gnocchi::Wsgi::Apache/Openstacklib::Wsgi::Apache[gnocchi_wsgi]/File[/var/www/cgi-bin/gnocchi]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Wsgi::Apache/Openstacklib::Wsgi::Apache[gnocchi_wsgi]/File[gnocchi_wsgi]/ensure: defined content as '{md5}1001349fa771bd31f137b23418ebcced'", > "Notice: /Stage[main]/Gnocchi::Wsgi::Apache/Openstacklib::Wsgi::Apache[gnocchi_wsgi]/Apache::Vhost[gnocchi_wsgi]/Concat[10-gnocchi_wsgi.conf]/File[/etc/httpd/conf.d/10-gnocchi_wsgi.conf]/ensure: defined content as '{md5}edc27d1c550fa1c797ff05266918b558'", > "Notice: Applied catalog in 1.17 seconds", > " Total: 114", > " Success: 114", > " Changed: 114", > " Out of sync: 114", > " Total: 261", > " Skipped: 43", > " Concat file: 0.00", > " Anchor: 0.00", > " Concat fragment: 0.00", > " Augeas: 0.02", > " Gnocchi config: 0.20", > " File: 0.31", > " Config retrieval: 5.01", > " Total: 5.54", > " Resources: 
0.00",
> "Gathering files modified after 2018-09-21 12:23:01.416785765 +0000",
> "2018-09-21 12:23:16,438 DEBUG: 28582 -- + mkdir -p /etc/puppet",
> "+ '[' -n file,file_line,concat,augeas,cron,gnocchi_api_paste_ini,gnocchi_config,gnocchi_config,gnocchi_config ']'",
> "+ TAGS='--tags file,file_line,concat,augeas,cron,gnocchi_api_paste_ini,gnocchi_config,gnocchi_config,gnocchi_config'",
> "+ origin_of_time=/var/lib/config-data/gnocchi.origin_of_time",
> "+ touch /var/lib/config-data/gnocchi.origin_of_time",
> "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,gnocchi_api_paste_ini,gnocchi_config,gnocchi_config,gnocchi_config /etc/config.pp",
> " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/gnocchi/manifests/db.pp\", 26]:[\"/etc/puppet/modules/gnocchi/manifests/init.pp\", 54]",
> "Warning: ModuleLoader: module 'mysql' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules",
> " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/gnocchi/manifests/config.pp\", 29]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/gnocchi.pp\", 31]",
> "Warning: Scope(Class[Gnocchi::Keystone::Authtoken]): The auth_uri parameter is deprecated. Please use www_authenticate_uri instead.",
> "Warning: ModuleLoader: module 'openstacklib' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules",
> "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/gnocchi",
> "++ stat -c %y /var/lib/config-data/gnocchi.origin_of_time",
> "+ echo 'Gathering files modified after 2018-09-21 12:23:01.416785765 +0000'",
> "+ mkdir -p /var/lib/config-data/puppet-generated/gnocchi",
> "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/gnocchi",
> "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/gnocchi.origin_of_time -not -path '/etc/puppet*' -print0",
> "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/gnocchi",
> "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/gnocchi --mtime=1970-01-01",
> "2018-09-21 12:23:16,438 INFO: 28582 -- Removing container: docker-puppet-gnocchi",
> "2018-09-21 12:23:16,512 DEBUG: 28582 -- docker-puppet-gnocchi",
> "2018-09-21 12:23:16,513 INFO: 28582 -- Finished processing puppet configs for gnocchi",
> "2018-09-21 12:23:16,513 INFO: 28582 -- Starting configuration of clustercheck using image 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1",
> "2018-09-21 12:23:16,513 DEBUG: 28582 -- config_volume clustercheck",
> "2018-09-21 12:23:16,513 DEBUG: 28582 -- puppet_tags file,file_line,concat,augeas,cron,file",
> "2018-09-21 12:23:16,513 DEBUG: 28582 -- manifest include ::tripleo::profile::pacemaker::clustercheck",
> "2018-09-21 12:23:16,513 DEBUG: 28582 -- config_image 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1",
> "2018-09-21 12:23:16,513 DEBUG: 28582 -- volumes []",
> "2018-09-21 12:23:16,515 INFO: 28582 -- Removing container: docker-puppet-clustercheck",
> "2018-09-21 12:23:16,597 INFO: 28582 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1",
> "2018-09-21 12:23:18,224 DEBUG: 28581 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-sahara-api ... ",
> "2018-09-20.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-sahara-api",
> "378837c0e24a: Already exists",
> "e17262bc2341: Already exists",
> "86a0e618a180: Already exists",
> "dfa58d50e0a3: Already exists",
> "e9b7a8f97fff: Pulling fs layer",
> "c272c2900b36: Pulling fs layer",
> "c272c2900b36: Verifying Checksum",
> "c272c2900b36: Download complete",
> "e9b7a8f97fff: Verifying Checksum",
> "e9b7a8f97fff: Download complete",
> "e9b7a8f97fff: Pull complete",
> "c272c2900b36: Pull complete",
> "Digest: sha256:e5a352cca14a7335ae224ec9e1b3fb1e4b1fea15135c79e9120df0a1aa1a5d10",
> "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-09-20.1",
> "2018-09-21 12:23:18,228 DEBUG: 28581 -- NET_HOST enabled",
> "2018-09-21 12:23:18,228 DEBUG: 28581 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-sahara --env PUPPET_TAGS=file,file_line,concat,augeas,cron,sahara_api_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template,sahara_engine_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template --env NAME=sahara --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpqMQ2Ts:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-09-20.1",
> "2018-09-21 12:23:23,412 DEBUG: 28582 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-mariadb ... ",
> "2018-09-20.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-mariadb",
> "74e151b94b2d: Pulling fs layer",
> "74e151b94b2d: Verifying Checksum",
> "74e151b94b2d: Download complete",
> "74e151b94b2d: Pull complete",
> "Digest: sha256:e169866bbcd9fff793be5f76655521e084cda4dd809dc854c4e6673852e12ed9",
> "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1",
> "2018-09-21 12:23:23,415 DEBUG: 28582 -- NET_HOST enabled",
> "2018-09-21 12:23:23,415 DEBUG: 28582 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-clustercheck --env PUPPET_TAGS=file,file_line,concat,augeas,cron,file --env NAME=clustercheck --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpmoJZJv:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1",
> "2018-09-21 12:23:26,061 DEBUG: 28580 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend",
> "Notice: Compiled catalog for controller-0.localdomain in environment production in 5.07 seconds",
> "Notice: /Stage[main]/Nova::Db/Nova_config[api_database/connection]/ensure: created",
> "Notice: /Stage[main]/Nova::Db/Nova_config[placement_database/connection]/ensure: created",
> "Notice: /Stage[main]/Nova/Nova_config[glance/api_servers]/ensure: created",
> "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/ssl_only]/ensure: created",
> "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/my_ip]/ensure: created",
> "Notice: /Stage[main]/Nova/Nova_config[api/auth_strategy]/ensure: created",
> "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/image_service]/ensure: created",
> "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/host]/ensure: created",
> "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/ram_allocation_ratio]/ensure: created",
> "Notice: /Stage[main]/Nova/Nova_config[cinder/catalog_info]/ensure: created",
> "Notice: /Stage[main]/Nova/Nova_config[os_vif_linux_bridge/use_ipv6]/ensure: created",
> "Notice: /Stage[main]/Nova/Nova_config[notifications/notify_on_api_faults]/ensure: created",
> "Notice: /Stage[main]/Nova/Nova_config[notifications/notification_format]/ensure: created",
> "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/state_path]/ensure: created",
> "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/service_down_time]/ensure: created",
> "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/rootwrap_config]/ensure: created",
> "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/report_interval]/ensure: created",
> "Notice: /Stage[main]/Nova/Nova_config[notifications/notify_on_state_change]/ensure: created",
> "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_type]/ensure: created",
> "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_url]/ensure: created",
> "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/password]/ensure: created",
> "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_domain_name]/ensure: created",
> "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_name]/ensure: created",
> "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/user_domain_name]/ensure: created",
> "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/username]/ensure: created",
> "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/region_name]/ensure: created",
> "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/os_interface]/ensure: created",
> "Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/backend]/ensure: created",
> "Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/enabled]/ensure: created",
> "Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/memcache_servers]/ensure: created",
> "Notice: /Stage[main]/Nova::Db/Oslo::Db[nova_config]/Nova_config[database/connection]/ensure: created",
> "Notice: /Stage[main]/Nova::Db/Oslo::Db[nova_config]/Nova_config[database/max_retries]/ensure: created",
> "Notice: /Stage[main]/Nova::Db/Oslo::Db[nova_config]/Nova_config[database/db_max_retries]/ensure: created",
> "Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/debug]/ensure: created",
> "Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/log_dir]/ensure: created",
> "Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created",
> "Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/ssl]/ensure: created",
> "Notice: /Stage[main]/Nova/Oslo::Messaging::Default[nova_config]/Nova_config[DEFAULT/transport_url]/ensure: created",
> "Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/driver]/ensure: created",
> "Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/transport_url]/ensure: created",
> "Notice: /Stage[main]/Nova/Oslo::Concurrency[nova_config]/Nova_config[oslo_concurrency/lock_path]/ensure: created",
> "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/www_authenticate_uri]/ensure: created",
> "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/auth_uri]/ensure: created",
> "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/auth_type]/ensure: created",
> "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/memcached_servers]/ensure: created",
> "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/auth_url]/ensure: created",
> "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/username]/ensure: created",
> "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/password]/ensure: created",
> "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/user_domain_name]/ensure: created",
> "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/project_name]/ensure: created",
> "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/project_domain_name]/ensure: created",
> "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}7b62d8a48d6fff2330120c522ed1a05f'",
> "Notice: /Stage[main]/Nova::Wsgi::Apache_placement/File[/etc/httpd/conf.d/00-nova-placement-api.conf]/content: content changed '{md5}611e31d39e1635bfabc0aafc51b43d0b' to '{md5}612d455490cfecc4b51db6656ea39240'",
> "Notice: /Stage[main]/Nova::Wsgi::Apache_placement/Openstacklib::Wsgi::Apache[placement_wsgi]/File[/var/www/cgi-bin/nova]/ensure: created",
> "Notice: /Stage[main]/Nova::Wsgi::Apache_placement/Openstacklib::Wsgi::Apache[placement_wsgi]/File[placement_wsgi]/ensure: defined content as '{md5}2c992c50344eb1765282cb9fb70126db'",
> "Notice: /Stage[main]/Nova::Wsgi::Apache_placement/Openstacklib::Wsgi::Apache[placement_wsgi]/Apache::Vhost[placement_wsgi]/Concat[10-placement_wsgi.conf]/File[/etc/httpd/conf.d/10-placement_wsgi.conf]/ensure: defined content as '{md5}437386bc23ab26fee354534fb1d240ab'",
> "Notice: Applied catalog in 8.13 seconds",
> "    Total: 133",
> "   Success: 133",
> "   Changed: 133",
> "   Out of sync: 133",
> "    Total: 376",
> "   Skipped: 39",
> "   Augeas: 0.03",
> "   Package: 0.13",
> "   File: 0.53",
> "   Total: 13.08",
> "   Last run: 1537532603",
> "   Config retrieval: 5.69",
> "   Nova config: 6.70",
> "   Config: 1537532589",
> "Gathering files modified after 2018-09-21 12:23:03.393864634 +0000",
> "2018-09-21 12:23:26,061 DEBUG: 28580 -- + mkdir -p /etc/puppet",
> "+ '[' -n file,file_line,concat,augeas,cron,nova_config ']'",
> "+ TAGS='--tags file,file_line,concat,augeas,cron,nova_config'",
> "+ origin_of_time=/var/lib/config-data/nova_placement.origin_of_time",
> "+ touch /var/lib/config-data/nova_placement.origin_of_time",
> "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,nova_config /etc/config.pp",
> "PMD: net_mlx5: cannot load glue library: libibverbs.so.1: cannot open shared object file: No such file or directory",
> "PMD: net_mlx5: cannot initialize PMD due to missing run-time dependency on rdma-core libraries (libibverbs, libmlx5)",
> "PMD: net_mlx4: cannot load glue library: libibverbs.so.1: cannot open shared object file: No such file or directory",
> "PMD: net_mlx4: cannot initialize PMD due to missing run-time dependency on rdma-core libraries (libibverbs, libmlx4)",
> "ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)",
> "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Ipv6 instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/tripleo/manifests/profile/base/nova.pp\", 105]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/placement.pp\", 62]",
> "Warning: ModuleLoader: module 'nova' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules",
> " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/config.pp\", 37]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova.pp\", 114]",
> "Warning: Scope(Class[Nova::Db]): placement_database_connection has no effect as of pike, and may be removed in a future release",
> "Warning: Scope(Class[Nova::Db]): placement_slave_connection has no effect as of pike, and may be removed in a future release",
> " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/db.pp\", 126]:[\"/etc/puppet/modules/nova/manifests/init.pp\", 551]",
> "Warning: Scope(Class[Nova]): nova::use_syslog, nova::use_stderr, nova::log_facility, nova::log_dir \\",
> "and nova::debug is deprecated and has been moved to nova::logging class, please set them there.",
> " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/init.pp\", 561]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/placement.pp\", 62]",
> " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/placement.pp\", 101]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova.pp\", 138]",
> "Warning: Scope(Class[Nova::Placement]): The os_region_name parameter is deprecated and will be removed \\",
> "in a future release. Please use region_name instead.",
> "Warning: Scope(Class[Nova::Keystone::Authtoken]): The auth_uri parameter is deprecated. Please use www_authenticate_uri instead.",
> "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/nova_placement",
> "++ stat -c %y /var/lib/config-data/nova_placement.origin_of_time",
> "+ echo 'Gathering files modified after 2018-09-21 12:23:03.393864634 +0000'",
> "+ mkdir -p /var/lib/config-data/puppet-generated/nova_placement",
> "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/nova_placement",
> "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/nova_placement.origin_of_time -not -path '/etc/puppet*' -print0",
> "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/nova_placement",
> "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/nova_placement --mtime=1970-01-01",
> "2018-09-21 12:23:26,061 INFO: 28580 -- Removing container: docker-puppet-nova_placement",
> "2018-09-21 12:23:26,115 DEBUG: 28580 -- docker-puppet-nova_placement",
> "2018-09-21 12:23:26,116 INFO: 28580 -- Finished processing puppet configs for nova_placement",
> "2018-09-21 12:23:26,116 INFO: 28580 -- Starting configuration of aodh using image 192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-09-20.1",
> "2018-09-21 12:23:26,116 DEBUG: 28580 -- config_volume aodh",
> "2018-09-21 12:23:26,116 DEBUG: 28580 -- puppet_tags file,file_line,concat,augeas,cron,aodh_api_paste_ini,aodh_config,aodh_config,aodh_config,aodh_config",
> "2018-09-21 12:23:26,116 DEBUG: 28580 -- manifest include tripleo::profile::base::aodh::api",
> "include tripleo::profile::base::aodh::evaluator",
> "include tripleo::profile::base::aodh::listener",
> "include tripleo::profile::base::aodh::notifier",
> "2018-09-21 12:23:26,116 DEBUG: 28580 -- config_image 192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-09-20.1",
> "2018-09-21 12:23:26,116 DEBUG: 28580 -- volumes []",
> "2018-09-21 12:23:26,118 INFO: 28580 -- Removing container: docker-puppet-aodh",
> "2018-09-21 12:23:26,183 INFO: 28580 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-09-20.1",
> "2018-09-21 12:23:28,328 DEBUG: 28580 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-aodh-api ... ",
> "2018-09-20.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-aodh-api",
> "1877222ec238: Pulling fs layer",
> "7819d346e6fb: Pulling fs layer",
> "7819d346e6fb: Verifying Checksum",
> "7819d346e6fb: Download complete",
> "1877222ec238: Verifying Checksum",
> "1877222ec238: Download complete",
> "1877222ec238: Pull complete",
> "7819d346e6fb: Pull complete",
> "Digest: sha256:004446e61761dfc3be945ce880a7facc5d43cf92337189abd7108c4631eba4cd",
> "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-09-20.1",
> "2018-09-21 12:23:28,331 DEBUG: 28580 -- NET_HOST enabled",
> "2018-09-21 12:23:28,331 DEBUG: 28580 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-aodh --env PUPPET_TAGS=file,file_line,concat,augeas,cron,aodh_api_paste_ini,aodh_config,aodh_config,aodh_config,aodh_config --env NAME=aodh --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpKvy9Sb:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-09-20.1",
> "2018-09-21 12:23:30,863 DEBUG: 28581 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend",
> "Notice: Compiled catalog for controller-0.localdomain in environment production in 2.28 seconds",
> "Notice: /Stage[main]/Sahara/Sahara_config[DEFAULT/plugins]/ensure: created",
> "Notice: /Stage[main]/Sahara/Sahara_config[DEFAULT/host]/ensure: created",
> "Notice: /Stage[main]/Sahara/Sahara_config[DEFAULT/port]/ensure: created",
> "Notice: /Stage[main]/Sahara::Service::Api/Sahara_config[DEFAULT/api_workers]/ensure: created",
> "Notice: /Stage[main]/Sahara::Logging/Oslo::Log[sahara_config]/Sahara_config[DEFAULT/debug]/ensure: created",
> "Notice: /Stage[main]/Sahara::Logging/Oslo::Log[sahara_config]/Sahara_config[DEFAULT/log_dir]/ensure: created",
> "Notice: /Stage[main]/Sahara::Db/Oslo::Db[sahara_config]/Sahara_config[database/connection]/ensure: created",
> "Notice: /Stage[main]/Sahara::Db/Oslo::Db[sahara_config]/Sahara_config[database/max_retries]/ensure: created",
> "Notice: /Stage[main]/Sahara::Db/Oslo::Db[sahara_config]/Sahara_config[database/db_max_retries]/ensure: created",
> "Notice: /Stage[main]/Sahara::Policy/Oslo::Policy[sahara_config]/Sahara_config[oslo_policy/policy_file]/ensure: created",
> "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/www_authenticate_uri]/ensure: created",
> "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/auth_uri]/ensure: created",
> "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/auth_type]/ensure: created",
> "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/auth_url]/ensure: created",
> "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/username]/ensure: created",
> "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/password]/ensure: created",
> "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/user_domain_name]/ensure: created",
> "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/project_name]/ensure: created",
> "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/project_domain_name]/ensure: created",
> "Notice: /Stage[main]/Sahara/Oslo::Messaging::Default[sahara_config]/Sahara_config[DEFAULT/transport_url]/ensure: created",
> "Notice: /Stage[main]/Sahara/Oslo::Messaging::Rabbit[sahara_config]/Sahara_config[oslo_messaging_rabbit/ssl]/ensure: created",
> "Notice: /Stage[main]/Sahara/Oslo::Messaging::Zmq[sahara_config]/Sahara_config[DEFAULT/rpc_zmq_host]/ensure: created",
> "Notice: /Stage[main]/Sahara::Notify/Oslo::Messaging::Notifications[sahara_config]/Sahara_config[oslo_messaging_notifications/driver]/ensure: created",
> "Notice: /Stage[main]/Sahara::Notify/Oslo::Messaging::Notifications[sahara_config]/Sahara_config[oslo_messaging_notifications/transport_url]/ensure: created",
> "Notice: Applied catalog in 1.35 seconds",
> "    Total: 25",
> "   Success: 25",
> "    Total: 197",
> "   Skipped: 23",
> "   Out of sync: 25",
> "   Changed: 25",
> "   File: 0.00",
> "   Augeas: 0.01",
> "   Package: 0.05",
> "   Sahara config: 1.08",
> "   Last run: 1537532609",
> "   Config retrieval: 2.57",
> "   Total: 3.72",
> "   Config: 1537532605",
> "Gathering files modified after 2018-09-21 12:23:18.455445889 +0000",
> "2018-09-21 12:23:30,863 DEBUG: 28581 -- + mkdir -p /etc/puppet",
> "+ '[' -n file,file_line,concat,augeas,cron,sahara_api_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template,sahara_engine_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template ']'",
> "+ TAGS='--tags file,file_line,concat,augeas,cron,sahara_api_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template,sahara_engine_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template'",
> "+ origin_of_time=/var/lib/config-data/sahara.origin_of_time",
> "+ touch /var/lib/config-data/sahara.origin_of_time",
> "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,sahara_api_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template,sahara_engine_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template /etc/config.pp",
> "Warning: ModuleLoader: module 'sahara' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules",
> " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/sahara/manifests/db.pp\", 69]:[\"/etc/puppet/modules/sahara/manifests/init.pp\", 380]",
> " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/sahara/manifests/policy.pp\", 34]:[\"/etc/puppet/modules/sahara/manifests/init.pp\", 381]",
> "Warning: Scope(Class[Sahara]): The use_neutron parameter has been deprecated and will be removed in the future release.",
> "Warning: Scope(Class[Sahara]): sahara::admin_user, sahara::admin_password, sahara::auth_uri, sahara::identity_uri, sahara::admin_tenant_name and sahara::memcached_servers are deprecated. Please use sahara::keystone::authtoken::* parameters instead.",
> "Warning: Scope(Class[Sahara::Keystone::Authtoken]): The auth_uri parameter is deprecated. Please use www_authenticate_uri instead.",
> "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/sahara",
> "++ stat -c %y /var/lib/config-data/sahara.origin_of_time",
> "+ echo 'Gathering files modified after 2018-09-21 12:23:18.455445889 +0000'",
> "+ mkdir -p /var/lib/config-data/puppet-generated/sahara",
> "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/sahara",
> "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/sahara.origin_of_time -not -path '/etc/puppet*' -print0",
> "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/sahara",
> "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/sahara --mtime=1970-01-01",
> "2018-09-21 12:23:30,864 INFO: 28581 -- Removing container: docker-puppet-sahara",
> "2018-09-21 12:23:30,902 DEBUG: 28581 -- docker-puppet-sahara",
> "2018-09-21 12:23:30,903 INFO: 28581 -- Finished processing puppet configs for sahara",
> "2018-09-21 12:23:30,903 INFO: 28581 -- Starting configuration of mysql using image 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1",
> "2018-09-21 12:23:30,903 DEBUG: 28581 -- config_volume mysql",
> "2018-09-21 12:23:30,903 DEBUG: 28581 -- puppet_tags file,file_line,concat,augeas,cron,file",
> "2018-09-21 12:23:30,903 DEBUG: 28581 -- manifest ['Mysql_datadir', 'Mysql_user', 'Mysql_database', 'Mysql_grant', 'Mysql_plugin'].each |String $val| { noop_resource($val) }",
> "2018-09-21 12:23:30,903 DEBUG: 28581 -- config_image 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1",
> "2018-09-21 12:23:30,903 DEBUG: 28581 -- volumes []",
> "2018-09-21 12:23:30,904 INFO: 28581 -- Removing container: docker-puppet-mysql",
> "2018-09-21 12:23:30,952 INFO: 28581 -- Image already exists: 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1",
> "2018-09-21 12:23:30,955 DEBUG: 28581 -- NET_HOST enabled",
> "2018-09-21 12:23:30,955 DEBUG: 28581 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-mysql --env PUPPET_TAGS=file,file_line,concat,augeas,cron,file --env NAME=mysql --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmp8meHyO:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1",
> "2018-09-21 12:23:32,123 DEBUG: 28582 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend",
> "Notice: Compiled catalog for controller-0.localdomain in environment production in 0.49 seconds",
> "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Clustercheck/File[/etc/sysconfig/clustercheck]/ensure: defined content as '{md5}b12a3becf61cd91d8df58bca472b3ec0'",
> "Notice: /Stage[main]/Xinetd/File[/etc/xinetd.conf]/content: content changed '{md5}9ff8cc688dd9f0dfc45e5afd25c427a7' to '{md5}7d37008224e71625019cb48768f267e7'",
> "Notice: /Stage[main]/Xinetd/File[/etc/xinetd.conf]/mode: mode changed '0600' to '0644'",
> "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Clustercheck/Xinetd::Service[galera-monitor]/File[/etc/xinetd.d/galera-monitor]/ensure: defined content as '{md5}091526dde6e6d916fa3ccf5773fdc55e'",
> "Notice: Applied catalog in 0.09 seconds",
> "    Total: 4",
> "   Success: 4",
> "    Total: 13",
> "   Out of sync: 3",
> "   Changed: 3",
> "   Skipped: 9",
> "   File: 0.07",
> "   Config retrieval: 0.56",
> "   Total: 0.63",
> "   Last run: 1537532610",
> "   Config: 1537532610",
> "Gathering files modified after 2018-09-21 12:23:23.611637154 +0000",
> "2018-09-21 12:23:32,123 DEBUG: 28582 -- + mkdir -p /etc/puppet",
> "+ '[' -n file,file_line,concat,augeas,cron,file ']'",
> "+ TAGS='--tags file,file_line,concat,augeas,cron,file'",
> "+ origin_of_time=/var/lib/config-data/clustercheck.origin_of_time",
> "+ touch /var/lib/config-data/clustercheck.origin_of_time",
> "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,file /etc/config.pp",
> "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/clustercheck",
> "++ stat -c %y /var/lib/config-data/clustercheck.origin_of_time",
> "+ echo 'Gathering files modified after 2018-09-21 12:23:23.611637154 +0000'",
> "+ mkdir -p /var/lib/config-data/puppet-generated/clustercheck",
> "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/clustercheck",
> "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/clustercheck.origin_of_time -not -path '/etc/puppet*' -print0",
> "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/clustercheck",
> "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/clustercheck --mtime=1970-01-01",
> "2018-09-21 12:23:32,123 INFO: 28582 -- Removing container: docker-puppet-clustercheck",
> "2018-09-21 12:23:32,172 DEBUG: 28582 -- docker-puppet-clustercheck",
> "2018-09-21 12:23:32,172 INFO: 28582 -- Finished processing puppet configs for clustercheck",
> "2018-09-21 12:23:32,172 INFO: 28582 -- Starting configuration of redis using image 192.168.24.1:8787/rhosp14/openstack-redis:2018-09-20.1",
> "2018-09-21 12:23:32,172 DEBUG: 28582 -- config_volume redis",
> "2018-09-21 12:23:32,172 DEBUG: 28582 -- puppet_tags file,file_line,concat,augeas,cron,exec",
> "2018-09-21 12:23:32,172 DEBUG: 28582 -- manifest include ::tripleo::profile::pacemaker::database::redis_bundle",
> "2018-09-21 12:23:32,172 DEBUG: 28582 -- config_image 192.168.24.1:8787/rhosp14/openstack-redis:2018-09-20.1",
> "2018-09-21 12:23:32,172 DEBUG: 28582 -- volumes []",
> "2018-09-21 12:23:32,174 INFO: 28582 -- Removing container: docker-puppet-redis",
> "2018-09-21 12:23:32,246 INFO: 28582 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-redis:2018-09-20.1",
> "2018-09-21 12:23:35,914 DEBUG: 28582 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-redis ... ",
> "2018-09-20.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-redis",
> "0fa6e671bf11: Pulling fs layer",
> "acc8da91de83: Pulling fs layer",
> "0fa6e671bf11: Download complete",
> "0fa6e671bf11: Pull complete",
> "acc8da91de83: Verifying Checksum",
> "acc8da91de83: Download complete",
> "acc8da91de83: Pull complete",
> "Digest: sha256:f8f01bc2303ecd063fc5f9338cc597a9aa2cc066f4f87fb1840ddb97fad379a1",
> "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-redis:2018-09-20.1",
> "2018-09-21 12:23:35,917 DEBUG: 28582 -- NET_HOST enabled",
> "2018-09-21 12:23:35,917 DEBUG: 28582 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-redis --env PUPPET_TAGS=file,file_line,concat,augeas,cron,exec --env NAME=redis --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpXSVy2K:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-redis:2018-09-20.1",
> "2018-09-21 12:23:43,723 DEBUG: 28580 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend",
> "Notice: Compiled catalog for controller-0.localdomain in environment production in 4.16 seconds",
> "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/auth_url]/ensure: created",
> "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/region_name]/ensure: created",
> "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/username]/ensure: created",
> "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/password]/ensure: created",
> "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/project_name]/ensure: created",
> "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/project_domain_id]/ensure: created",
> "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/user_domain_id]/ensure: created",
> "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/auth_type]/ensure: created",
> "Notice: /Stage[main]/Aodh::Api/Aodh_config[api/gnocchi_external_project_owner]/ensure: created",
> "Notice: /Stage[main]/Aodh::Evaluator/Aodh_config[coordination/backend_url]/ensure: created",
> "Notice: /Stage[main]/Aodh::Db/Oslo::Db[aodh_config]/Aodh_config[database/connection]/ensure: created",
> "Notice: /Stage[main]/Aodh::Logging/Oslo::Log[aodh_config]/Aodh_config[DEFAULT/debug]/ensure: created",
> "Notice: /Stage[main]/Aodh::Logging/Oslo::Log[aodh_config]/Aodh_config[DEFAULT/log_dir]/ensure: created",
> "Notice: /Stage[main]/Aodh/Oslo::Messaging::Rabbit[aodh_config]/Aodh_config[oslo_messaging_rabbit/ssl]/ensure: created",
> "Notice: /Stage[main]/Aodh/Oslo::Messaging::Default[aodh_config]/Aodh_config[DEFAULT/transport_url]/ensure: created",
> "Notice: /Stage[main]/Aodh/Oslo::Messaging::Notifications[aodh_config]/Aodh_config[oslo_messaging_notifications/driver]/ensure: created",
> "Notice: /Stage[main]/Aodh/Oslo::Messaging::Notifications[aodh_config]/Aodh_config[oslo_messaging_notifications/transport_url]/ensure: created",
> "Notice: /Stage[main]/Aodh::Policy/Oslo::Policy[aodh_config]/Aodh_config[oslo_policy/policy_file]/ensure: created",
> "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/www_authenticate_uri]/ensure: created",
> "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/auth_uri]/ensure: created",
> "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/auth_type]/ensure: created",
> "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/auth_url]/ensure: created",
> "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/username]/ensure: created",
> "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/password]/ensure: created",
> "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/user_domain_name]/ensure: created",
> "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/project_name]/ensure: created",
> "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/project_domain_name]/ensure: created",
> "Notice: /Stage[main]/Aodh::Api/Oslo::Middleware[aodh_config]/Aodh_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created",
> "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}90300e8c0a2fabc90a0c20348ed8f16b'",
> "Notice: /Stage[main]/Aodh::Wsgi::Apache/Openstacklib::Wsgi::Apache[aodh_wsgi]/File[/var/www/cgi-bin/aodh]/owner: owner changed 'root' to 'aodh'",
> "Notice: /Stage[main]/Aodh::Wsgi::Apache/Openstacklib::Wsgi::Apache[aodh_wsgi]/File[/var/www/cgi-bin/aodh]/group: group changed 'root' to 'aodh'",
> "Notice: /Stage[main]/Aodh::Wsgi::Apache/Openstacklib::Wsgi::Apache[aodh_wsgi]/File[aodh_wsgi]/ensure: defined content as '{md5}09d823939c45501c11f2096289fe70cf'",
> "Notice: /Stage[main]/Aodh::Wsgi::Apache/Openstacklib::Wsgi::Apache[aodh_wsgi]/Apache::Vhost[aodh_wsgi]/Concat[10-aodh_wsgi.conf]/File[/etc/httpd/conf.d/10-aodh_wsgi.conf]/ensure: defined content as '{md5}fa9116220a4d421c43bcbeeabcee4930'",
> "Notice: Applied catalog in 1.75 seconds",
> "    Total: 110",
> "   Success: 110",
> "   Changed: 109",
> "   Out of sync: 109",
> "    Total: 329",
> "   Skipped: 40",
> "   File: 0.25",
> "   Aodh config: 0.77",
> "   Last run: 1537532621",
> "   Config retrieval: 4.66",
> "   Total: 5.74",
> "   Config: 1537532615",
> "Gathering files modified after 2018-09-21 12:23:28.538816360 +0000",
> "2018-09-21 12:23:43,724 DEBUG: 28580 -- + mkdir -p /etc/puppet",
> "+ '[' -n file,file_line,concat,augeas,cron,aodh_api_paste_ini,aodh_config,aodh_config,aodh_config,aodh_config ']'",
> "+ TAGS='--tags file,file_line,concat,augeas,cron,aodh_api_paste_ini,aodh_config,aodh_config,aodh_config,aodh_config'",
> "+ origin_of_time=/var/lib/config-data/aodh.origin_of_time",
> "+ touch /var/lib/config-data/aodh.origin_of_time",
> "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,aodh_api_paste_ini,aodh_config,aodh_config,aodh_config,aodh_config /etc/config.pp",
> "Warning: Unknown variable: 'undef'. at /etc/puppet/modules/aodh/manifests/init.pp:290:41",
> " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/aodh/manifests/config.pp\", 33]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/aodh.pp\", 123]",
> "Warning: Scope(Class[Aodh::Keystone::Authtoken]): The auth_uri parameter is deprecated. Please use www_authenticate_uri instead.",
> "Warning: Scope(Class[Aodh::Api]): host has no effect as of Newton and will be removed in a future \\",
> "release. aodh::wsgi::apache supports setting a host via bind_host.",
> " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/oslo/manifests/db.pp\", 132]:",
> "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/aodh",
> "++ stat -c %y /var/lib/config-data/aodh.origin_of_time",
> "+ echo 'Gathering files modified after 2018-09-21 12:23:28.538816360 +0000'",
> "+ mkdir -p /var/lib/config-data/puppet-generated/aodh",
> "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/aodh",
> "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/aodh.origin_of_time -not -path '/etc/puppet*' -print0",
> "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/aodh",
> "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/aodh --mtime=1970-01-01",
> "2018-09-21 12:23:43,724 INFO: 28580 -- Removing container: docker-puppet-aodh",
> "2018-09-21 12:23:43,770 DEBUG: 28580 -- docker-puppet-aodh",
> "2018-09-21 12:23:43,771 INFO: 28580 -- Finished processing puppet configs for aodh",
> "2018-09-21 12:23:43,771 INFO: 28580 -- Starting configuration of heat_api using image 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-20.1",
> "2018-09-21 12:23:43,771 DEBUG: 28580 -- config_volume heat_api",
> "2018-09-21 12:23:43,771 DEBUG: 28580 -- puppet_tags file,file_line,concat,augeas,cron,heat_config,file,concat,file_line",
> "2018-09-21 12:23:43,771 DEBUG: 28580 -- manifest include ::tripleo::profile::base::heat::api",
> "2018-09-21 12:23:43,771 DEBUG: 28580 -- config_image 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-20.1",
> "2018-09-21 12:23:43,771 DEBUG: 28580 -- volumes []",
> "2018-09-21 12:23:43,773 INFO: 28580 -- Removing container: docker-puppet-heat_api",
> "2018-09-21 12:23:43,842 INFO: 28580 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-20.1",
> "2018-09-21 12:23:44,764 DEBUG: 28581 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend",
> "Notice: Compiled catalog for controller-0.localdomain in environment production in 4.49 seconds",
> "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/ensure: defined content as '{md5}45315a4298fe7ee61818e38c304b810f'",
> "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/ensure: defined content as '{md5}0df3f6bc676cf9b7c80a6b9d1de45820'",
> "Notice: /Stage[main]/Mysql::Server::Config/File[mysql-config-file]/content: content changed '{md5}af90358207ccfecae7af249d5ef7dd3e' to '{md5}8c6cfba441ae40b019726afa035445cd'",
> "Notice: /Stage[main]/Mysql::Server::Installdb/File[/var/log/mariadb/mariadb.log]/ensure: created",
> "Notice: Applied catalog in 0.40 seconds",
> "   Skipped: 225",
> "    Total: 230",
> "   Out of sync: 4",
> "   Changed: 4",
> "   File: 0.03",
> "   Last run: 1537532623",
> "   Config retrieval: 4.88",
> "   Total: 4.91",
> "   Config: 1537532617",
> "Gathering files modified after 2018-09-21 12:23:31.217912370 +0000",
> "2018-09-21 12:23:44,764 DEBUG: 28581 -- + mkdir -p /etc/puppet",
> "+ origin_of_time=/var/lib/config-data/mysql.origin_of_time",
> "+ touch /var/lib/config-data/mysql.origin_of_time",
> "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Array instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/tripleo/manifests/profile/pacemaker/database/mysql_bundle.pp\", 133]:[\"/etc/config.pp\", 4]",
> " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/tripleo/manifests/profile/base/database/mysql.pp\", 103]:[\"/etc/config.pp\", 4]",
> " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/aodh/manifests/db/mysql.pp\", 57]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/database/mysql.pp\", 175]",
> "Warning: ModuleLoader: module 'cinder' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules",
> " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/openstacklib/manifests/db/mysql/host_access.pp\", 43]:",
> "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/mysql",
> "++ stat -c %y /var/lib/config-data/mysql.origin_of_time",
> "+ echo 'Gathering files modified after 2018-09-21 12:23:31.217912370 +0000'",
> "+ mkdir -p /var/lib/config-data/puppet-generated/mysql",
> "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/mysql",
> "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/mysql.origin_of_time -not -path '/etc/puppet*' -print0",
> "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/mysql",
> "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/mysql --mtime=1970-01-01",
> "2018-09-21 12:23:44,764 INFO: 28581 -- Removing container: docker-puppet-mysql",
> "2018-09-21 12:23:44,809 DEBUG: 28581 -- docker-puppet-mysql",
> "2018-09-21 12:23:44,809 INFO: 28581 -- Finished processing puppet configs for mysql",
> "2018-09-21 12:23:44,809 INFO: 28581 -- Starting configuration of nova using image 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1",
> "2018-09-21 12:23:44,809 DEBUG: 28581 -- config_volume nova",
> "2018-09-21 12:23:44,809 DEBUG: 28581 -- puppet_tags file,file_line,concat,augeas,cron,nova_config,nova_config,nova_config,nova_config,nova_config",
> "2018-09-21 12:23:44,810 DEBUG: 28581 -- manifest ['Nova_cell_v2'].each |String $val| { noop_resource($val) }",
> "include tripleo::profile::base::nova::conductor",
> "include tripleo::profile::base::nova::consoleauth",
> "include tripleo::profile::base::nova::scheduler",
> "include tripleo::profile::base::nova::vncproxy",
> "2018-09-21 12:23:44,810 DEBUG: 28581 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1",
> "2018-09-21 12:23:44,810 DEBUG: 28581 -- volumes []",
> "2018-09-21 12:23:44,811 INFO: 28581 -- Removing container: docker-puppet-nova",
> "2018-09-21 12:23:44,880 INFO: 28581 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1",
> "2018-09-21 12:23:44,993 DEBUG: 28582 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend",
> "Notice: Compiled catalog for controller-0.localdomain in environment production in 0.97 seconds",
> "Notice: /Stage[main]/Redis::Config/File[/etc/redis]/ensure: created",
> "Notice: /Stage[main]/Redis::Config/File[/var/log/redis]/mode: mode changed '0750' to '0755'",
> "Notice: /Stage[main]/Redis::Config/File[/var/lib/redis]/mode: mode changed '0750' to '0755'",
> "Notice: /Stage[main]/Redis::Ulimit/File[/etc/security/limits.d/redis.conf]/ensure: defined content as '{md5}a2f723773964f5ea42b6c7c5d6b72208'",
> "Notice: /Stage[main]/Redis::Ulimit/File[/etc/systemd/system/redis.service.d/limit.conf]/mode: mode changed '0644' to '0444'",
> "Notice: /Stage[main]/Redis::Config/Redis::Instance[default]/File[/etc/redis.conf.puppet]/ensure: defined content as '{md5}97b1dbb27fe87d869eb0f6421f1b8aff'",
> "Notice: /Stage[main]/Redis::Config/Redis::Instance[default]/Exec[cp -p /etc/redis.conf.puppet /etc/redis.conf]: Triggered 'refresh' from 1 events",
> "Notice: Applied catalog in 0.07 seconds",
> "    Total: 6",
> "   Success: 6",
> "   Restarted: 1",
> "   Skipped: 11",
> "    Total: 21",
> "   Out of sync: 6",
> "   Changed: 6",
> "   Exec: 0.00",
> "   Config retrieval: 1.09",
> "   Total: 1.12",
> "   Config: 1537532622",
> "Gathering files modified after 2018-09-21 12:23:36.113085220 +0000",
> "2018-09-21 12:23:44,993 DEBUG: 28582 -- + mkdir -p /etc/puppet",
> "+ '[' -n file,file_line,concat,augeas,cron,exec ']'",
> "+ TAGS='--tags file,file_line,concat,augeas,cron,exec'",
> "+ origin_of_time=/var/lib/config-data/redis.origin_of_time",
> "+ touch /var/lib/config-data/redis.origin_of_time",
> "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,exec /etc/config.pp",
> "Warning: ModuleLoader: module 'redis' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules",
> "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/redis",
> "++ stat -c %y /var/lib/config-data/redis.origin_of_time",
> "+ echo 'Gathering files modified after 2018-09-21 12:23:36.113085220 +0000'",
> "+ mkdir -p /var/lib/config-data/puppet-generated/redis",
> "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/redis",
> "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/redis.origin_of_time -not -path '/etc/puppet*' -print0",
> "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/redis",
> "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/redis --mtime=1970-01-01",
> "2018-09-21 12:23:44,993 INFO: 28582 -- Removing container: docker-puppet-redis",
> "2018-09-21 12:23:45,030 DEBUG: 28582 -- docker-puppet-redis",
> "2018-09-21 12:23:45,030 INFO: 28582 -- Finished processing puppet configs for redis",
> "2018-09-21 12:23:45,031 INFO: 28582 -- Starting configuration of keystone using image 192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-20.1",
> "2018-09-21 12:23:45,031 DEBUG: 28582 -- config_volume keystone",
> "2018-09-21 12:23:45,031 DEBUG: 28582 -- puppet_tags file,file_line,concat,augeas,cron,keystone_config,keystone_domain_config",
> "2018-09-21 12:23:45,031 DEBUG: 28582 -- manifest ['Keystone_user', 'Keystone_endpoint', 'Keystone_domain', 'Keystone_tenant', 'Keystone_user_role', 'Keystone_role', 'Keystone_service'].each |String $val| { noop_resource($val) }",
> "2018-09-21 12:23:45,031 DEBUG: 28582 -- config_image 192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-20.1",
> "2018-09-21 12:23:45,031 DEBUG: 28582 -- volumes []",
> "2018-09-21 12:23:45,032 INFO: 28582 -- Removing container: docker-puppet-keystone",
> "2018-09-21 12:23:45,108 INFO: 28582 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-20.1",
> "2018-09-21 12:23:46,088 DEBUG: 28580 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-heat-api ... ",
> "2018-09-20.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-heat-api",
> "c59832cf029f: Pulling fs layer",
> "9fac667574ff: Pulling fs layer",
> "9fac667574ff: Verifying Checksum",
> "9fac667574ff: Download complete",
> "c59832cf029f: Verifying Checksum",
> "c59832cf029f: Download complete",
> "c59832cf029f: Pull complete",
> "9fac667574ff: Pull complete",
> "Digest: sha256:e741a656885c01e16e1c62629eb57b73d1fd1d6057ec9938ec0943f6add881bb",
> "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-20.1",
> "2018-09-21 12:23:46,092 DEBUG: 28580 -- NET_HOST enabled",
> "2018-09-21 12:23:46,092 DEBUG: 28580 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-heat_api --env PUPPET_TAGS=file,file_line,concat,augeas,cron,heat_config,file,concat,file_line --env NAME=heat_api --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpOyosVG:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-20.1",
> "2018-09-21 12:23:47,847 DEBUG: 28582 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-keystone ... ",
> "2018-09-20.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-keystone",
> "0dff9c680f5a: Pulling fs layer",
> "43a28506a8cb: Pulling fs layer",
> "43a28506a8cb: Verifying Checksum",
> "43a28506a8cb: Download complete",
> "0dff9c680f5a: Verifying Checksum",
> "0dff9c680f5a: Download complete",
> "0dff9c680f5a: Pull complete",
> "43a28506a8cb: Pull complete",
> "Digest: sha256:c9877971cf6754193162a347aa1366e649899abf07e616c51e997f986c315282",
> "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-20.1",
> "2018-09-21 12:23:47,851 DEBUG: 28582 -- NET_HOST enabled",
> "2018-09-21 12:23:47,851 DEBUG: 28582 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-keystone --env PUPPET_TAGS=file,file_line,concat,augeas,cron,keystone_config,keystone_domain_config --env NAME=keystone --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpvMQQFe:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-20.1",
> "2018-09-21 12:23:48,343 DEBUG: 28581 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-nova-api ... ",
> "2018-09-20.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-nova-api",
> "2d54cceaa5bd: Already exists",
> "d1293e07510e: Pulling fs layer",
> "d1293e07510e: Verifying Checksum",
> "d1293e07510e: Download complete",
> "d1293e07510e: Pull complete",
> "Digest: sha256:88f19d0f084ebd24a1b6ff72ae010d604f066050d1d415f5ceaff552ab0fbcbd",
> "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1",
> "2018-09-21 12:23:48,347 DEBUG: 28581 -- NET_HOST enabled",
> "2018-09-21 12:23:48,347 DEBUG: 28581 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-nova --env PUPPET_TAGS=file,file_line,concat,augeas,cron,nova_config,nova_config,nova_config,nova_config,nova_config --env NAME=nova --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmp4wxs6K:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1",
> "2018-09-21 12:24:02,940 DEBUG: 28580 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend",
> "Notice: Compiled catalog for controller-0.localdomain in environment production in 4.30 seconds",
> "Notice: /Stage[main]/Heat::Cron::Purge_deleted/Cron[heat-manage purge_deleted]/ensure: created",
> "Notice: /Stage[main]/Heat::Keystone::Domain/Heat_config[DEFAULT/stack_domain_admin]/ensure: created",
> "Notice: /Stage[main]/Heat::Keystone::Domain/Heat_config[DEFAULT/stack_domain_admin_password]/ensure: created",
> "Notice: /Stage[main]/Heat::Keystone::Domain/Heat_config[DEFAULT/stack_user_domain_name]/ensure: created",
> "Notice: /Stage[main]/Heat/Heat_config[trustee/auth_type]/ensure: created",
> "Notice: /Stage[main]/Heat/Heat_config[trustee/auth_url]/ensure: created",
> "Notice: /Stage[main]/Heat/Heat_config[trustee/username]/ensure: created",
> "Notice: /Stage[main]/Heat/Heat_config[trustee/password]/ensure: created",
> "Notice: /Stage[main]/Heat/Heat_config[trustee/project_domain_name]/ensure: created",
> "Notice: /Stage[main]/Heat/Heat_config[trustee/user_domain_name]/ensure: created",
> "Notice: /Stage[main]/Heat/Heat_config[clients_keystone/auth_uri]/ensure: created",
> "Notice: /Stage[main]/Heat/Heat_config[DEFAULT/max_json_body_size]/ensure: created",
> "Notice: /Stage[main]/Heat/Heat_config[ec2authtoken/auth_uri]/ensure: created",
> "Notice: /Stage[main]/Heat/Heat_config[yaql/limit_iterators]/ensure: created",
> "Notice: /Stage[main]/Heat/Heat_config[yaql/memory_quota]/ensure: created",
> "Notice: /Stage[main]/Heat::Api/Heat_config[heat_api/bind_host]/ensure: created",
> "Notice: /Stage[main]/Heat::Logging/Oslo::Log[heat_config]/Heat_config[DEFAULT/debug]/ensure: created",
> "Notice: /Stage[main]/Heat::Logging/Oslo::Log[heat_config]/Heat_config[DEFAULT/log_dir]/ensure: created",
> "Notice: /Stage[main]/Heat::Db/Oslo::Db[heat_config]/Heat_config[database/connection]/ensure: created",
> "Notice: /Stage[main]/Heat::Db/Oslo::Db[heat_config]/Heat_config[database/max_retries]/ensure: created",
> "Notice: /Stage[main]/Heat::Db/Oslo::Db[heat_config]/Heat_config[database/db_max_retries]/ensure: created",
> "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/www_authenticate_uri]/ensure: created",
> "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/auth_uri]/ensure: created",
> "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/auth_type]/ensure: created",
> "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/auth_url]/ensure: created",
> "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/username]/ensure: created",
> "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/password]/ensure: created",
> "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/user_domain_name]/ensure: created",
> "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/project_name]/ensure: created",
> "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/project_domain_name]/ensure: created",
> "Notice: /Stage[main]/Heat/Oslo::Messaging::Rabbit[heat_config]/Heat_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created",
> "Notice: /Stage[main]/Heat/Oslo::Messaging::Rabbit[heat_config]/Heat_config[oslo_messaging_rabbit/ssl]/ensure: created",
> "Notice: /Stage[main]/Heat/Oslo::Messaging::Notifications[heat_config]/Heat_config[oslo_messaging_notifications/driver]/ensure: created",
> "Notice: /Stage[main]/Heat/Oslo::Messaging::Notifications[heat_config]/Heat_config[oslo_messaging_notifications/transport_url]/ensure: created",
> "Notice: /Stage[main]/Heat/Oslo::Messaging::Default[heat_config]/Heat_config[DEFAULT/rpc_response_timeout]/ensure: created",
> "Notice: /Stage[main]/Heat/Oslo::Messaging::Default[heat_config]/Heat_config[DEFAULT/transport_url]/ensure: created",
> "Notice: /Stage[main]/Heat/Oslo::Middleware[heat_config]/Heat_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created",
> "Notice: /Stage[main]/Heat::Cors/Oslo::Cors[heat_config]/Heat_config[cors/expose_headers]/ensure: created",
> "Notice: /Stage[main]/Heat::Cors/Oslo::Cors[heat_config]/Heat_config[cors/max_age]/ensure: created",
> "Notice: /Stage[main]/Heat::Cors/Oslo::Cors[heat_config]/Heat_config[cors/allow_headers]/ensure: created",
> "Notice: /Stage[main]/Heat::Policy/Oslo::Policy[heat_config]/Heat_config[oslo_policy/policy_file]/ensure: created",
> "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}8d670250beac8e80defdb1f727ade745'",
> "Notice: /Stage[main]/Heat::Wsgi::Apache_api/Heat::Wsgi::Apache[api]/Openstacklib::Wsgi::Apache[heat_api_wsgi]/File[/var/www/cgi-bin/heat]/ensure: created",
> "Notice: /Stage[main]/Heat::Wsgi::Apache_api/Heat::Wsgi::Apache[api]/Openstacklib::Wsgi::Apache[heat_api_wsgi]/File[heat_api_wsgi]/ensure: defined content as '{md5}640891728ce5d46ae40234228561597c'",
> "Notice: /Stage[main]/Heat::Wsgi::Apache_api/Heat::Wsgi::Apache[api]/Openstacklib::Wsgi::Apache[heat_api_wsgi]/Apache::Vhost[heat_api_wsgi]/Concat[10-heat_api_wsgi.conf]/File[/etc/httpd/conf.d/10-heat_api_wsgi.conf]/ensure: defined content as '{md5}b8a7b37ba62328de3aa00ead60950c99'",
> "Notice: Applied catalog in 2.70 seconds",
> "    Total: 121",
> "   Success: 121",
> "   Changed: 121",
> "   Out of sync: 121",
> "   Skipped: 32",
> "    Total: 336",
> "   Cron: 0.01",
> "   File: 0.38",
> "   Heat config: 1.63",
> "   Last run: 1537532640",
> "   Config retrieval: 4.86",
> "   Total: 6.93",
> "   Config: 1537532633",
> "Gathering files modified after 2018-09-21 12:23:46.294434305 +0000",
> "2018-09-21 12:24:02,940 DEBUG: 28580 -- + mkdir -p /etc/puppet",
> "+ '[' -n file,file_line,concat,augeas,cron,heat_config,file,concat,file_line ']'",
> "+ TAGS='--tags file,file_line,concat,augeas,cron,heat_config,file,concat,file_line'",
> "+ origin_of_time=/var/lib/config-data/heat_api.origin_of_time",
> "+ touch /var/lib/config-data/heat_api.origin_of_time",
> "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,heat_config,file,concat,file_line /etc/config.pp",
> " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/heat/manifests/db.pp\", 75]:[\"/etc/puppet/modules/heat/manifests/init.pp\", 363]",
> "Warning: Scope(Class[Heat::Keystone::Authtoken]): The auth_uri parameter is deprecated. Please use www_authenticate_uri instead.",
> " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/heat/manifests/config.pp\", 33]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/heat.pp\", 128]",
> "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/heat_api",
> "++ stat -c %y /var/lib/config-data/heat_api.origin_of_time",
> "+ echo 'Gathering files modified after 2018-09-21 12:23:46.294434305 +0000'",
> "+ mkdir -p /var/lib/config-data/puppet-generated/heat_api",
> "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/heat_api",
> "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/heat_api.origin_of_time -not -path '/etc/puppet*' -print0",
> "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/heat_api",
> "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/heat_api --mtime=1970-01-01",
> "2018-09-21 12:24:02,940 INFO: 28580 -- Removing container: docker-puppet-heat_api",
> "2018-09-21 12:24:03,006 DEBUG: 28580 -- docker-puppet-heat_api",
> "2018-09-21 12:24:03,006 INFO: 28580 -- Finished processing puppet configs for heat_api",
> "2018-09-21 12:24:03,006 INFO: 28580 -- Starting configuration of heat using image 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-20.1",
> "2018-09-21 12:24:03,006 DEBUG: 28580 -- config_volume heat",
> "2018-09-21 12:24:03,006 DEBUG: 28580 -- puppet_tags file,file_line,concat,augeas,cron,heat_config,file,concat,file_line",
> "2018-09-21 12:24:03,007 DEBUG: 28580 -- manifest include ::tripleo::profile::base::heat::engine",
> "2018-09-21 12:24:03,007 DEBUG: 28580 -- config_image 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-20.1",
> "2018-09-21 12:24:03,007 DEBUG: 28580 -- volumes []",
> "2018-09-21 12:24:03,009 INFO: 28580 -- Removing container: docker-puppet-heat",
> "2018-09-21 12:24:03,063 INFO: 28580 -- Image already exists: 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-20.1",
> "2018-09-21 12:24:03,066 DEBUG: 28580 -- NET_HOST enabled",
> "2018-09-21 12:24:03,066 DEBUG: 28580 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-heat --env PUPPET_TAGS=file,file_line,concat,augeas,cron,heat_config,file,concat,file_line --env NAME=heat --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpc14f1P:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-20.1",
> "2018-09-21 12:24:03,284 DEBUG: 28582 -- Notice: hiera(): Cannot load backend module_data: 
cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 3.96 seconds", > "Notice: /Stage[main]/Keystone/Keystone_config[DEFAULT/admin_token]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[DEFAULT/public_bind_host]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[DEFAULT/admin_bind_host]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[DEFAULT/public_port]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[DEFAULT/admin_port]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[token/driver]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[token/expiration]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[ssl/enable]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[catalog/driver]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[catalog/template_file]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[token/provider]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[DEFAULT/notification_format]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[eventlet_server/admin_workers]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[eventlet_server/public_workers]/ensure: created", > "Notice: /Stage[main]/Keystone/File[/etc/keystone/fernet-keys]/ensure: created", > "Notice: /Stage[main]/Keystone/File[/etc/keystone/fernet-keys/0]/ensure: defined content as '{md5}48d05efdc62f04f051fb290b8ad1dcb6'", > "Notice: /Stage[main]/Keystone/File[/etc/keystone/fernet-keys/1]/ensure: defined content as '{md5}9a94e8aff5695f89fcea65824727b686'", > "Notice: /Stage[main]/Keystone/File[/etc/keystone/credential-keys]/ensure: created", > "Notice: /Stage[main]/Keystone/File[/etc/keystone/credential-keys/0]/ensure: defined content as '{md5}12eff39a6fa9f6667a20f55b6dd13edc'", > "Notice: /Stage[main]/Keystone/File[/etc/keystone/credential-keys/1]/ensure: defined content as '{md5}94622abbb4b4589a8f2bfb6242f5abb1'", > "Notice: /Stage[main]/Keystone/Keystone_config[fernet_tokens/key_repository]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[token/revoke_by_id]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[fernet_tokens/max_active_keys]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[credential/key_repository]/ensure: created", > "Notice: /Stage[main]/Keystone::Config/Keystone_config[ec2/driver]/ensure: created", > "Notice: /Stage[main]/Keystone::Cron::Token_flush/Cron[keystone-manage token_flush]/ensure: created", > "Notice: /Stage[main]/Keystone::Logging/Oslo::Log[keystone_config]/Keystone_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Keystone::Logging/Oslo::Log[keystone_config]/Keystone_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Keystone::Policy/Oslo::Policy[keystone_config]/Keystone_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Keystone::Db/Oslo::Db[keystone_config]/Keystone_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Keystone::Db/Oslo::Db[keystone_config]/Keystone_config[database/max_retries]/ensure: created", > "Notice: /Stage[main]/Keystone::Db/Oslo::Db[keystone_config]/Keystone_config[database/db_max_retries]/ensure: created", > "Notice: 
/Stage[main]/Keystone/Oslo::Middleware[keystone_config]/Keystone_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Keystone/Oslo::Messaging::Default[keystone_config]/Keystone_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Keystone/Oslo::Messaging::Notifications[keystone_config]/Keystone_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Keystone/Oslo::Messaging::Notifications[keystone_config]/Keystone_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Keystone/Oslo::Messaging::Notifications[keystone_config]/Keystone_config[oslo_messaging_notifications/topics]/ensure: created", > "Notice: /Stage[main]/Keystone/Oslo::Messaging::Rabbit[keystone_config]/Keystone_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Keystone/Oslo::Messaging::Rabbit[keystone_config]/Keystone_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}3b5982916d617277a2c5f459ed9cdf83'", > "Notice: /Stage[main]/Keystone::Wsgi::Apache/Openstacklib::Wsgi::Apache[keystone_wsgi_main]/File[keystone_wsgi_main]/ensure: defined content as '{md5}072422f0d75777ed1783e6910b3ddc58'", > "Notice: /Stage[main]/Keystone::Wsgi::Apache/Openstacklib::Wsgi::Apache[keystone_wsgi_admin]/File[keystone_wsgi_admin]/ensure: defined content as '{md5}d6dda52b0e14d80a652ecf42686d3962'", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/auth_mellon.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/auth_openidc.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/10-auth_gssapi.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/10-auth_mellon.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/10-auth_openidc.conf]/ensure: removed", > "Notice: /Stage[main]/Keystone::Wsgi::Apache/Openstacklib::Wsgi::Apache[keystone_wsgi_main]/Apache::Vhost[keystone_wsgi_main]/Concat[10-keystone_wsgi_main.conf]/File[/etc/httpd/conf.d/10-keystone_wsgi_main.conf]/ensure: defined content as '{md5}1e705578c8725cd826e753ed4936f20a'", > "Notice: /Stage[main]/Keystone::Wsgi::Apache/Openstacklib::Wsgi::Apache[keystone_wsgi_admin]/Apache::Vhost[keystone_wsgi_admin]/Concat[10-keystone_wsgi_admin.conf]/File[/etc/httpd/conf.d/10-keystone_wsgi_admin.conf]/ensure: defined content as '{md5}29510559f5851d6ec7a64e5887344d99'", > "Notice: Applied catalog in 2.35 seconds", > " Total: 126", > " Success: 126", > " Changed: 126", > " Out of sync: 126", > " Total: 324", > " Skipped: 34", > " Package: 0.04", > " File: 0.47", > " Keystone config: 1.24", > " Last run: 1537532641", > " Config retrieval: 4.50", > " Total: 6.28", > " Config: 1537532634", > "Gathering files modified after 2018-09-21 12:23:48.067493703 +0000", > "2018-09-21 12:24:03,284 DEBUG: 28582 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,keystone_config,keystone_domain_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,keystone_config,keystone_domain_config'", > "+ origin_of_time=/var/lib/config-data/keystone.origin_of_time", > "+ touch /var/lib/config-data/keystone.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console 
--modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,keystone_config,keystone_domain_config /etc/config.pp", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/keystone/manifests/policy.pp\", 34]:[\"/etc/puppet/modules/keystone/manifests/init.pp\", 757]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/keystone/manifests/init.pp\", 760]:[\"/etc/config.pp\", 3]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/keystone/manifests/init.pp\", 1108]:[\"/etc/config.pp\", 3]", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/keystone", > "++ stat -c %y /var/lib/config-data/keystone.origin_of_time", > "+ echo 'Gathering files modified after 2018-09-21 12:23:48.067493703 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/keystone", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/keystone", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/keystone.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/keystone", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/keystone --mtime=1970-01-01", > "2018-09-21 12:24:03,285 INFO: 28582 -- Removing container: docker-puppet-keystone", > "2018-09-21 12:24:03,497 DEBUG: 28582 -- docker-puppet-keystone", > "2018-09-21 12:24:03,497 INFO: 28582 -- Finished processing puppet configs for keystone", > "2018-09-21 12:24:03,497 INFO: 28582 -- Starting configuration of memcached using image 192.168.24.1:8787/rhosp14/openstack-memcached:2018-09-20.1", > "2018-09-21 12:24:03,497 DEBUG: 28582 -- config_volume memcached", > "2018-09-21 12:24:03,497 DEBUG: 28582 -- puppet_tags file,file_line,concat,augeas,cron,file", > "2018-09-21 12:24:03,497 DEBUG: 28582 -- manifest include ::tripleo::profile::base::memcached", > "2018-09-21 12:24:03,498 DEBUG: 28582 -- config_image 192.168.24.1:8787/rhosp14/openstack-memcached:2018-09-20.1", > "2018-09-21 12:24:03,498 DEBUG: 28582 -- volumes []", > "2018-09-21 12:24:03,499 INFO: 28582 -- Removing container: docker-puppet-memcached", > "2018-09-21 12:24:03,561 INFO: 28582 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-memcached:2018-09-20.1", > "2018-09-21 12:24:04,975 DEBUG: 28582 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-memcached ... 
", > "2018-09-20.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-memcached", > "042053dc046e: Pulling fs layer", > "042053dc046e: Verifying Checksum", > "042053dc046e: Download complete", > "042053dc046e: Pull complete", > "Digest: sha256:8f4edd03defd35eacd874e7ea3f36fb53f5b8c344ab87ab6780d710acfa7a83f", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-memcached:2018-09-20.1", > "2018-09-21 12:24:04,978 DEBUG: 28582 -- NET_HOST enabled", > "2018-09-21 12:24:04,978 DEBUG: 28582 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-memcached --env PUPPET_TAGS=file,file_line,concat,augeas,cron,file --env NAME=memcached --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpKrUrty:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-memcached:2018-09-20.1", > "2018-09-21 12:24:13,131 DEBUG: 28582 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 0.71 seconds", > "Notice: /Stage[main]/Memcached/File[/etc/sysconfig/memcached]/content: content changed '{md5}a50ed62e82d31fb4cb2de2226650c545' to '{md5}d4625369070b8b665c7472016c787847'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Memcached/Systemd::Dropin_file[memcached.conf]/File[/etc/systemd/system/memcached.service.d]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Memcached/Systemd::Dropin_file[memcached.conf]/File[/etc/systemd/system/memcached.service.d/memcached.conf]/ensure: defined content as '{md5}c44e90292b030f86c3b82096b68fe9cc'", > "Notice: Applied catalog in 0.04 seconds", > " Total: 3", > " Success: 3", > " Skipped: 10", > " File: 0.02", > " Config retrieval: 0.83", > " Total: 0.86", > " Last run: 1537532652", > " Config: 1537532651", > "Gathering files modified after 2018-09-21 12:24:05.173046025 +0000", > "2018-09-21 12:24:13,131 DEBUG: 28582 -- + mkdir -p /etc/puppet", > "+ origin_of_time=/var/lib/config-data/memcached.origin_of_time", > "+ touch /var/lib/config-data/memcached.origin_of_time", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/memcached", > "++ stat -c %y /var/lib/config-data/memcached.origin_of_time", > "+ echo 'Gathering files modified after 2018-09-21 12:24:05.173046025 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/memcached", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/memcached", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/memcached.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 
'--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/memcached", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/memcached --mtime=1970-01-01", > "2018-09-21 12:24:13,131 INFO: 28582 -- Removing container: docker-puppet-memcached", > "2018-09-21 12:24:13,167 DEBUG: 28582 -- docker-puppet-memcached", > "2018-09-21 12:24:13,168 INFO: 28582 -- Finished processing puppet configs for memcached", > "2018-09-21 12:24:13,168 INFO: 28582 -- Starting configuration of panko using image 192.168.24.1:8787/rhosp14/openstack-panko-api:2018-09-20.1", > "2018-09-21 12:24:13,168 DEBUG: 28582 -- config_volume panko", > "2018-09-21 12:24:13,168 DEBUG: 28582 -- puppet_tags file,file_line,concat,augeas,cron,panko_api_paste_ini,panko_config", > "2018-09-21 12:24:13,168 DEBUG: 28582 -- manifest include tripleo::profile::base::panko::api", > "2018-09-21 12:24:13,168 DEBUG: 28582 -- config_image 192.168.24.1:8787/rhosp14/openstack-panko-api:2018-09-20.1", > "2018-09-21 12:24:13,168 DEBUG: 28582 -- volumes []", > "2018-09-21 12:24:13,169 INFO: 28582 -- Removing container: docker-puppet-panko", > "2018-09-21 12:24:13,226 INFO: 28582 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-panko-api:2018-09-20.1", > "2018-09-21 12:24:14,919 DEBUG: 28581 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 5.38 seconds", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}0d22f254b2a5718398a333e6c796ad7c'", > "Notice: /Stage[main]/Nova::Wsgi::Apache_api/Openstacklib::Wsgi::Apache[nova_api_wsgi]/File[/var/www/cgi-bin/nova]/ensure: created", > "Notice: /Stage[main]/Nova::Wsgi::Apache_api/Openstacklib::Wsgi::Apache[nova_api_wsgi]/File[nova_api_wsgi]/ensure: defined content as '{md5}8bcfb466d72544dd31a4f339243ed669'", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/instance_name_template]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[wsgi/api_paste_config]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/enabled_apis]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/osapi_compute_listen]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/metadata_listen]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/metadata_listen_port]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/osapi_compute_listen_port]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/osapi_volume_listen]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/osapi_compute_workers]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/metadata_workers]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[api/use_forwarded_for]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[api/fping_path]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[vendordata_dynamic_auth/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[vendordata_dynamic_auth/user_domain_name]/ensure: created", > "Notice: 
/Stage[main]/Nova::Api/Nova_config[neutron/service_metadata_proxy]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[neutron/metadata_proxy_shared_secret]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/allow_resize_to_same_host]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/dhcp_domain]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/firewall_driver]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_is_fatal]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_timeout]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/default_floating_pool]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/url]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/timeout]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/region_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/username]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/password]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_url]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/ovs_bridge]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/extension_sync_interval]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_type]/ensure: created", > "Notice: /Stage[main]/Nova::Conductor/Nova_config[conductor/workers]/ensure: created", > "Notice: /Stage[main]/Nova::Scheduler/Nova_config[scheduler/driver]/ensure: created", > "Notice: /Stage[main]/Nova::Scheduler/Nova_config[scheduler/discover_hosts_in_cells_interval]/ensure: created", > "Notice: /Stage[main]/Nova::Scheduler::Filter/Nova_config[scheduler/max_attempts]/ensure: created", > "Notice: /Stage[main]/Nova::Scheduler::Filter/Nova_config[filter_scheduler/host_subset_size]/ensure: created", > "Notice: /Stage[main]/Nova::Scheduler::Filter/Nova_config[filter_scheduler/max_io_ops_per_host]/ensure: created", > "Notice: /Stage[main]/Nova::Scheduler::Filter/Nova_config[filter_scheduler/max_instances_per_host]/ensure: created", > "Notice: /Stage[main]/Nova::Scheduler::Filter/Nova_config[filter_scheduler/weight_classes]/ensure: created", > "Notice: /Stage[main]/Nova::Vncproxy/Nova_config[vnc/novncproxy_host]/ensure: created", > "Notice: /Stage[main]/Nova::Vncproxy/Nova_config[vnc/novncproxy_port]/ensure: created", > "Notice: /Stage[main]/Nova::Vncproxy/Nova_config[vnc/auth_schemes]/ensure: created", > "Notice: /Stage[main]/Nova::Policy/Oslo::Policy[nova_config]/Nova_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Oslo::Middleware[nova_config]/Nova_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Nova::Cron::Archive_deleted_rows/Cron[nova-manage db archive_deleted_rows]/ensure: created", > "Notice: /Stage[main]/Nova::Cron::Purge_shadow_tables/Cron[nova-manage 
db purge]/ensure: created", > "Notice: /Stage[main]/Nova::Wsgi::Apache_api/Openstacklib::Wsgi::Apache[nova_api_wsgi]/Apache::Vhost[nova_api_wsgi]/Concat[10-nova_api_wsgi.conf]/File[/etc/httpd/conf.d/10-nova_api_wsgi.conf]/ensure: defined content as '{md5}c72c26e9c4c9d9b365e904750d814a48'", > "Notice: Applied catalog in 11.19 seconds", > " Total: 181", > " Success: 181", > " Changed: 181", > " Out of sync: 181", > " Total: 506", > " Skipped: 75", > " Cron: 0.03", > " Package: 0.09", > " Last run: 1537532651", > " Total: 16.54", > " Config retrieval: 6.16", > " Nova config: 9.93", > "Gathering files modified after 2018-09-21 12:23:48.539509431 +0000", > "2018-09-21 12:24:14,919 DEBUG: 28581 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,nova_config,nova_config,nova_config,nova_config,nova_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,nova_config,nova_config,nova_config,nova_config,nova_config'", > "+ origin_of_time=/var/lib/config-data/nova.origin_of_time", > "+ touch /var/lib/config-data/nova.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,nova_config,nova_config,nova_config,nova_config,nova_config /etc/config.pp", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Ipv6 instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/tripleo/manifests/profile/base/nova.pp\", 105]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/api.pp\", 97]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/init.pp\", 561]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/api.pp\", 97]", > "Warning: Scope(Class[Nova::Api]): Running nova metadata api via evenlet is deprecated and will be removed in Stein release.", > "Warning: Unknown variable: '::nova::api::default_floating_pool'. at /etc/puppet/modules/nova/manifests/network/neutron.pp:112:38", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Array instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. 
at [\"/etc/puppet/modules/nova/manifests/scheduler/filter.pp\", 150]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/scheduler.pp\", 32]", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/nova", > "++ stat -c %y /var/lib/config-data/nova.origin_of_time", > "+ echo 'Gathering files modified after 2018-09-21 12:23:48.539509431 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/nova", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/nova", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/nova.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/nova", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/nova --mtime=1970-01-01", > "2018-09-21 12:24:14,919 INFO: 28581 -- Removing container: docker-puppet-nova", > "2018-09-21 12:24:14,975 DEBUG: 28581 -- docker-puppet-nova", > "2018-09-21 12:24:14,975 INFO: 28581 -- Finished processing puppet configs for nova", > "2018-09-21 12:24:14,975 INFO: 28581 -- Starting configuration of iscsid using image 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-20.1", > "2018-09-21 12:24:14,975 DEBUG: 28581 -- config_volume iscsid", > "2018-09-21 12:24:14,975 DEBUG: 28581 -- puppet_tags file,file_line,concat,augeas,cron,iscsid_config", > "2018-09-21 12:24:14,975 DEBUG: 28581 -- manifest include ::tripleo::profile::base::iscsid", > "2018-09-21 12:24:14,975 DEBUG: 28581 -- config_image 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-20.1", > "2018-09-21 12:24:14,976 DEBUG: 28581 -- volumes [u'/etc/iscsi:/etc/iscsi']", > "2018-09-21 12:24:14,979 INFO: 28581 -- Removing container: docker-puppet-iscsid", > "2018-09-21 12:24:15,055 INFO: 28581 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-20.1", > "2018-09-21 12:24:15,502 DEBUG: 28580 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 2.01 seconds", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/auth_encryption_key]/ensure: created", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/heat_metadata_server_url]/ensure: created", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/heat_waitcondition_server_url]/ensure: created", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/max_resources_per_stack]/ensure: created", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/num_engine_workers]/ensure: created", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/convergence_engine]/ensure: created", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/reauthentication_auth_method]/ensure: created", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/max_nested_stack_depth]/ensure: created", > "Notice: Applied catalog in 1.89 seconds", > " Total: 48", > " Success: 48", > " Skipped: 21", > " Total: 223", > " Out of sync: 48", > " Changed: 48", > " Cron: 0.07", > " Heat config: 1.55", > " Last run: 1537532653", > " Config retrieval: 2.26", > " Total: 3.94", > " Config: 1537532649", > 
"Gathering files modified after 2018-09-21 12:24:03.280986726 +0000", > "2018-09-21 12:24:15,502 DEBUG: 28580 -- + mkdir -p /etc/puppet", > "+ origin_of_time=/var/lib/config-data/heat.origin_of_time", > "+ touch /var/lib/config-data/heat.origin_of_time", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/heat", > "++ stat -c %y /var/lib/config-data/heat.origin_of_time", > "+ echo 'Gathering files modified after 2018-09-21 12:24:03.280986726 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/heat", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/heat", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/heat.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/heat", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/heat --mtime=1970-01-01", > "2018-09-21 12:24:15,502 INFO: 28580 -- Removing container: docker-puppet-heat", > "2018-09-21 12:24:15,545 DEBUG: 28580 -- docker-puppet-heat", > "2018-09-21 12:24:15,545 INFO: 28580 -- Finished processing puppet configs for heat", > "2018-09-21 12:24:15,545 INFO: 28580 -- Starting configuration of cinder using image 192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-20.1", > "2018-09-21 12:24:15,545 DEBUG: 28580 -- config_volume cinder", > "2018-09-21 12:24:15,545 DEBUG: 28580 -- puppet_tags file,file_line,concat,augeas,cron,cinder_config,cinder_type,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line", > "2018-09-21 12:24:15,546 DEBUG: 28580 -- manifest include ::tripleo::profile::base::cinder::api", > "include ::tripleo::profile::base::cinder::backup::ceph", > "include ::tripleo::profile::base::cinder::scheduler", > "include ::tripleo::profile::base::lvm", > "2018-09-21 12:24:15,546 DEBUG: 28580 -- config_image 192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-20.1", > "2018-09-21 12:24:15,546 DEBUG: 28580 -- volumes []", > "2018-09-21 12:24:15,547 INFO: 28580 -- Removing container: docker-puppet-cinder", > "2018-09-21 12:24:15,586 DEBUG: 28582 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-panko-api ... 
", > "2018-09-20.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-panko-api", > "098f714456f9: Pulling fs layer", > "1a49a7af296b: Pulling fs layer", > "1a49a7af296b: Download complete", > "098f714456f9: Download complete", > "098f714456f9: Pull complete", > "1a49a7af296b: Pull complete", > "Digest: sha256:5024be860afb115a89af5de65fe39d09c51b81e029a94926ae791ceacf0fa0aa", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-panko-api:2018-09-20.1", > "2018-09-21 12:24:15,588 DEBUG: 28582 -- NET_HOST enabled", > "2018-09-21 12:24:15,589 DEBUG: 28582 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-panko --env PUPPET_TAGS=file,file_line,concat,augeas,cron,panko_api_paste_ini,panko_config --env NAME=panko --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpDKAqlu:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-panko-api:2018-09-20.1", > "2018-09-21 12:24:15,622 INFO: 28580 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-20.1", > "2018-09-21 12:24:15,719 DEBUG: 28581 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-iscsid ... 
", > "2018-09-20.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-iscsid", > "a1f7d1c27dcc: Pulling fs layer", > "a1f7d1c27dcc: Verifying Checksum", > "a1f7d1c27dcc: Download complete", > "a1f7d1c27dcc: Pull complete", > "Digest: sha256:b011eb80fb9a37540d4134254671d70667eedc22278e2d3d9d0e5bd1c8c9316f", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-20.1", > "2018-09-21 12:24:15,722 DEBUG: 28581 -- NET_HOST enabled", > "2018-09-21 12:24:15,722 DEBUG: 28581 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-iscsid --env PUPPET_TAGS=file,file_line,concat,augeas,cron,iscsid_config --env NAME=iscsid --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpN6S7Of:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --volume /etc/iscsi:/etc/iscsi --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-20.1", > "2018-09-21 12:24:24,236 DEBUG: 28580 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-cinder-api ... ", > "2018-09-20.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-cinder-api", > "020a12d8eacf: Pulling fs layer", > "af35c09c9977: Pulling fs layer", > "af35c09c9977: Verifying Checksum", > "af35c09c9977: Download complete", > "020a12d8eacf: Verifying Checksum", > "020a12d8eacf: Download complete", > "020a12d8eacf: Pull complete", > "af35c09c9977: Pull complete", > "Digest: sha256:9051541a65878f47be917eaeab76f860c1103c98d4261fbf2557787df8892244", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-20.1", > "2018-09-21 12:24:24,239 DEBUG: 28580 -- NET_HOST enabled", > "2018-09-21 12:24:24,239 DEBUG: 28580 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-cinder --env PUPPET_TAGS=file,file_line,concat,augeas,cron,cinder_config,cinder_type,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line --env NAME=cinder --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpNvwnhj:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint 
/var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-20.1", > "2018-09-21 12:24:24,397 DEBUG: 28581 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Exec[reset-iscsi-initiator-name]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/File[/etc/iscsi/.initiator_reset]/ensure: created", > " Total: 2", > " Success: 2", > " Total: 10", > " Out of sync: 2", > " Changed: 2", > " Skipped: 8", > " Exec: 0.02", > " Config retrieval: 0.55", > " Total: 0.57", > " Last run: 1537532663", > " Config: 1537532662", > "Gathering files modified after 2018-09-21 12:24:16.358387758 +0000", > "2018-09-21 12:24:24,398 DEBUG: 28581 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,iscsid_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,iscsid_config'", > "+ origin_of_time=/var/lib/config-data/iscsid.origin_of_time", > "+ touch /var/lib/config-data/iscsid.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,iscsid_config /etc/config.pp", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/iscsid", > "++ stat -c %y /var/lib/config-data/iscsid.origin_of_time", > "+ echo 'Gathering files modified after 2018-09-21 12:24:16.358387758 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/iscsid", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/iscsid", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/iscsid.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/iscsid", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/iscsid --mtime=1970-01-01", > "2018-09-21 12:24:24,398 INFO: 28581 -- Removing container: docker-puppet-iscsid", > "2018-09-21 12:24:24,432 DEBUG: 28581 -- docker-puppet-iscsid", > "2018-09-21 12:24:24,433 INFO: 28581 -- Finished processing puppet configs for iscsid", > "2018-09-21 12:24:24,433 INFO: 28581 -- Starting configuration of glance_api using image 192.168.24.1:8787/rhosp14/openstack-glance-api:2018-09-20.1", > "2018-09-21 12:24:24,433 DEBUG: 28581 -- config_volume glance_api", > "2018-09-21 12:24:24,433 DEBUG: 28581 -- puppet_tags file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config", > "2018-09-21 12:24:24,433 DEBUG: 28581 -- manifest include ::tripleo::profile::base::glance::api", > "2018-09-21 12:24:24,433 DEBUG: 28581 -- config_image 192.168.24.1:8787/rhosp14/openstack-glance-api:2018-09-20.1", > "2018-09-21 12:24:24,433 DEBUG: 28581 -- volumes []", > "2018-09-21 12:24:24,434 INFO: 28581 -- Removing container: docker-puppet-glance_api", > "2018-09-21 12:24:24,510 INFO: 28581 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-glance-api:2018-09-20.1", > "2018-09-21 12:24:29,794 DEBUG: 
28582 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 4.07 seconds", > "Notice: /Stage[main]/Panko::Api/Panko_config[api/host]/ensure: created", > "Notice: /Stage[main]/Panko::Api/Panko_config[api/port]/ensure: created", > "Notice: /Stage[main]/Panko::Api/Panko_config[api/workers]/ensure: created", > "Notice: /Stage[main]/Panko::Api/Panko_config[api/max_limit]/ensure: created", > "Notice: /Stage[main]/Panko::Api/Panko_config[database/event_time_to_live]/ensure: created", > "Notice: /Stage[main]/Panko::Api/Panko_api_paste_ini[pipeline:main/pipeline]/ensure: created", > "Notice: /Stage[main]/Panko::Expirer/Cron[panko-expirer]/ensure: created", > "Notice: /Stage[main]/Panko::Logging/Oslo::Log[panko_config]/Panko_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Panko::Db/Oslo::Db[panko_config]/Panko_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Panko::Policy/Oslo::Policy[panko_config]/Panko_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/username]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Panko::Api/Oslo::Middleware[panko_config]/Panko_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}68d59f63cba1d56260776a931e6191fa'", > "Notice: /Stage[main]/Panko::Wsgi::Apache/Openstacklib::Wsgi::Apache[panko_wsgi]/File[/var/www/cgi-bin/panko]/ensure: created", > "Notice: /Stage[main]/Panko::Wsgi::Apache/Openstacklib::Wsgi::Apache[panko_wsgi]/File[panko_wsgi]/ensure: defined content as '{md5}e6f446b6267321fd2251a3e83021181a'", > "Notice: /Stage[main]/Panko::Wsgi::Apache/Openstacklib::Wsgi::Apache[panko_wsgi]/Apache::Vhost[panko_wsgi]/Concat[10-panko_wsgi.conf]/File[/etc/httpd/conf.d/10-panko_wsgi.conf]/ensure: defined content as '{md5}734740dfe020ea0bb1997ca0c192378b'", > "Notice: Applied catalog in 1.12 seconds", > " Total: 101", > " Success: 101", > " Changed: 101", > " Out of sync: 101", 
> " Total: 256", > " Panko api paste ini: 0.00", > " Panko config: 0.14", > " File: 0.35", > " Last run: 1537532667", > " Config retrieval: 4.54", > " Total: 5.12", > "Gathering files modified after 2018-09-21 12:24:15.801371094 +0000", > "2018-09-21 12:24:29,794 DEBUG: 28582 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,panko_api_paste_ini,panko_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,panko_api_paste_ini,panko_config'", > "+ origin_of_time=/var/lib/config-data/panko.origin_of_time", > "+ touch /var/lib/config-data/panko.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,panko_api_paste_ini,panko_config /etc/config.pp", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/panko/manifests/config.pp\", 33]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/panko.pp\", 32]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/panko/manifests/db.pp\", 59]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/panko/api.pp\", 83]", > "Warning: Scope(Class[Panko::Api]): This Class is deprecated and will be removed in future releases.", > "Warning: Scope(Class[Panko::Keystone::Authtoken]): The auth_uri parameter is deprecated. Please use www_authenticate_uri instead.", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/panko", > "++ stat -c %y /var/lib/config-data/panko.origin_of_time", > "+ echo 'Gathering files modified after 2018-09-21 12:24:15.801371094 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/panko", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/panko", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/panko.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/panko", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/panko --mtime=1970-01-01", > "2018-09-21 12:24:29,794 INFO: 28582 -- Removing container: docker-puppet-panko", > "2018-09-21 12:24:29,844 DEBUG: 28582 -- docker-puppet-panko", > "2018-09-21 12:24:29,844 INFO: 28582 -- Finished processing puppet configs for panko", > "2018-09-21 12:24:29,845 INFO: 28582 -- Starting configuration of crond using image 192.168.24.1:8787/rhosp14/openstack-cron:2018-09-20.1", > "2018-09-21 12:24:29,845 DEBUG: 28582 -- config_volume crond", > "2018-09-21 12:24:29,845 DEBUG: 28582 -- puppet_tags file,file_line,concat,augeas,cron", > "2018-09-21 12:24:29,845 DEBUG: 28582 -- manifest include ::tripleo::profile::base::logging::logrotate", > "2018-09-21 12:24:29,845 DEBUG: 28582 -- config_image 192.168.24.1:8787/rhosp14/openstack-cron:2018-09-20.1", > "2018-09-21 12:24:29,845 DEBUG: 28582 -- volumes []", > "2018-09-21 12:24:29,847 INFO: 28582 -- Removing container: docker-puppet-crond", > "2018-09-21 12:24:29,911 INFO: 28582 -- Pulling image: 
192.168.24.1:8787/rhosp14/openstack-cron:2018-09-20.1", > "2018-09-21 12:24:30,112 DEBUG: 28581 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-glance-api ... ", > "2018-09-20.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-glance-api", > "da1548463c1c: Pulling fs layer", > "5b15c9b8dc07: Pulling fs layer", > "5b15c9b8dc07: Download complete", > "da1548463c1c: Verifying Checksum", > "da1548463c1c: Download complete", > "da1548463c1c: Pull complete", > "5b15c9b8dc07: Pull complete", > "Digest: sha256:e205262a40d4390232863fd1d3f5aada9904906879b034157532a45a7c44c70e", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-glance-api:2018-09-20.1", > "2018-09-21 12:24:30,115 DEBUG: 28581 -- NET_HOST enabled", > "2018-09-21 12:24:30,116 DEBUG: 28581 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-glance_api --env PUPPET_TAGS=file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config --env NAME=glance_api --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpUAHAq5:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-glance-api:2018-09-20.1", > "2018-09-21 12:24:30,701 DEBUG: 28582 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-cron ... 
", > "2018-09-20.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-cron", > "6e472b601f7f: Pulling fs layer", > "6e472b601f7f: Verifying Checksum", > "6e472b601f7f: Download complete", > "6e472b601f7f: Pull complete", > "Digest: sha256:a274bd4ffbb72a4ab1b5f789df588f93c632040cf5a8c84ccc51059d05f38637", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-cron:2018-09-20.1", > "2018-09-21 12:24:30,704 DEBUG: 28582 -- NET_HOST enabled", > "2018-09-21 12:24:30,704 DEBUG: 28582 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-crond --env PUPPET_TAGS=file,file_line,concat,augeas,cron --env NAME=crond --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpnSX7uK:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-cron:2018-09-20.1", > "2018-09-21 12:24:38,864 DEBUG: 28582 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 0.48 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/File[/etc/logrotate-crond.conf]/ensure: defined content as '{md5}f121ac457cb6e71964450c8cbc0a2431'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/Cron[logrotate-crond]/ensure: created", > " Skipped: 7", > " Total: 9", > " Config retrieval: 0.59", > " Total: 0.60", > " Last run: 1537532677", > " Config: 1537532677", > "Gathering files modified after 2018-09-21 12:24:30.899810320 +0000", > "2018-09-21 12:24:38,864 DEBUG: 28582 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron'", > "+ origin_of_time=/var/lib/config-data/crond.origin_of_time", > "+ touch /var/lib/config-data/crond.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron /etc/config.pp", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/crond", > "++ stat -c %y /var/lib/config-data/crond.origin_of_time", > "+ echo 'Gathering files modified after 2018-09-21 12:24:30.899810320 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/crond", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/crond", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/crond.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' 
'--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/crond", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/crond --mtime=1970-01-01", > "2018-09-21 12:24:38,864 INFO: 28582 -- Removing container: docker-puppet-crond", > "2018-09-21 12:24:38,897 DEBUG: 28582 -- docker-puppet-crond", > "2018-09-21 12:24:38,897 INFO: 28582 -- Finished processing puppet configs for crond", > "2018-09-21 12:24:38,898 INFO: 28582 -- Starting configuration of haproxy using image 192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-20.1", > "2018-09-21 12:24:38,898 DEBUG: 28582 -- config_volume haproxy", > "2018-09-21 12:24:38,898 DEBUG: 28582 -- puppet_tags file,file_line,concat,augeas,cron,haproxy_config", > "2018-09-21 12:24:38,898 DEBUG: 28582 -- manifest exec {'wait-for-settle': command => '/bin/true' }", > "2018-09-21 12:24:38,898 DEBUG: 28582 -- config_image 192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-20.1", > "2018-09-21 12:24:38,898 DEBUG: 28582 -- volumes [u'/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro', u'/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro', u'/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro', u'/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro']", > "2018-09-21 12:24:38,899 INFO: 28582 -- Removing container: docker-puppet-haproxy", > "2018-09-21 12:24:38,964 INFO: 28582 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-20.1", > "2018-09-21 12:24:43,091 DEBUG: 28582 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-haproxy ... ", > "2018-09-20.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-haproxy", > "fd8bd7241498: Pulling fs layer", > "fd8bd7241498: Verifying Checksum", > "fd8bd7241498: Download complete", > "fd8bd7241498: Pull complete", > "Digest: sha256:fd5585aad48c3f40503f65bd6283d91380286f9fe57f26b699033887261cf5c6", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-20.1", > "2018-09-21 12:24:43,094 DEBUG: 28582 -- NET_HOST enabled", > "2018-09-21 12:24:43,095 DEBUG: 28582 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-haproxy --env PUPPET_TAGS=file,file_line,concat,augeas,cron,haproxy_config --env NAME=haproxy --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpGJzFMu:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --volume /etc/ipa/ca.crt:/etc/ipa/ca.crt:ro --volume /etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro --volume /etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro --volume /etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume 
/etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-20.1", > "2018-09-21 12:24:44,494 DEBUG: 28580 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 4.45 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Base::Lvm/Augeas[udev options in lvm.conf]/returns: executed successfully", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}1aaf743a6205f0d7260b7ed58056dff2'", > "Notice: /Stage[main]/Cinder/Cinder_config[DEFAULT/api_paste_config]/ensure: created", > "Notice: /Stage[main]/Cinder/Cinder_config[DEFAULT/storage_availability_zone]/ensure: created", > "Notice: /Stage[main]/Cinder/Cinder_config[DEFAULT/default_availability_zone]/ensure: created", > "Notice: /Stage[main]/Cinder/Cinder_config[DEFAULT/enable_v3_api]/ensure: created", > "Notice: /Stage[main]/Cinder::Glance/Cinder_config[DEFAULT/glance_api_servers]/ensure: created", > "Notice: /Stage[main]/Cinder::Glance/Cinder_config[DEFAULT/glance_api_version]/ensure: created", > "Notice: /Stage[main]/Cinder::Cron::Db_purge/Cron[cinder-manage db purge]/ensure: created", > "Notice: /Stage[main]/Cinder::Api/Cinder_config[DEFAULT/osapi_volume_listen]/ensure: created", > "Notice: /Stage[main]/Cinder::Api/Cinder_config[DEFAULT/osapi_volume_workers]/ensure: created", > "Notice: /Stage[main]/Cinder::Api/Cinder_config[DEFAULT/default_volume_type]/ensure: created", > "Notice: /Stage[main]/Cinder::Api/Cinder_config[DEFAULT/auth_strategy]/ensure: created", > "Notice: /Stage[main]/Cinder::Api/Cinder_config[DEFAULT/nova_catalog_info]/ensure: created", > "Notice: /Stage[main]/Cinder::Api/Cinder_config[key_manager/backend]/ensure: created", > "Notice: /Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_driver]/ensure: created", > "Notice: /Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_ceph_conf]/ensure: created", > "Notice: /Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_ceph_user]/ensure: created", > "Notice: /Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_ceph_chunk_size]/ensure: created", > "Notice: /Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_ceph_pool]/ensure: created", > "Notice: /Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_ceph_stripe_unit]/ensure: created", > "Notice: /Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_ceph_stripe_count]/ensure: created", > "Notice: /Stage[main]/Cinder::Scheduler/Cinder_config[DEFAULT/scheduler_driver]/ensure: created", > "Notice: /Stage[main]/Cinder::Backends/Cinder_config[DEFAULT/enabled_backends]/ensure: created", > "Notice: /Stage[main]/Cinder::Backends/Cinder_config[tripleo_ceph/backend_host]/ensure: created", > "Notice: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/max_retries]/ensure: created", > "Notice: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/db_max_retries]/ensure: created", > "Notice: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/log_dir]/ensure: created", > "Notice: 
/Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Cinder/Oslo::Messaging::Default[cinder_config]/Cinder_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Cinder/Oslo::Messaging::Default[cinder_config]/Cinder_config[DEFAULT/control_exchange]/ensure: created", > "Notice: /Stage[main]/Cinder/Oslo::Concurrency[cinder_config]/Cinder_config[oslo_concurrency/lock_path]/ensure: created", > "Notice: /Stage[main]/Cinder::Ceilometer/Oslo::Messaging::Notifications[cinder_config]/Cinder_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Cinder::Ceilometer/Oslo::Messaging::Notifications[cinder_config]/Cinder_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Cinder::Policy/Oslo::Policy[cinder_config]/Cinder_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Cinder::Api/Oslo::Middleware[cinder_config]/Cinder_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/username]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Cinder::Wsgi::Apache/Openstacklib::Wsgi::Apache[cinder_wsgi]/File[cinder_wsgi]/ensure: defined content as '{md5}870efbe437d63cd260287cd36472d7b1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/volume_backend_name]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/volume_driver]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_ceph_conf]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_user]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_pool]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_secret_uuid]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/File[/etc/sysconfig/openstack-cinder-volume]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/File_line[set initscript env tripleo_ceph]/ensure: created", > "Notice: /Stage[main]/Cinder::Wsgi::Apache/Openstacklib::Wsgi::Apache[cinder_wsgi]/Apache::Vhost[cinder_wsgi]/Concat[10-cinder_wsgi.conf]/File[/etc/httpd/conf.d/10-cinder_wsgi.conf]/ensure: defined content as '{md5}7d44eb99f91f9e03e734cca444e97c9b'", > "Notice: Applied catalog in 5.47 seconds", > " Total: 135", > " Success: 135", > " Changed: 135", > " Out of sync: 135", > " Skipped: 37", > " File line: 0.00", > " File: 0.30", > " Augeas: 0.70", > " Last run: 1537532681", > " Cinder config: 3.63", > " Config retrieval: 5.09", > " Total: 9.79", > " Config: 1537532671", > "Gathering files modified after 2018-09-21 12:24:24.468626382 +0000", > "2018-09-21 12:24:44,495 DEBUG: 28580 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,cinder_config,cinder_type,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,cinder_config,cinder_type,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line'", > "+ origin_of_time=/var/lib/config-data/cinder.origin_of_time", > "+ touch /var/lib/config-data/cinder.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,cinder_config,cinder_type,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line /etc/config.pp", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/cinder/manifests/db.pp\", 69]:[\"/etc/puppet/modules/cinder/manifests/init.pp\", 320]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/cinder/manifests/config.pp\", 38]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/cinder.pp\", 127]", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/cinder/manifests/api.pp\", 203]:[\"/etc/config.pp\", 2]", > "Warning: Scope(Class[Cinder::Api]): The nova_catalog_admin_info parameter has been deprecated and will be removed in the future release.", > "Warning: Scope(Class[Cinder::Keystone::Authtoken]): The auth_uri parameter is deprecated. Please use www_authenticate_uri instead.", > "Warning: Unknown variable: 'ensure'. at /etc/puppet/modules/cinder/manifests/backup.pp:83:18", > "Warning: Unknown variable: 'ensure'. 
at /etc/puppet/modules/cinder/manifests/volume.pp:64:18", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/cinder", > "++ stat -c %y /var/lib/config-data/cinder.origin_of_time", > "+ echo 'Gathering files modified after 2018-09-21 12:24:24.468626382 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/cinder", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/cinder", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/cinder.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/cinder", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/cinder --mtime=1970-01-01", > "2018-09-21 12:24:44,495 INFO: 28580 -- Removing container: docker-puppet-cinder", > "2018-09-21 12:24:44,544 DEBUG: 28580 -- docker-puppet-cinder", > "2018-09-21 12:24:44,544 INFO: 28580 -- Finished processing puppet configs for cinder", > "2018-09-21 12:24:44,544 INFO: 28580 -- Starting configuration of swift using image 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-20.1", > "2018-09-21 12:24:44,544 DEBUG: 28580 -- config_volume swift", > "2018-09-21 12:24:44,544 DEBUG: 28580 -- puppet_tags file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server", > "2018-09-21 12:24:44,544 DEBUG: 28580 -- manifest include ::tripleo::profile::base::swift::proxy", > "include ::tripleo::profile::base::swift::storage", > "2018-09-21 12:24:44,544 DEBUG: 28580 -- config_image 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-20.1", > "2018-09-21 12:24:44,545 DEBUG: 28580 -- volumes []", > "2018-09-21 12:24:44,546 INFO: 28580 -- Removing container: docker-puppet-swift", > "2018-09-21 12:24:44,556 DEBUG: 28581 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 2.44 seconds", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/bind_host]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/bind_port]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/workers]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/show_image_direct_url]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/show_multiple_locations]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/image_cache_dir]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/enabled_import_methods]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/node_staging_uri]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/image_member_quota]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/enable_v1_api]/ensure: created", > "Notice: 
/Stage[main]/Glance::Api/Glance_api_config[DEFAULT/enable_v2_api]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[glance_store/os_region_name]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[glance_store/stores]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_cache_config[glance_store/os_region_name]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/registry_host]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_cache_config[DEFAULT/registry_host]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[paste_deploy/flavor]/ensure: created", > "Notice: /Stage[main]/Glance::Backend::Rbd/Glance_api_config[glance_store/rbd_store_ceph_conf]/ensure: created", > "Notice: /Stage[main]/Glance::Backend::Rbd/Glance_api_config[glance_store/rbd_store_user]/ensure: created", > "Notice: /Stage[main]/Glance::Backend::Rbd/Glance_api_config[glance_store/rbd_store_pool]/ensure: created", > "Notice: /Stage[main]/Glance::Backend::Rbd/Glance_api_config[glance_store/default_store]/ensure: created", > "Notice: /Stage[main]/Glance::Policy/Oslo::Policy[glance_api_config]/Glance_api_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Db/Oslo::Db[glance_api_config]/Glance_api_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Logging/Oslo::Log[glance_api_config]/Glance_api_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Logging/Oslo::Log[glance_api_config]/Glance_api_config[DEFAULT/log_file]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Logging/Oslo::Log[glance_api_config]/Glance_api_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Glance::Cache::Logging/Oslo::Log[glance_cache_config]/Glance_cache_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Glance::Cache::Logging/Oslo::Log[glance_cache_config]/Glance_cache_config[DEFAULT/log_file]/ensure: created", > "Notice: /Stage[main]/Glance::Cache::Logging/Oslo::Log[glance_cache_config]/Glance_cache_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/username]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/project_name]/ensure: created", > "Notice: 
/Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Oslo::Middleware[glance_api_config]/Glance_api_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Glance::Notify::Rabbitmq/Oslo::Messaging::Rabbit[glance_api_config]/Glance_api_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Glance::Notify::Rabbitmq/Oslo::Messaging::Default[glance_api_config]/Glance_api_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Glance::Notify::Rabbitmq/Oslo::Messaging::Notifications[glance_api_config]/Glance_api_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Glance::Notify::Rabbitmq/Oslo::Messaging::Notifications[glance_api_config]/Glance_api_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: Applied catalog in 2.94 seconds", > " Total: 44", > " Success: 44", > " Total: 255", > " Out of sync: 44", > " Changed: 44", > " Skipped: 60", > " Glance cache config: 0.15", > " Last run: 1537532682", > " Glance api config: 2.30", > " Config retrieval: 2.74", > " Total: 5.27", > " Config: 1537532676", > "Gathering files modified after 2018-09-21 12:24:30.312793716 +0000", > "2018-09-21 12:24:44,556 DEBUG: 28581 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config'", > "+ origin_of_time=/var/lib/config-data/glance_api.origin_of_time", > "+ touch /var/lib/config-data/glance_api.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config /etc/config.pp", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/glance/manifests/config.pp\", 48]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/glance/api.pp\", 198]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/glance/manifests/api/db.pp\", 69]:[\"/etc/puppet/modules/glance/manifests/api.pp\", 371]", > "Warning: Unknown variable: 'default_store_real'. at /etc/puppet/modules/glance/manifests/api.pp:438:9", > "Warning: Scope(Class[Glance::Api]): default_store not provided, it will be automatically set to http", > "Warning: Scope(Class[Glance::Api::Authtoken]): The auth_uri parameter is deprecated. 
Please use www_authenticate_uri instead.", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/glance_api", > "++ stat -c %y /var/lib/config-data/glance_api.origin_of_time", > "+ echo 'Gathering files modified after 2018-09-21 12:24:30.312793716 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/glance_api", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/glance_api", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/glance_api.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/glance_api", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/glance_api --mtime=1970-01-01", > "2018-09-21 12:24:44,556 INFO: 28581 -- Removing container: docker-puppet-glance_api", > "2018-09-21 12:24:44,591 DEBUG: 28581 -- docker-puppet-glance_api", > "2018-09-21 12:24:44,592 INFO: 28581 -- Finished processing puppet configs for glance_api", > "2018-09-21 12:24:44,592 INFO: 28581 -- Starting configuration of rabbitmq using image 192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-20.1", > "2018-09-21 12:24:44,592 DEBUG: 28581 -- config_volume rabbitmq", > "2018-09-21 12:24:44,592 DEBUG: 28581 -- puppet_tags file,file_line,concat,augeas,cron,file", > "2018-09-21 12:24:44,592 DEBUG: 28581 -- manifest ['Rabbitmq_policy', 'Rabbitmq_user'].each |String $val| { noop_resource($val) }", > "2018-09-21 12:24:44,592 DEBUG: 28581 -- config_image 192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-20.1", > "2018-09-21 12:24:44,592 DEBUG: 28581 -- volumes []", > "2018-09-21 12:24:44,594 INFO: 28581 -- Removing container: docker-puppet-rabbitmq", > "2018-09-21 12:24:44,598 INFO: 28580 -- Image already exists: 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-20.1", > "2018-09-21 12:24:44,601 DEBUG: 28580 -- NET_HOST enabled", > "2018-09-21 12:24:44,602 DEBUG: 28580 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-swift --env PUPPET_TAGS=file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server --env NAME=swift --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpFqwCUE:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 
192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-20.1", > "2018-09-21 12:24:44,662 INFO: 28581 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-20.1", > "2018-09-21 12:24:49,220 DEBUG: 28581 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-rabbitmq ... ", > "2018-09-20.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-rabbitmq", > "892d16252a40: Pulling fs layer", > "892d16252a40: Verifying Checksum", > "892d16252a40: Download complete", > "892d16252a40: Pull complete", > "Digest: sha256:fadbe923f17aaf0b51f052fb93a0fab01d89498f32620c29671273a34e585e97", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-20.1", > "2018-09-21 12:24:49,223 DEBUG: 28581 -- NET_HOST enabled", > "2018-09-21 12:24:49,223 DEBUG: 28581 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-rabbitmq --env PUPPET_TAGS=file,file_line,concat,augeas,cron,file --env NAME=rabbitmq --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpWwZgg6:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-20.1", > "2018-09-21 12:24:54,981 DEBUG: 28582 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 2.69 seconds", > "Notice: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Config[haproxy]/Concat[/etc/haproxy/haproxy.cfg]/File[/etc/haproxy/haproxy.cfg]/content: content changed '{md5}1f337186b0e1ba5ee82760cb437fb810' to '{md5}9200d9c4be339b9075c4e48ff8a14619'", > "Notice: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Config[haproxy]/Concat[/etc/haproxy/haproxy.cfg]/File[/etc/haproxy/haproxy.cfg]/mode: mode changed '0644' to '0640'", > "Notice: Applied catalog in 0.46 seconds", > " Changed: 1", > " Out of sync: 1", > " Total: 76", > " Last run: 1537532693", > " Config retrieval: 2.99", > " Total: 3.03", > " Config: 1537532690", > "Gathering files modified after 2018-09-21 12:24:43.351153566 +0000", > "2018-09-21 12:24:54,981 DEBUG: 28582 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,haproxy_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,haproxy_config'", > "+ origin_of_time=/var/lib/config-data/haproxy.origin_of_time", > "+ touch /var/lib/config-data/haproxy.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,haproxy_config /etc/config.pp", > "Warning: This method is deprecated, please use match expressions with 
Stdlib::Compat::Ipv6 instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/tripleo/manifests/pacemaker/haproxy_with_vip.pp\", 65]:", > "Warning: Scope(Haproxy::Config[haproxy]): haproxy: The $merge_options parameter will default to true in the next major release. Please review the documentation regarding the implications.", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/haproxy", > "++ stat -c %y /var/lib/config-data/haproxy.origin_of_time", > "+ echo 'Gathering files modified after 2018-09-21 12:24:43.351153566 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/haproxy", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/haproxy", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/haproxy.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/haproxy", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/haproxy --mtime=1970-01-01", > "2018-09-21 12:24:54,981 INFO: 28582 -- Removing container: docker-puppet-haproxy", > "2018-09-21 12:24:55,026 DEBUG: 28582 -- docker-puppet-haproxy", > "2018-09-21 12:24:55,027 INFO: 28582 -- Finished processing puppet configs for haproxy", > "2018-09-21 12:24:55,027 INFO: 28582 -- Starting configuration of ceilometer using image 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-20.1", > "2018-09-21 12:24:55,027 DEBUG: 28582 -- config_volume ceilometer", > "2018-09-21 12:24:55,027 DEBUG: 28582 -- puppet_tags file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config", > "2018-09-21 12:24:55,027 DEBUG: 28582 -- manifest include ::tripleo::profile::base::ceilometer::agent::polling", > "include ::tripleo::profile::base::ceilometer::agent::notification", > "2018-09-21 12:24:55,027 DEBUG: 28582 -- config_image 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-20.1", > "2018-09-21 12:24:55,027 DEBUG: 28582 -- volumes []", > "2018-09-21 12:24:55,030 INFO: 28582 -- Removing container: docker-puppet-ceilometer", > "2018-09-21 12:24:55,105 INFO: 28582 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-20.1", > "2018-09-21 12:24:55,634 DEBUG: 28580 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 1.98 seconds", > "Notice: /Stage[main]/Swift::Keymaster/Swift_keymaster_config[kms_keymaster/api_class]/ensure: created", > "Notice: /Stage[main]/Swift::Keymaster/Swift_keymaster_config[kms_keymaster/username]/ensure: created", > "Notice: /Stage[main]/Swift::Keymaster/Swift_keymaster_config[kms_keymaster/project_name]/ensure: created", > "Notice: /Stage[main]/Swift::Keymaster/Swift_keymaster_config[kms_keymaster/project_domain_id]/ensure: created", > "Notice: /Stage[main]/Swift::Keymaster/Swift_keymaster_config[kms_keymaster/user_domain_id]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[filter:cache/memcache_servers]/value: value changed 
'127.0.0.1:11211' to '172.17.1.17:11211'", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/auto_create_account_prefix]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/concurrency]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/expiring_objects_account_name]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/interval]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/process]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/processes]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/reclaim_age]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/recon_cache_path]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/report_interval]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/log_facility]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/log_level]/ensure: created", > "Notice: /Stage[main]/Rsync::Server/Xinetd::Service[rsync]/File[/rsync]/ensure: defined content as '{md5}6cb19d81001ba066a21daeaae3f18acb'", > "Notice: /Stage[main]/Rsync::Server/Concat[/etc/rsyncd.conf]/File[/etc/rsyncd.conf]/content: content changed '{md5}c63fccb45c0dcbbbe17d0f4bdba920ec' to '{md5}4f6ae0be51bb2ac623e0b7d0e814a1ee'", > "Notice: /Stage[main]/Swift/Swift_config[swift-hash/swift_hash_path_suffix]/value: value changed '%SWIFT_HASH_PATH_SUFFIX%' to '2FY3AYR06beiOk8p0zFCB8vJk'", > "Notice: /Stage[main]/Swift/Swift_config[swift-constraints/max_header_size]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/bind_ip]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/workers]/value: value changed '8' to 'auto'", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/log_name]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/log_facility]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/log_level]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/log_headers]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/log_address]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[pipeline:main/pipeline]/value: value changed 'catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk tempurl ratelimit copy container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server' to 'catch_errors healthcheck proxy-logging cache ratelimit bulk tempurl formpost authtoken s3api s3token keystone staticweb copy container_quotas account_quotas slo dlo versioned_writes proxy-logging proxy-server'", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/set log_name]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/set log_facility]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/set log_level]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/set 
log_address]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/log_handoffs]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/allow_account_management]/value: value changed 'true' to 'True'", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/account_autocreate]/value: value changed 'true' to 'True'", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/node_timeout]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Cache/Swift_proxy_config[filter:cache/memcache_servers]/value: value changed '127.0.0.1:11211' to '172.17.1.17:11211'", > "Notice: /Stage[main]/Swift::Proxy::Keystone/Swift_proxy_config[filter:keystone/operator_roles]/value: value changed 'admin, SwiftOperator' to 'admin, swiftoperator, ResellerAdmin'", > "Notice: /Stage[main]/Swift::Proxy::Keystone/Swift_proxy_config[filter:keystone/reseller_prefix]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/File[/var/cache/swift]/mode: mode changed '0755' to '0700'", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/log_name]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/signing_dir]/value: value changed '/tmp/keystone-signing-swift' to '/var/cache/swift'", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/auth_plugin]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/project_domain_id]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/user_domain_id]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/username]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/password]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/delay_auth_decision]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/cache]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/include_service_catalog]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Staticweb/Swift_proxy_config[filter:staticweb/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Staticweb/Swift_proxy_config[filter:staticweb/url_base]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Ratelimit/Swift_proxy_config[filter:ratelimit/clock_accuracy]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Ratelimit/Swift_proxy_config[filter:ratelimit/max_sleep_time_seconds]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Ratelimit/Swift_proxy_config[filter:ratelimit/log_sleep_time_seconds]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Ratelimit/Swift_proxy_config[filter:ratelimit/rate_buffer_seconds]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Ratelimit/Swift_proxy_config[filter:ratelimit/account_ratelimit]/ensure: created", > "Notice: 
/Stage[main]/Swift::Proxy::Formpost/Swift_proxy_config[filter:formpost/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Bulk/Swift_proxy_config[filter:bulk/max_containers_per_extraction]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Bulk/Swift_proxy_config[filter:bulk/max_failed_extractions]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Bulk/Swift_proxy_config[filter:bulk/max_deletes_per_request]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Bulk/Swift_proxy_config[filter:bulk/yield_frequency]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Versioned_writes/Swift_proxy_config[filter:versioned_writes/allow_versioned_writes]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/max_manifest_segments]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/max_manifest_size]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/min_segment_size]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/rate_limit_after_segment]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/rate_limit_segments_per_sec]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/max_get_time]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Dlo/Swift_proxy_config[filter:dlo/rate_limit_after_segment]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Dlo/Swift_proxy_config[filter:dlo/rate_limit_segments_per_sec]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Dlo/Swift_proxy_config[filter:dlo/max_get_time]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Copy/Swift_proxy_config[filter:copy/object_post_as_copy]/value: value changed 'false' to 'True'", > "Notice: /Stage[main]/Swift::Proxy::Container_quotas/Swift_proxy_config[filter:container_quotas/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Account_quotas/Swift_proxy_config[filter:account_quotas/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Encryption/Swift_proxy_config[filter:encryption/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Encryption/Swift_proxy_config[filter:encryption/disable_encryption]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Kms_keymaster/Swift_proxy_config[filter:kms_keymaster/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Kms_keymaster/Swift_proxy_config[filter:kms_keymaster/keymaster_config_path]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::S3api/Swift_proxy_config[filter:s3api/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::S3api/Swift_proxy_config[filter:s3api/auth_pipeline_check]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::S3token/Swift_proxy_config[filter:s3token/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::S3token/Swift_proxy_config[filter:s3token/auth_uri]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Storage/File[/srv/node]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Storage/File[/srv/node/d1]/ensure: created", > "Notice: /Stage[main]/Swift::Storage::Account/Swift::Storage::Generic[account]/File[/etc/swift/account-server/]/ensure: created", > "Notice: /Stage[main]/Swift::Storage::Container/Swift::Storage::Generic[container]/File[/etc/swift/container-server/]/ensure: created", > "Notice: 
/Stage[main]/Swift::Storage::Object/Swift::Storage::Generic[object]/File[/etc/swift/object-server/]/ensure: created", > "Notice: /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6002]/Concat[/etc/swift/account-server.conf]/File[/etc/swift/account-server.conf]/ensure: defined content as '{md5}7a29c9aef6bebaf1f575832e5b269043'", > "Notice: /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6001]/Concat[/etc/swift/container-server.conf]/File[/etc/swift/container-server.conf]/ensure: defined content as '{md5}83e075165d02c8dc54999c0a61da5f7d'", > "Notice: /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6000]/Concat[/etc/swift/object-server.conf]/File[/etc/swift/object-server.conf]/ensure: defined content as '{md5}119aedac3709a15cf5c6ff801d140433'", > "Notice: Applied catalog in 0.66 seconds", > " Total: 97", > " Success: 97", > " Total: 192", > " Out of sync: 97", > " Changed: 97", > " Swift config: 0.00", > " Swift keymaster config: 0.01", > " Swift object expirer config: 0.01", > " File: 0.09", > " Swift proxy config: 0.19", > " Config retrieval: 2.32", > " Total: 2.62", > " Config: 1537532691", > "Gathering files modified after 2018-09-21 12:24:44.811192739 +0000", > "2018-09-21 12:24:55,634 DEBUG: 28580 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server'", > "+ origin_of_time=/var/lib/config-data/swift.origin_of_time", > "+ touch /var/lib/config-data/swift.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server /etc/config.pp", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/swift/manifests/config.pp\", 38]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/swift/proxy.pp\", 147]", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/swift/manifests/proxy.pp\", 163]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/swift/proxy.pp\", 148]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/swift/manifests/proxy.pp\", 165]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/swift/proxy.pp\", 148]", > "Warning: Unknown variable: 'methods_real'. at /etc/puppet/modules/swift/manifests/proxy/tempurl.pp:100:56", > "Warning: Unknown variable: 'incoming_remove_headers_real'. at /etc/puppet/modules/swift/manifests/proxy/tempurl.pp:101:56", > "Warning: Unknown variable: 'incoming_allow_headers_real'. at /etc/puppet/modules/swift/manifests/proxy/tempurl.pp:102:56", > "Warning: Unknown variable: 'outgoing_remove_headers_real'. 
at /etc/puppet/modules/swift/manifests/proxy/tempurl.pp:103:56", > "Warning: Unknown variable: 'outgoing_allow_headers_real'. at /etc/puppet/modules/swift/manifests/proxy/tempurl.pp:104:56", > "Warning: Scope(Class[Swift::Storage::All]): The default port for the object storage server has changed from 6000 to 6200 and will be changed in a later release", > "Warning: Scope(Class[Swift::Storage::All]): The default port for the container storage server has changed from 6001 to 6201 and will be changed in a later release", > "Warning: Scope(Class[Swift::Storage::All]): The default port for the account storage server has changed from 6002 to 6202 and will be changed in a later release", > "Warning: Class 'xinetd' is already defined at /etc/config.pp:6; cannot redefine at /etc/puppet/modules/xinetd/manifests/init.pp:12", > "Warning: Unknown variable: 'xinetd::params::default_user'. at /etc/puppet/modules/xinetd/manifests/service.pp:110:14", > "Warning: Unknown variable: 'xinetd::params::default_group'. at /etc/puppet/modules/xinetd/manifests/service.pp:116:15", > "Warning: Unknown variable: 'xinetd::confdir'. at /etc/puppet/modules/xinetd/manifests/service.pp:161:13", > "Warning: Unknown variable: 'xinetd::service_name'. at /etc/puppet/modules/xinetd/manifests/service.pp:166:24", > "Warning: Unknown variable: 'xinetd::confdir'. at /etc/puppet/modules/xinetd/manifests/service.pp:167:21", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Array instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/swift/manifests/storage/server.pp\", 189]:", > " with Pattern[]. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/swift/manifests/storage/server.pp\", 203]:", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/swift", > "++ stat -c %y /var/lib/config-data/swift.origin_of_time", > "+ echo 'Gathering files modified after 2018-09-21 12:24:44.811192739 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/swift", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/swift", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/swift.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/swift", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/swift --mtime=1970-01-01", > "2018-09-21 12:24:55,634 INFO: 28580 -- Removing container: docker-puppet-swift", > "2018-09-21 12:24:55,679 DEBUG: 28580 -- docker-puppet-swift", > "2018-09-21 12:24:55,680 INFO: 28580 -- Finished processing puppet configs for swift", > "2018-09-21 12:24:55,680 INFO: 28580 -- Starting configuration of heat_api_cfn using image 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-09-20.1", > "2018-09-21 12:24:55,680 DEBUG: 28580 -- config_volume heat_api_cfn", > "2018-09-21 12:24:55,680 DEBUG: 28580 -- puppet_tags file,file_line,concat,augeas,cron,heat_config,file,concat,file_line", > "2018-09-21 12:24:55,680 DEBUG: 28580 -- manifest include ::tripleo::profile::base::heat::api_cfn", > "2018-09-21 12:24:55,680 DEBUG: 28580 -- config_image 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-09-20.1", > "2018-09-21 12:24:55,681 DEBUG: 28580 -- volumes []", > "2018-09-21 12:24:55,682 INFO: 28580 -- Removing container: docker-puppet-heat_api_cfn", > "2018-09-21 12:24:55,758 INFO: 28580 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-09-20.1", > "2018-09-21 12:24:56,382 DEBUG: 28580 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn ... 
", > "2018-09-20.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn", > "c59832cf029f: Already exists", > "cb12dd03d5ce: Pulling fs layer", > "cb12dd03d5ce: Verifying Checksum", > "cb12dd03d5ce: Download complete", > "cb12dd03d5ce: Pull complete", > "Digest: sha256:35c83f306fcd2d7e9d92bcf2b1efa98242f8959f6ae41a5104a8d1848ff43cbb", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-09-20.1", > "2018-09-21 12:24:56,385 DEBUG: 28580 -- NET_HOST enabled", > "2018-09-21 12:24:56,385 DEBUG: 28580 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-heat_api_cfn --env PUPPET_TAGS=file,file_line,concat,augeas,cron,heat_config,file,concat,file_line --env NAME=heat_api_cfn --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpInprir:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-09-20.1", > "2018-09-21 12:24:57,477 DEBUG: 28582 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-ceilometer-central ... 
", > "2018-09-20.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-ceilometer-central", > "39327dc96373: Pulling fs layer", > "7c3fd050f245: Pulling fs layer", > "39327dc96373: Verifying Checksum", > "39327dc96373: Download complete", > "7c3fd050f245: Verifying Checksum", > "7c3fd050f245: Download complete", > "39327dc96373: Pull complete", > "7c3fd050f245: Pull complete", > "Digest: sha256:a31d498553693f09b6d3e9a981237ed4c4a1f12a50cbb87cfadb74c9b99a5f63", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-20.1", > "2018-09-21 12:24:57,481 DEBUG: 28582 -- NET_HOST enabled", > "2018-09-21 12:24:57,481 DEBUG: 28582 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-ceilometer --env PUPPET_TAGS=file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config --env NAME=ceilometer --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpF9ZyUh:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-20.1", > "2018-09-21 12:25:03,080 DEBUG: 28581 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 0.84 seconds", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/rabbitmq]/owner: owner changed 'rabbitmq' to 'root'", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/rabbitmq]/group: group changed 'rabbitmq' to 'root'", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/rabbitmq/ssl]/ensure: created", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq-env.config]/ensure: defined content as '{md5}d2eee02b74d42601e57574435e10a026'", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq-inetrc]/ensure: defined content as '{md5}12f8d1a1f9f57f23c1be6c7bf2286e73'", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmqadmin.conf]/ensure: defined content as '{md5}44d4ef5cb86ab30e6127e83939ef09c4'", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/systemd/system/rabbitmq-server.service.d]/ensure: created", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/systemd/system/rabbitmq-server.service.d/limits.conf]/ensure: defined content as '{md5}91d370d2c5a1af171c9d5b5985fca733'", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/security/limits.d/rabbitmq-server.conf]/ensure: defined content as '{md5}1030abc4db405b5f2969643e99bc7435'", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq.config]/content: content changed '{md5}b346ec0a8320f85f795bf612f6b02da7' to '{md5}35367bd5f007dc7f00cc3ce2285bbe67'", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq.config]/owner: owner changed 'rabbitmq' to 'root'", > "Notice: 
/Stage[main]/Rabbitmq::Config/File[rabbitmq.config]/mode: mode changed '0644' to '0640'", > "Notice: Applied catalog in 0.06 seconds", > " Total: 12", > " Success: 12", > " Total: 19", > " Out of sync: 9", > " Changed: 9", > " Config retrieval: 0.96", > " Total: 0.99", > " Last run: 1537532701", > " Config: 1537532700", > "Gathering files modified after 2018-09-21 12:24:49.400314406 +0000", > "2018-09-21 12:25:03,081 DEBUG: 28581 -- + mkdir -p /etc/puppet", > "+ origin_of_time=/var/lib/config-data/rabbitmq.origin_of_time", > "+ touch /var/lib/config-data/rabbitmq.origin_of_time", > "Warning: ModuleLoader: module 'rabbitmq' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/rabbitmq", > "++ stat -c %y /var/lib/config-data/rabbitmq.origin_of_time", > "+ echo 'Gathering files modified after 2018-09-21 12:24:49.400314406 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/rabbitmq", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/rabbitmq", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/rabbitmq.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/rabbitmq", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/rabbitmq --mtime=1970-01-01", > "2018-09-21 12:25:03,081 INFO: 28581 -- Removing container: docker-puppet-rabbitmq", > "2018-09-21 12:25:03,126 DEBUG: 28581 -- docker-puppet-rabbitmq", > "2018-09-21 12:25:03,126 INFO: 28581 -- Finished processing puppet configs for rabbitmq", > "2018-09-21 12:25:03,126 INFO: 28581 -- Starting configuration of neutron using image 192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-20.1", > "2018-09-21 12:25:03,127 DEBUG: 28581 -- config_volume neutron", > "2018-09-21 12:25:03,127 DEBUG: 28581 -- puppet_tags file,file_line,concat,augeas,cron,neutron_config,neutron_api_config,neutron_plugin_ml2,neutron_config,neutron_dhcp_agent_config,neutron_config,neutron_l3_agent_config,neutron_config,neutron_metadata_agent_config,neutron_config,neutron_agent_ovs,neutron_plugin_ml2", > "2018-09-21 12:25:03,127 DEBUG: 28581 -- manifest include tripleo::profile::base::neutron::server", > "include ::tripleo::profile::base::neutron::plugins::ml2", > "include tripleo::profile::base::neutron::dhcp", > "include tripleo::profile::base::neutron::l3", > "include tripleo::profile::base::neutron::metadata", > "include ::tripleo::profile::base::neutron::ovs", > "2018-09-21 12:25:03,127 DEBUG: 28581 -- config_image 192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-20.1", > "2018-09-21 12:25:03,127 DEBUG: 28581 -- volumes [u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch']", > "2018-09-21 12:25:03,128 INFO: 28581 -- Removing container: docker-puppet-neutron", > "2018-09-21 12:25:03,194 INFO: 28581 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-20.1", > "2018-09-21 12:25:07,360 DEBUG: 28582 -- Notice: hiera(): Cannot load backend module_data: cannot load 
such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 1.27 seconds", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/http_timeout]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/host]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[publisher/telemetry_secret]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_password]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/auth_url]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/region_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/username]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/password]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/project_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/auth_type]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/interface]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[DEFAULT/polling_namespaces]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[coordination/backend_url]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Notification/File[event_pipeline]/ensure: defined content as '{md5}e1b13cf3e430a5cacf9cd8ad4704c3b5'", > "Notice: /Stage[main]/Ceilometer::Agent::Notification/Ceilometer_config[notification/ack_on_event_error]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Rabbit[ceilometer_config]/Ceilometer_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Rabbit[ceilometer_config]/Ceilometer_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/topics]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Default[ceilometer_config]/Ceilometer_config[DEFAULT/transport_url]/ensure: created", > "Notice: Applied catalog in 0.75 seconds", > " Total: 26", > " Success: 26", > " Total: 156", > " Out of sync: 26", > " Changed: 26", > " Skipped: 35", > " Ceilometer config: 0.55", > " Config 
retrieval: 1.51", > " Last run: 1537532705", > " Total: 2.06", > " Config: 1537532703", > "Gathering files modified after 2018-09-21 12:24:57.697528940 +0000", > "2018-09-21 12:25:07,361 DEBUG: 28582 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config'", > "+ origin_of_time=/var/lib/config-data/ceilometer.origin_of_time", > "+ touch /var/lib/config-data/ceilometer.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config /etc/config.pp", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ceilometer/manifests/config.pp\", 35]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/ceilometer.pp\", 111]", > "Warning: Scope(Class[Ceilometer::Dispatcher::Gnocchi]): The class ceilometer::dispatcher::gnocchi is deprecated. All its", > " options must be set as url parameters in", > " ceilometer::agent::notification::pipeline_publishers. Depending of the used", > " Gnocchi version their might be ignored.", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ceilometer/manifests/agent/notification.pp\", 118]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/ceilometer/agent/notification.pp\", 34]", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/ceilometer", > "++ stat -c %y /var/lib/config-data/ceilometer.origin_of_time", > "+ echo 'Gathering files modified after 2018-09-21 12:24:57.697528940 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/ceilometer", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/ceilometer", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/ceilometer.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/ceilometer", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/ceilometer --mtime=1970-01-01", > "2018-09-21 12:25:07,361 INFO: 28582 -- Removing container: docker-puppet-ceilometer", > "2018-09-21 12:25:07,399 DEBUG: 28582 -- docker-puppet-ceilometer", > "2018-09-21 12:25:07,400 INFO: 28582 -- Finished processing puppet configs for ceilometer", > "2018-09-21 12:25:09,067 DEBUG: 28581 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight ... 
", > "2018-09-20.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight", > "763394b9c1e7: Pulling fs layer", > "055e8a682563: Pulling fs layer", > "5fd420ae7fff: Pulling fs layer", > "5fd420ae7fff: Verifying Checksum", > "5fd420ae7fff: Download complete", > "055e8a682563: Download complete", > "763394b9c1e7: Verifying Checksum", > "763394b9c1e7: Download complete", > "763394b9c1e7: Pull complete", > "055e8a682563: Pull complete", > "5fd420ae7fff: Pull complete", > "Digest: sha256:bc9fc3e332047433fa698e663155925985c4c0834382d5fcd1ec6adffc77277c", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-20.1", > "2018-09-21 12:25:09,070 DEBUG: 28581 -- NET_HOST enabled", > "2018-09-21 12:25:09,070 DEBUG: 28581 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-neutron --env PUPPET_TAGS=file,file_line,concat,augeas,cron,neutron_config,neutron_api_config,neutron_plugin_ml2,neutron_config,neutron_dhcp_agent_config,neutron_config,neutron_l3_agent_config,neutron_config,neutron_metadata_agent_config,neutron_config,neutron_agent_ovs,neutron_plugin_ml2 --env NAME=neutron --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmptC4syV:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-20.1", > "2018-09-21 12:25:12,180 DEBUG: 28580 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 4.28 seconds", > "Notice: /Stage[main]/Heat::Api_cfn/Heat_config[heat_api_cfn/bind_host]/ensure: created", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}39c3a1db9af73c8f6cc5439492ddc431'", > "Notice: /Stage[main]/Apache::Mod::Headers/Apache::Mod[headers]/File[headers.load]/ensure: defined content as '{md5}96094c96352002c43ada5bdf8650ff38'", > "Notice: /Stage[main]/Heat::Wsgi::Apache_api_cfn/Heat::Wsgi::Apache[api_cfn]/Openstacklib::Wsgi::Apache[heat_api_cfn_wsgi]/File[/var/www/cgi-bin/heat]/ensure: created", > "Notice: /Stage[main]/Heat::Wsgi::Apache_api_cfn/Heat::Wsgi::Apache[api_cfn]/Openstacklib::Wsgi::Apache[heat_api_cfn_wsgi]/File[heat_api_cfn_wsgi]/ensure: defined content as '{md5}c3ae61ab87649c8cdfab8977da2b194b'", > "Notice: /Stage[main]/Heat::Wsgi::Apache_api_cfn/Heat::Wsgi::Apache[api_cfn]/Openstacklib::Wsgi::Apache[heat_api_cfn_wsgi]/Apache::Vhost[heat_api_cfn_wsgi]/Concat[10-heat_api_cfn_wsgi.conf]/File[/etc/httpd/conf.d/10-heat_api_cfn_wsgi.conf]/ensure: defined content as 
'{md5}03ebb24cdb8528dae96b95c3e6a7c213'", > "Notice: Applied catalog in 2.64 seconds", > " Total: 122", > " Success: 122", > " Changed: 122", > " Out of sync: 122", > " Total: 338", > " File: 0.34", > " Heat config: 1.57", > " Last run: 1537532710", > " Config retrieval: 4.78", > " Total: 6.76", > " Config: 1537532702", > "Gathering files modified after 2018-09-21 12:24:56.586500613 +0000", > "2018-09-21 12:25:12,180 DEBUG: 28580 -- + mkdir -p /etc/puppet", > "+ origin_of_time=/var/lib/config-data/heat_api_cfn.origin_of_time", > "+ touch /var/lib/config-data/heat_api_cfn.origin_of_time", > " with Stdlib::Compat::Integer. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/heat/manifests/wsgi/apache_api_cfn.pp\", 125]:[\"/etc/config.pp\", 2]", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/heat_api_cfn", > "++ stat -c %y /var/lib/config-data/heat_api_cfn.origin_of_time", > "+ echo 'Gathering files modified after 2018-09-21 12:24:56.586500613 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/heat_api_cfn", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/heat_api_cfn", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/heat_api_cfn.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/heat_api_cfn", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/heat_api_cfn --mtime=1970-01-01", > "2018-09-21 12:25:12,180 INFO: 28580 -- Removing container: docker-puppet-heat_api_cfn", > "2018-09-21 12:25:12,233 DEBUG: 28580 -- docker-puppet-heat_api_cfn", > "2018-09-21 12:25:12,233 INFO: 28580 -- Finished processing puppet configs for heat_api_cfn", > "2018-09-21 12:25:22,429 DEBUG: 28581 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 3.13 seconds", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/bind_host]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/auth_strategy]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/core_plugin]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/host]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dns_domain]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agents_per_network]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agent_notification]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/allow_overlapping_ips]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/global_physnet_mtu]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[agent/root_helper]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/service_plugins]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/auth_url]/ensure: created", > "Notice: 
/Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/username]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/password]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/project_domain_id]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/project_name]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/user_domain_id]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/endpoint_type]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/auth_type]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/tenant_name]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[DEFAULT/notify_nova_on_port_status_changes]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[DEFAULT/notify_nova_on_port_data_changes]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/l3_ha]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/max_l3_agents_per_router]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/api_workers]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/rpc_workers]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/router_scheduler_driver]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/router_distributed]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/enable_dvr]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/allow_automatic_l3agent_failover]/ensure: created", > "Notice: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_port]/ensure: created", > "Notice: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_firewall_rule]/ensure: created", > "Notice: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_network_gateway]/ensure: created", > "Notice: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_packet_filter]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/File[/etc/neutron/plugin.ini]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/File[/etc/default/neutron-server]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/type_drivers]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/tenant_network_types]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/mechanism_drivers]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/path_mtu]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/extension_drivers]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/overlay_ip_version]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[securitygroup/firewall_driver]/ensure: created", > "Notice: 
/Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/enable_isolated_metadata]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/force_metadata]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/enable_metadata_network]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/state_path]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/resync_interval]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/interface_driver]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/root_helper]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/dnsmasq_dns_servers]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/dnsmasq_local_resolv]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/interface_driver]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/agent_mode]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/nova_metadata_ip]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/nova_metadata_host]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/nova_metadata_protocol]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/metadata_proxy_shared_secret]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/metadata_workers]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/bridge_mappings]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/l2_population]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/arp_responder]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/enable_distributed_routing]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/drop_flows_on_start]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/extensions]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/integration_bridge]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[securitygroup/firewall_driver]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/tunnel_bridge]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/local_ip]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/tunnel_types]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/vxlan_udp_port]/ensure: created", > "Notice: 
/Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/control_exchange]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Concurrency[neutron_config]/Neutron_config[oslo_concurrency/lock_path]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/max_retries]/ensure: created", > "Notice: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/db_max_retries]/ensure: created", > "Notice: /Stage[main]/Neutron::Policy/Oslo::Policy[neutron_config]/Neutron_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/username]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Oslo::Middleware[neutron_config]/Neutron_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vxlan]/Neutron_plugin_ml2[ml2_type_vxlan/vxlan_group]/ensure: created", > "Notice: 
/Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vxlan]/Neutron_plugin_ml2[ml2_type_vxlan/vni_ranges]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vlan]/Neutron_plugin_ml2[ml2_type_vlan/network_vlan_ranges]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[flat]/Neutron_plugin_ml2[ml2_type_flat/flat_networks]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[gre]/Neutron_plugin_ml2[ml2_type_gre/tunnel_id_ranges]/ensure: created", > "Notice: Applied catalog in 1.65 seconds", > " Total: 105", > " Success: 105", > " Changed: 105", > " Out of sync: 105", > " Total: 357", > " Skipped: 44", > " Neutron api config: 0.00", > " Neutron l3 agent config: 0.01", > " Neutron agent ovs: 0.01", > " Neutron metadata agent config: 0.02", > " Neutron plugin ml2: 0.03", > " Neutron dhcp agent config: 0.03", > " Neutron config: 1.13", > " Last run: 1537532720", > " Config retrieval: 3.50", > " Total: 4.80", > " Config: 1537532715", > "Gathering files modified after 2018-09-21 12:25:09.250816336 +0000", > "2018-09-21 12:25:22,429 DEBUG: 28581 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,neutron_config,neutron_api_config,neutron_plugin_ml2,neutron_config,neutron_dhcp_agent_config,neutron_config,neutron_l3_agent_config,neutron_config,neutron_metadata_agent_config,neutron_config,neutron_agent_ovs,neutron_plugin_ml2 ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,neutron_config,neutron_api_config,neutron_plugin_ml2,neutron_config,neutron_dhcp_agent_config,neutron_config,neutron_l3_agent_config,neutron_config,neutron_metadata_agent_config,neutron_config,neutron_agent_ovs,neutron_plugin_ml2'", > "+ origin_of_time=/var/lib/config-data/neutron.origin_of_time", > "+ touch /var/lib/config-data/neutron.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,neutron_config,neutron_api_config,neutron_plugin_ml2,neutron_config,neutron_dhcp_agent_config,neutron_config,neutron_l3_agent_config,neutron_config,neutron_metadata_agent_config,neutron_config,neutron_agent_ovs,neutron_plugin_ml2 /etc/config.pp", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Array instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/neutron/manifests/init.pp\", 486]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/neutron/server.pp\", 104]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/neutron/manifests/config.pp\", 136]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/neutron.pp\", 141]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/neutron/manifests/db.pp\", 69]:[\"/etc/puppet/modules/neutron/manifests/server.pp\", 290]", > "Warning: Scope(Class[Neutron::Keystone::Authtoken]): The auth_uri parameter is deprecated. Please use www_authenticate_uri instead.", > "Warning: Unknown variable: '::neutron::params::metadata_agent_package'. at /etc/puppet/modules/neutron/manifests/agents/metadata.pp:122:6", > " with Stdlib::Compat::Array. 
There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/neutron/manifests/agents/ml2/ovs.pp\", 208]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/neutron/ovs.pp\", 59]", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/neutron", > "++ stat -c %y /var/lib/config-data/neutron.origin_of_time", > "+ echo 'Gathering files modified after 2018-09-21 12:25:09.250816336 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/neutron", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/neutron", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/neutron.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/neutron", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/neutron --mtime=1970-01-01", > "2018-09-21 12:25:22,430 INFO: 28581 -- Removing container: docker-puppet-neutron", > "2018-09-21 12:25:22,472 DEBUG: 28581 -- docker-puppet-neutron", > "2018-09-21 12:25:22,472 INFO: 28581 -- Finished processing puppet configs for neutron", > "2018-09-21 12:25:22,472 INFO: 28581 -- Starting configuration of horizon using image 192.168.24.1:8787/rhosp14/openstack-horizon:2018-09-20.1", > "2018-09-21 12:25:22,473 DEBUG: 28581 -- config_volume horizon", > "2018-09-21 12:25:22,473 DEBUG: 28581 -- puppet_tags file,file_line,concat,augeas,cron,horizon_config", > "2018-09-21 12:25:22,473 DEBUG: 28581 -- manifest include ::tripleo::profile::base::horizon", > "2018-09-21 12:25:22,473 DEBUG: 28581 -- config_image 192.168.24.1:8787/rhosp14/openstack-horizon:2018-09-20.1", > "2018-09-21 12:25:22,473 DEBUG: 28581 -- volumes []", > "2018-09-21 12:25:22,473 INFO: 28581 -- Removing container: docker-puppet-horizon", > "2018-09-21 12:25:22,536 INFO: 28581 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-horizon:2018-09-20.1", > "2018-09-21 12:25:27,733 DEBUG: 28581 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-horizon ... 
", > "2018-09-20.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-horizon", > "c23d952caf55: Pulling fs layer", > "c23d952caf55: Verifying Checksum", > "c23d952caf55: Download complete", > "c23d952caf55: Pull complete", > "Digest: sha256:e0ab0266811535049c64c233aeb2a27f93b4ce82d4252213669b7dc5280584d3", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-horizon:2018-09-20.1", > "2018-09-21 12:25:27,736 DEBUG: 28581 -- NET_HOST enabled", > "2018-09-21 12:25:27,736 DEBUG: 28581 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-horizon --env PUPPET_TAGS=file,file_line,concat,augeas,cron,horizon_config --env NAME=horizon --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpd77zWC:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-horizon:2018-09-20.1", > "2018-09-21 12:25:39,757 DEBUG: 28581 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 2.75 seconds", > "Notice: /Stage[main]/Apache::Mod::Remoteip/File[remoteip.conf]/ensure: defined content as '{md5}d559e70a6d7fe898f16cbf1443ca36ea'", > "Notice: /Stage[main]/Horizon::Wsgi::Apache/File[/var/log/horizon]/mode: mode changed '0750' to '0751'", > "Notice: /Stage[main]/Horizon::Wsgi::Apache/File[/var/log/horizon/horizon.log]/ensure: created", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}c8f9d80ced5f5f2492578adc970926a9'", > "Notice: /Stage[main]/Apache::Mod::Remoteip/Apache::Mod[remoteip]/File[remoteip.load]/ensure: defined content as '{md5}118eb7518a1d018a162d23dfe32c4bad'", > "Notice: /Stage[main]/Horizon/Concat[/etc/openstack-dashboard/local_settings]/File[/etc/openstack-dashboard/local_settings]/content: content changed '{md5}7dc19f54a8c36f75bf25f13ca2df2160' to '{md5}5ff9307184b1327996e4d26c5f396f4a'", > "Notice: /Stage[main]/Horizon/Concat[/etc/openstack-dashboard/local_settings]/File[/etc/openstack-dashboard/local_settings]/owner: owner changed 'horizon' to 'apache'", > "Notice: /Stage[main]/Horizon/Concat[/etc/openstack-dashboard/local_settings]/File[/etc/openstack-dashboard/local_settings]/group: group changed 'horizon' to 'apache'", > "Notice: /Stage[main]/Horizon::Wsgi::Apache/File[/etc/httpd/conf.d/openstack-dashboard.conf]/content: content changed '{md5}4cb4b1391d3553951208fad1ce791e5c' to '{md5}3f4b1c53d0e150dae37b3ee5dcaf622d'", > "Notice: /Stage[main]/Horizon::Wsgi::Apache/Apache::Vhost[horizon_vhost]/Concat[10-horizon_vhost.conf]/File[/etc/httpd/conf.d/10-horizon_vhost.conf]/ensure: defined content as '{md5}367d089a6e2b1a1af9fe53a615f07ed6'", > "Notice: 
Applied catalog in 0.71 seconds", > " Total: 86", > " Success: 86", > " Total: 172", > " Out of sync: 84", > " Changed: 84", > " Last run: 1537532738", > " Config retrieval: 3.17", > " Total: 3.48", > " Config: 1537532734", > "Gathering files modified after 2018-09-21 12:25:27.934254566 +0000", > "2018-09-21 12:25:39,757 DEBUG: 28581 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,horizon_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,horizon_config'", > "+ origin_of_time=/var/lib/config-data/horizon.origin_of_time", > "+ touch /var/lib/config-data/horizon.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,horizon_config /etc/config.pp", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Ipv6 instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/tripleo/manifests/profile/base/horizon.pp\", 97]:[\"/etc/config.pp\", 2]", > "Warning: ModuleLoader: module 'horizon' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: Undefined variable ''; ", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/horizon/manifests/init.pp\", 599]:[\"/etc/config.pp\", 2]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/horizon/manifests/init.pp\", 600]:[\"/etc/config.pp\", 2]", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/horizon/manifests/init.pp\", 602]:[\"/etc/config.pp\", 2]", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/horizon", > "++ stat -c %y /var/lib/config-data/horizon.origin_of_time", > "+ echo 'Gathering files modified after 2018-09-21 12:25:27.934254566 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/horizon", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/horizon", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/horizon.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/horizon", > "+ tar -c --mtime=1970-01-01 '--exclude=*/etc/swift/backups/*' '--exclude=*/etc/swift/*.ring.gz' '--exclude=*/etc/swift/*.builder' '--exclude=*/etc/libvirt/passwd.db' -f - /var/lib/config-data/puppet-generated/horizon --mtime=1970-01-01", > "2018-09-21 12:25:39,758 INFO: 28581 -- Removing container: docker-puppet-horizon", > "2018-09-21 12:25:39,810 DEBUG: 28581 -- docker-puppet-horizon", > "2018-09-21 12:25:39,811 INFO: 28581 -- Finished processing puppet configs for horizon", > "2018-09-21 12:25:39,811 DEBUG: 28579 -- CONFIG_VOLUME_PREFIX: /var/lib/config-data", > "2018-09-21 12:25:39,811 DEBUG: 28579 -- STARTUP_CONFIG_PATTERN: /var/lib/tripleo-config/docker-container-startup-config-step_*.json", > "2018-09-21 12:25:39,814 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/puppet-generated/mysql.md5sum for config_volume /var/lib/config-data/puppet-generated/mysql", > "2018-09-21 12:25:39,815 DEBUG: 28579 -- Got hashfile /var/lib/config-data/puppet-generated/mysql.md5sum for config_volume /var/lib/config-data/puppet-generated/mysql", > "2018-09-21 12:25:39,815 DEBUG: 28579 -- Updating config hash for mysql_bootstrap, config_volume=heat_api_cfn hash=5f3d83586de61de02db2f5ee4c182399", > "2018-09-21 12:25:39,815 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/puppet-generated/rabbitmq.md5sum for config_volume /var/lib/config-data/puppet-generated/rabbitmq", > "2018-09-21 12:25:39,815 DEBUG: 28579 -- Got hashfile /var/lib/config-data/puppet-generated/rabbitmq.md5sum for config_volume /var/lib/config-data/puppet-generated/rabbitmq", > "2018-09-21 12:25:39,815 DEBUG: 28579 -- Updating config hash for rabbitmq_bootstrap, config_volume=heat_api_cfn hash=76de67b00e9b5c99fcfacdab069913fd", > "2018-09-21 12:25:39,815 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/memcached/etc/sysconfig.md5sum for config_volume /var/lib/config-data/memcached/etc/sysconfig", > "2018-09-21 12:25:39,817 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova_placement.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_placement", > "2018-09-21 12:25:39,817 DEBUG: 28579 -- Got hashfile /var/lib/config-data/puppet-generated/nova_placement.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_placement", > "2018-09-21 12:25:39,817 DEBUG: 28579 -- Updating config hash for nova_placement, config_volume=heat_api_cfn hash=b1f68cd00af5ec8ce744f6b62b7ea2b9", > "2018-09-21 12:25:39,817 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-09-21 12:25:39,817 DEBUG: 28579 -- Got 
hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-09-21 12:25:39,817 DEBUG: 28579 -- Updating config hash for swift_rsync_fix, config_volume=heat_api_cfn hash=e6e4ce85c2fca7025583cca403a44526", > "2018-09-21 12:25:39,817 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/nova/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/nova/etc/my.cnf.d", > "2018-09-21 12:25:39,817 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/nova/etc/nova.md5sum for config_volume /var/lib/config-data/nova/etc/nova", > "2018-09-21 12:25:39,817 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/heat/etc/heat.md5sum for config_volume /var/lib/config-data/heat/etc/heat", > "2018-09-21 12:25:39,817 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/heat/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/heat/etc/my.cnf.d", > "2018-09-21 12:25:39,818 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data.md5sum for config_volume /var/lib/config-data", > "2018-09-21 12:25:39,818 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift/etc.md5sum for config_volume /var/lib/config-data/puppet-generated/swift/etc", > "2018-09-21 12:25:39,818 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/nova/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/nova/etc/my.cnf.d", > "2018-09-21 12:25:39,818 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/nova/etc/nova.md5sum for config_volume /var/lib/config-data/nova/etc/nova", > "2018-09-21 12:25:39,818 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/puppet-generated/keystone.md5sum for config_volume /var/lib/config-data/puppet-generated/keystone", > "2018-09-21 12:25:39,818 DEBUG: 28579 -- Got hashfile /var/lib/config-data/puppet-generated/keystone.md5sum for config_volume /var/lib/config-data/puppet-generated/keystone", > "2018-09-21 12:25:39,818 DEBUG: 28579 -- Updating config hash for keystone_cron, config_volume=heat_api_cfn hash=9d10d802b49c0067779ab2f74418c16e", > "2018-09-21 12:25:39,818 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/panko/etc.md5sum for config_volume /var/lib/config-data/panko/etc", > "2018-09-21 12:25:39,818 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/panko/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/panko/etc/my.cnf.d", > "2018-09-21 12:25:39,819 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/puppet-generated/keystone.md5sum for config_volume /var/lib/config-data/puppet-generated/keystone", > "2018-09-21 12:25:39,819 DEBUG: 28579 -- Got hashfile /var/lib/config-data/puppet-generated/keystone.md5sum for config_volume /var/lib/config-data/puppet-generated/keystone", > "2018-09-21 12:25:39,819 DEBUG: 28579 -- Updating config hash for keystone_db_sync, config_volume=heat_api_cfn hash=9d10d802b49c0067779ab2f74418c16e", > "2018-09-21 12:25:39,819 DEBUG: 28579 -- Updating config hash for keystone, config_volume=heat_api_cfn hash=9d10d802b49c0067779ab2f74418c16e", > "2018-09-21 12:25:39,819 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/aodh/etc/aodh.md5sum for config_volume /var/lib/config-data/aodh/etc/aodh", > "2018-09-21 12:25:39,819 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/aodh/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/aodh/etc/my.cnf.d", > "2018-09-21 12:25:39,819 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume 
/var/lib/config-data/puppet-generated/neutron", > "2018-09-21 12:25:39,819 DEBUG: 28579 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-09-21 12:25:39,819 DEBUG: 28579 -- Updating config hash for neutron_ovs_bridge, config_volume=heat_api_cfn hash=c5b9455175da34820a878b5395777a4e", > "2018-09-21 12:25:39,819 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/cinder/etc/cinder.md5sum for config_volume /var/lib/config-data/cinder/etc/cinder", > "2018-09-21 12:25:39,819 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/cinder/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/cinder/etc/my.cnf.d", > "2018-09-21 12:25:39,820 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/nova/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/nova/etc/my.cnf.d", > "2018-09-21 12:25:39,820 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/nova/etc/nova.md5sum for config_volume /var/lib/config-data/nova/etc/nova", > "2018-09-21 12:25:39,820 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/puppet-generated/glance_api.md5sum for config_volume /var/lib/config-data/puppet-generated/glance_api", > "2018-09-21 12:25:39,820 DEBUG: 28579 -- Got hashfile /var/lib/config-data/puppet-generated/glance_api.md5sum for config_volume /var/lib/config-data/puppet-generated/glance_api", > "2018-09-21 12:25:39,820 DEBUG: 28579 -- Updating config hash for glance_api_db_sync, config_volume=heat_api_cfn hash=7ecebf520a1339b7f948c4fb9a7d6564", > "2018-09-21 12:25:39,820 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/neutron/etc.md5sum for config_volume /var/lib/config-data/neutron/etc", > "2018-09-21 12:25:39,820 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/neutron/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/neutron/etc/my.cnf.d", > "2018-09-21 12:25:39,820 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/neutron/usr/share.md5sum for config_volume /var/lib/config-data/neutron/usr/share", > "2018-09-21 12:25:39,820 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/sahara/etc/sahara.md5sum for config_volume /var/lib/config-data/sahara/etc/sahara", > "2018-09-21 12:25:39,820 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/puppet-generated/horizon.md5sum for config_volume /var/lib/config-data/puppet-generated/horizon", > "2018-09-21 12:25:39,820 DEBUG: 28579 -- Got hashfile /var/lib/config-data/puppet-generated/horizon.md5sum for config_volume /var/lib/config-data/puppet-generated/horizon", > "2018-09-21 12:25:39,820 DEBUG: 28579 -- Updating config hash for horizon, config_volume=heat_api_cfn hash=56958ca8dc0d72bf6ff66083bb4bd77d", > "2018-09-21 12:25:39,822 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/puppet-generated/clustercheck.md5sum for config_volume /var/lib/config-data/puppet-generated/clustercheck", > "2018-09-21 12:25:39,822 DEBUG: 28579 -- Got hashfile /var/lib/config-data/puppet-generated/clustercheck.md5sum for config_volume /var/lib/config-data/puppet-generated/clustercheck", > "2018-09-21 12:25:39,823 DEBUG: 28579 -- Updating config hash for clustercheck, config_volume=heat_api_cfn hash=f84e1ff7263ac8172abc5d622c3cd163", > "2018-09-21 12:25:39,823 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/puppet-generated/mysql.md5sum for config_volume /var/lib/config-data/puppet-generated/mysql", > "2018-09-21 12:25:39,823 DEBUG: 28579 -- Got hashfile 
/var/lib/config-data/puppet-generated/mysql.md5sum for config_volume /var/lib/config-data/puppet-generated/mysql", > "2018-09-21 12:25:39,823 DEBUG: 28579 -- Updating config hash for mysql_restart_bundle, config_volume=heat_api_cfn hash=5f3d83586de61de02db2f5ee4c182399", > "2018-09-21 12:25:39,823 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/puppet-generated/haproxy.md5sum for config_volume /var/lib/config-data/puppet-generated/haproxy", > "2018-09-21 12:25:39,823 DEBUG: 28579 -- Got hashfile /var/lib/config-data/puppet-generated/haproxy.md5sum for config_volume /var/lib/config-data/puppet-generated/haproxy", > "2018-09-21 12:25:39,823 DEBUG: 28579 -- Updating config hash for haproxy_restart_bundle, config_volume=heat_api_cfn hash=9200d9c4be339b9075c4e48ff8a14619", > "2018-09-21 12:25:39,823 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/puppet-generated/rabbitmq.md5sum for config_volume /var/lib/config-data/puppet-generated/rabbitmq", > "2018-09-21 12:25:39,823 DEBUG: 28579 -- Got hashfile /var/lib/config-data/puppet-generated/rabbitmq.md5sum for config_volume /var/lib/config-data/puppet-generated/rabbitmq", > "2018-09-21 12:25:39,823 DEBUG: 28579 -- Updating config hash for rabbitmq_restart_bundle, config_volume=heat_api_cfn hash=76de67b00e9b5c99fcfacdab069913fd", > "2018-09-21 12:25:39,823 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/puppet-generated/horizon/etc.md5sum for config_volume /var/lib/config-data/puppet-generated/horizon/etc", > "2018-09-21 12:25:39,824 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/puppet-generated/redis.md5sum for config_volume /var/lib/config-data/puppet-generated/redis", > "2018-09-21 12:25:39,824 DEBUG: 28579 -- Got hashfile /var/lib/config-data/puppet-generated/redis.md5sum for config_volume /var/lib/config-data/puppet-generated/redis", > "2018-09-21 12:25:39,824 DEBUG: 28579 -- Updating config hash for redis_restart_bundle, config_volume=heat_api_cfn hash=aa8d39baa51d541fbe36c3740c93956a", > "2018-09-21 12:25:39,825 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-09-21 12:25:39,825 DEBUG: 28579 -- Got hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-09-21 12:25:39,826 DEBUG: 28579 -- Updating config hash for cinder_volume_restart_bundle, config_volume=heat_api_cfn hash=a450171ea1111e390191b50f6f094268", > "2018-09-21 12:25:39,826 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/puppet-generated/gnocchi.md5sum for config_volume /var/lib/config-data/puppet-generated/gnocchi", > "2018-09-21 12:25:39,826 DEBUG: 28579 -- Got hashfile /var/lib/config-data/puppet-generated/gnocchi.md5sum for config_volume /var/lib/config-data/puppet-generated/gnocchi", > "2018-09-21 12:25:39,826 DEBUG: 28579 -- Updating config hash for gnocchi_statsd, config_volume=heat_api_cfn hash=335cea2784d6acfa4bbe884998af14d7", > "2018-09-21 12:25:39,826 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-09-21 12:25:39,826 DEBUG: 28579 -- Got hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-09-21 12:25:39,826 DEBUG: 28579 -- Updating config hash for cinder_backup_restart_bundle, config_volume=heat_api_cfn 
hash=a450171ea1111e390191b50f6f094268", > "2018-09-21 12:25:39,826 DEBUG: 28579 -- Updating config hash for gnocchi_metricd, config_volume=heat_api_cfn hash=335cea2784d6acfa4bbe884998af14d7", > "2018-09-21 12:25:39,826 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/nova/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/nova/etc/my.cnf.d", > "2018-09-21 12:25:39,826 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/nova/etc/nova.md5sum for config_volume /var/lib/config-data/nova/etc/nova", > "2018-09-21 12:25:39,827 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/ceilometer/etc/ceilometer.md5sum for config_volume /var/lib/config-data/ceilometer/etc/ceilometer", > "2018-09-21 12:25:39,827 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/puppet-generated/gnocchi.md5sum for config_volume /var/lib/config-data/puppet-generated/gnocchi", > "2018-09-21 12:25:39,827 DEBUG: 28579 -- Got hashfile /var/lib/config-data/puppet-generated/gnocchi.md5sum for config_volume /var/lib/config-data/puppet-generated/gnocchi", > "2018-09-21 12:25:39,827 DEBUG: 28579 -- Updating config hash for gnocchi_api, config_volume=heat_api_cfn hash=335cea2784d6acfa4bbe884998af14d7", > "2018-09-21 12:25:39,827 DEBUG: 28579 -- Updating config hash for gnocchi_db_sync, config_volume=heat_api_cfn hash=335cea2784d6acfa4bbe884998af14d7", > "2018-09-21 12:25:39,829 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-09-21 12:25:39,829 DEBUG: 28579 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-09-21 12:25:39,829 DEBUG: 28579 -- Updating config hash for swift_container_updater, config_volume=heat_api_cfn hash=e6e4ce85c2fca7025583cca403a44526", > "2018-09-21 12:25:39,830 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh", > "2018-09-21 12:25:39,830 DEBUG: 28579 -- Got hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh", > "2018-09-21 12:25:39,830 DEBUG: 28579 -- Updating config hash for aodh_evaluator, config_volume=heat_api_cfn hash=9d19f3dd04de275e376f03a48e1fba58", > "2018-09-21 12:25:39,830 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-09-21 12:25:39,830 DEBUG: 28579 -- Got hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-09-21 12:25:39,830 DEBUG: 28579 -- Updating config hash for nova_scheduler, config_volume=heat_api_cfn hash=541aad180b0267884d87930e59bb07ca", > "2018-09-21 12:25:39,830 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-09-21 12:25:39,830 DEBUG: 28579 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-09-21 12:25:39,830 DEBUG: 28579 -- Updating config hash for swift_object_server, config_volume=heat_api_cfn hash=e6e4ce85c2fca7025583cca403a44526", > "2018-09-21 12:25:39,830 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume 
/var/lib/config-data/puppet-generated/cinder",
> "2018-09-21 12:25:39,830 DEBUG: 28579 -- Got hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder",
> "2018-09-21 12:25:39,830 DEBUG: 28579 -- Updating config hash for cinder_api, config_volume=heat_api_cfn hash=a450171ea1111e390191b50f6f094268",
> "2018-09-21 12:25:39,831 DEBUG: 28579 -- Updating config hash for swift_proxy, config_volume=heat_api_cfn hash=e6e4ce85c2fca7025583cca403a44526",
> "2018-09-21 12:25:39,831 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron",
> "2018-09-21 12:25:39,831 DEBUG: 28579 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron",
> "2018-09-21 12:25:39,831 DEBUG: 28579 -- Updating config hash for neutron_dhcp, config_volume=heat_api_cfn hash=c5b9455175da34820a878b5395777a4e",
> "2018-09-21 12:25:39,831 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/puppet-generated/heat_api.md5sum for config_volume /var/lib/config-data/puppet-generated/heat_api",
> "2018-09-21 12:25:39,831 DEBUG: 28579 -- Got hashfile /var/lib/config-data/puppet-generated/heat_api.md5sum for config_volume /var/lib/config-data/puppet-generated/heat_api",
> "2018-09-21 12:25:39,831 DEBUG: 28579 -- Updating config hash for heat_api, config_volume=heat_api_cfn hash=739f90a33781143d36be082eb446a89e",
> "2018-09-21 12:25:39,831 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift",
> "2018-09-21 12:25:39,831 DEBUG: 28579 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift",
> "2018-09-21 12:25:39,831 DEBUG: 28579 -- Updating config hash for swift_object_auditor, config_volume=heat_api_cfn hash=e6e4ce85c2fca7025583cca403a44526",
> "2018-09-21 12:25:39,831 DEBUG: 28579 -- Updating config hash for neutron_metadata_agent, config_volume=heat_api_cfn hash=c5b9455175da34820a878b5395777a4e",
> "2018-09-21 12:25:39,832 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/puppet-generated/ceilometer.md5sum for config_volume /var/lib/config-data/puppet-generated/ceilometer",
> "2018-09-21 12:25:39,832 DEBUG: 28579 -- Got hashfile /var/lib/config-data/puppet-generated/ceilometer.md5sum for config_volume /var/lib/config-data/puppet-generated/ceilometer",
> "2018-09-21 12:25:39,832 DEBUG: 28579 -- Updating config hash for ceilometer_agent_central, config_volume=heat_api_cfn hash=ea9378faf2d68d08abde73e6865c9cc8",
> "2018-09-21 12:25:39,832 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift",
> "2018-09-21 12:25:39,832 DEBUG: 28579 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift",
> "2018-09-21 12:25:39,832 DEBUG: 28579 -- Updating config hash for swift_account_replicator, config_volume=heat_api_cfn hash=e6e4ce85c2fca7025583cca403a44526",
> "2018-09-21 12:25:39,832 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh",
> "2018-09-21 12:25:39,832 DEBUG: 28579 -- Got hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh",
> "2018-09-21 12:25:39,832 DEBUG: 28579 -- Updating config hash for aodh_notifier, config_volume=heat_api_cfn hash=9d19f3dd04de275e376f03a48e1fba58",
> "2018-09-21 12:25:39,832 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova",
> "2018-09-21 12:25:39,832 DEBUG: 28579 -- Got hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova",
> "2018-09-21 12:25:39,832 DEBUG: 28579 -- Updating config hash for nova_api_cron, config_volume=heat_api_cfn hash=541aad180b0267884d87930e59bb07ca",
> "2018-09-21 12:25:39,833 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova",
> "2018-09-21 12:25:39,833 DEBUG: 28579 -- Got hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova",
> "2018-09-21 12:25:39,833 DEBUG: 28579 -- Updating config hash for nova_consoleauth, config_volume=heat_api_cfn hash=541aad180b0267884d87930e59bb07ca",
> "2018-09-21 12:25:39,833 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/puppet-generated/glance_api.md5sum for config_volume /var/lib/config-data/puppet-generated/glance_api",
> "2018-09-21 12:25:39,833 DEBUG: 28579 -- Got hashfile /var/lib/config-data/puppet-generated/glance_api.md5sum for config_volume /var/lib/config-data/puppet-generated/glance_api",
> "2018-09-21 12:25:39,833 DEBUG: 28579 -- Updating config hash for glance_api, config_volume=heat_api_cfn hash=7ecebf520a1339b7f948c4fb9a7d6564",
> "2018-09-21 12:25:39,833 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift",
> "2018-09-21 12:25:39,833 DEBUG: 28579 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift",
> "2018-09-21 12:25:39,833 DEBUG: 28579 -- Updating config hash for swift_account_reaper, config_volume=heat_api_cfn hash=e6e4ce85c2fca7025583cca403a44526",
> "2018-09-21 12:25:39,833 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/puppet-generated/ceilometer.md5sum for config_volume /var/lib/config-data/puppet-generated/ceilometer",
> "2018-09-21 12:25:39,833 DEBUG: 28579 -- Got hashfile /var/lib/config-data/puppet-generated/ceilometer.md5sum for config_volume /var/lib/config-data/puppet-generated/ceilometer",
> "2018-09-21 12:25:39,833 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/puppet-generated/panko.md5sum for config_volume /var/lib/config-data/puppet-generated/panko",
> "2018-09-21 12:25:39,833 DEBUG: 28579 -- Got hashfile /var/lib/config-data/puppet-generated/panko.md5sum for config_volume /var/lib/config-data/puppet-generated/panko",
> "2018-09-21 12:25:39,833 DEBUG: 28579 -- Updating config hash for ceilometer_agent_notification, config_volume=heat_api_cfn hash=ea9378faf2d68d08abde73e6865c9cc8-998620b8a5d93d3ad78af9c2bf4f0f41",
> "2018-09-21 12:25:39,834 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova",
> "2018-09-21 12:25:39,834 DEBUG: 28579 -- Got hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova",
> "2018-09-21 12:25:39,834 DEBUG: 28579 -- Updating config hash for nova_vnc_proxy, config_volume=heat_api_cfn hash=541aad180b0267884d87930e59bb07ca",
> "2018-09-21 12:25:39,834 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift",
> "2018-09-21 12:25:39,834 DEBUG: 28579 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift",
> "2018-09-21 12:25:39,834 DEBUG: 28579 -- Updating config hash for swift_rsync, config_volume=heat_api_cfn hash=e6e4ce85c2fca7025583cca403a44526",
> "2018-09-21 12:25:39,834 DEBUG: 28579 -- Updating config hash for nova_api, config_volume=heat_api_cfn hash=541aad180b0267884d87930e59bb07ca",
> "2018-09-21 12:25:39,834 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh",
> "2018-09-21 12:25:39,834 DEBUG: 28579 -- Got hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh",
> "2018-09-21 12:25:39,834 DEBUG: 28579 -- Updating config hash for aodh_api, config_volume=heat_api_cfn hash=9d19f3dd04de275e376f03a48e1fba58",
> "2018-09-21 12:25:39,835 DEBUG: 28579 -- Updating config hash for nova_metadata, config_volume=heat_api_cfn hash=541aad180b0267884d87930e59bb07ca",
> "2018-09-21 12:25:39,835 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/puppet-generated/heat.md5sum for config_volume /var/lib/config-data/puppet-generated/heat",
> "2018-09-21 12:25:39,835 DEBUG: 28579 -- Got hashfile /var/lib/config-data/puppet-generated/heat.md5sum for config_volume /var/lib/config-data/puppet-generated/heat",
> "2018-09-21 12:25:39,835 DEBUG: 28579 -- Updating config hash for heat_engine, config_volume=heat_api_cfn hash=3056ef010fb6e2fffff50770bc19f3f7",
> "2018-09-21 12:25:39,835 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift",
> "2018-09-21 12:25:39,835 DEBUG: 28579 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift",
> "2018-09-21 12:25:39,835 DEBUG: 28579 -- Updating config hash for swift_container_server, config_volume=heat_api_cfn hash=e6e4ce85c2fca7025583cca403a44526",
> "2018-09-21 12:25:39,835 DEBUG: 28579 -- Updating config hash for swift_object_replicator, config_volume=heat_api_cfn hash=e6e4ce85c2fca7025583cca403a44526",
> "2018-09-21 12:25:39,835 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron",
> "2018-09-21 12:25:39,835 DEBUG: 28579 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron",
> "2018-09-21 12:25:39,835 DEBUG: 28579 -- Updating config hash for neutron_l3_agent, config_volume=heat_api_cfn hash=c5b9455175da34820a878b5395777a4e",
> "2018-09-21 12:25:39,836 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder",
> "2018-09-21 12:25:39,836 DEBUG: 28579 -- Got hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder",
> "2018-09-21 12:25:39,836 DEBUG: 28579 -- Updating config hash for cinder_scheduler, config_volume=heat_api_cfn hash=a450171ea1111e390191b50f6f094268",
> "2018-09-21 12:25:39,836 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova",
> "2018-09-21 12:25:39,836 DEBUG: 28579 -- Got hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova",
> "2018-09-21 12:25:39,836 DEBUG: 28579 -- Updating config hash for nova_conductor, config_volume=heat_api_cfn hash=541aad180b0267884d87930e59bb07ca",
> "2018-09-21 12:25:39,836 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/puppet-generated/heat_api_cfn.md5sum for config_volume /var/lib/config-data/puppet-generated/heat_api_cfn",
> "2018-09-21 12:25:39,836 DEBUG: 28579 -- Got hashfile /var/lib/config-data/puppet-generated/heat_api_cfn.md5sum for config_volume /var/lib/config-data/puppet-generated/heat_api_cfn",
> "2018-09-21 12:25:39,836 DEBUG: 28579 -- Updating config hash for heat_api_cfn, config_volume=heat_api_cfn hash=c1f4ee759f41dbf0776c07d6c94159ee",
> "2018-09-21 12:25:39,836 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/puppet-generated/sahara.md5sum for config_volume /var/lib/config-data/puppet-generated/sahara",
> "2018-09-21 12:25:39,836 DEBUG: 28579 -- Got hashfile /var/lib/config-data/puppet-generated/sahara.md5sum for config_volume /var/lib/config-data/puppet-generated/sahara",
> "2018-09-21 12:25:39,836 DEBUG: 28579 -- Updating config hash for sahara_api, config_volume=heat_api_cfn hash=3471575b8433343bdb96696e3b5509df",
> "2018-09-21 12:25:39,837 DEBUG: 28579 -- Updating config hash for sahara_engine, config_volume=heat_api_cfn hash=3471575b8433343bdb96696e3b5509df",
> "2018-09-21 12:25:39,837 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron",
> "2018-09-21 12:25:39,837 DEBUG: 28579 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron",
> "2018-09-21 12:25:39,837 DEBUG: 28579 -- Updating config hash for neutron_ovs_agent, config_volume=heat_api_cfn hash=c5b9455175da34820a878b5395777a4e",
> "2018-09-21 12:25:39,837 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder",
> "2018-09-21 12:25:39,837 DEBUG: 28579 -- Got hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder",
> "2018-09-21 12:25:39,837 DEBUG: 28579 -- Updating config hash for cinder_api_cron, config_volume=heat_api_cfn hash=a450171ea1111e390191b50f6f094268",
> "2018-09-21 12:25:39,837 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift",
> "2018-09-21 12:25:39,837 DEBUG: 28579 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift",
> "2018-09-21 12:25:39,837 DEBUG: 28579 -- Updating config hash for swift_account_auditor, config_volume=heat_api_cfn hash=e6e4ce85c2fca7025583cca403a44526",
> "2018-09-21 12:25:39,837 DEBUG: 28579 -- Updating config hash for swift_container_replicator, config_volume=heat_api_cfn hash=e6e4ce85c2fca7025583cca403a44526",
> "2018-09-21 12:25:39,838 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift",
> "2018-09-21 12:25:39,838 DEBUG: 28579 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift",
> "2018-09-21 12:25:39,838 DEBUG: 28579 -- Updating config hash for swift_object_updater, config_volume=heat_api_cfn hash=e6e4ce85c2fca7025583cca403a44526",
> "2018-09-21 12:25:39,838 DEBUG: 28579 -- Updating config hash for swift_object_expirer, config_volume=heat_api_cfn hash=e6e4ce85c2fca7025583cca403a44526",
> "2018-09-21 12:25:39,838 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/puppet-generated/heat_api.md5sum for config_volume /var/lib/config-data/puppet-generated/heat_api",
> "2018-09-21 12:25:39,838 DEBUG: 28579 -- Got hashfile /var/lib/config-data/puppet-generated/heat_api.md5sum for config_volume /var/lib/config-data/puppet-generated/heat_api",
> "2018-09-21 12:25:39,838 DEBUG: 28579 -- Updating config hash for heat_api_cron, config_volume=heat_api_cfn hash=739f90a33781143d36be082eb446a89e",
> "2018-09-21 12:25:39,838 DEBUG: 28579 -- Updating config hash for swift_container_auditor, config_volume=heat_api_cfn hash=e6e4ce85c2fca7025583cca403a44526",
> "2018-09-21 12:25:39,838 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/puppet-generated/panko.md5sum for config_volume /var/lib/config-data/puppet-generated/panko",
> "2018-09-21 12:25:39,838 DEBUG: 28579 -- Got hashfile /var/lib/config-data/puppet-generated/panko.md5sum for config_volume /var/lib/config-data/puppet-generated/panko",
> "2018-09-21 12:25:39,839 DEBUG: 28579 -- Updating config hash for panko_api, config_volume=heat_api_cfn hash=998620b8a5d93d3ad78af9c2bf4f0f41",
> "2018-09-21 12:25:39,839 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh",
> "2018-09-21 12:25:39,839 DEBUG: 28579 -- Got hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh",
> "2018-09-21 12:25:39,839 DEBUG: 28579 -- Updating config hash for aodh_listener, config_volume=heat_api_cfn hash=9d19f3dd04de275e376f03a48e1fba58",
> "2018-09-21 12:25:39,839 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron",
> "2018-09-21 12:25:39,839 DEBUG: 28579 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron",
> "2018-09-21 12:25:39,839 DEBUG: 28579 -- Updating config hash for neutron_api, config_volume=heat_api_cfn hash=c5b9455175da34820a878b5395777a4e",
> "2018-09-21 12:25:39,839 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift",
> "2018-09-21 12:25:39,839 DEBUG: 28579 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift",
> "2018-09-21 12:25:39,839 DEBUG: 28579 -- Updating config hash for swift_account_server, config_volume=heat_api_cfn hash=e6e4ce85c2fca7025583cca403a44526",
> "2018-09-21 12:25:39,839 DEBUG: 28579 -- Looking for hashfile /var/lib/config-data/puppet-generated/crond.md5sum for config_volume /var/lib/config-data/puppet-generated/crond",
> "2018-09-21 12:25:39,839 DEBUG: 28579 -- Got hashfile /var/lib/config-data/puppet-generated/crond.md5sum for config_volume /var/lib/config-data/puppet-generated/crond",
> "2018-09-21 12:25:39,839 DEBUG: 28579 -- Updating config hash for logrotate_crond, config_volume=heat_api_cfn hash=6f2a5e23a896d70ebbc2c66d87cd9266"
> ]
>}
>
>TASK [Start containers for step 1] *********************************************
>Friday 21 September 2018 08:25:41 -0400 (0:00:01.445) 0:09:03.984 ******
>ok: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>ok: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>ok: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>
>
>TASK [Debug output for task which failed: Start containers for step 1] *********
>Friday 21 September 2018 08:26:10 -0400 (0:00:28.813) 0:09:32.798 ******
>ok: [controller-0] => {
> "failed_when_result": false,
> "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [
> "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-cinder-backup ... ",
> "2018-09-20.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-cinder-backup",
> "378837c0e24a: Already exists",
> "e17262bc2341: Already exists",
> "86a0e618a180: Already exists",
> "dfa58d50e0a3: Already exists",
> "020a12d8eacf: Already exists",
> "8046f97e0c2c: Pulling fs layer",
> "8046f97e0c2c: Download complete",
> "8046f97e0c2c: Pull complete",
> "Digest: sha256:45d994f44974e2cf2f39795b63c0f801de71f3fc49e094cf7e84164c9612162e",
> "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-20.1",
> "",
> "stderr: ",
> "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-cinder-volume ... ",
> "2018-09-20.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-cinder-volume",
> "0843b29339be: Pulling fs layer",
> "0843b29339be: Verifying Checksum",
> "0843b29339be: Download complete",
> "0843b29339be: Pull complete",
> "Digest: sha256:66dc71cfc7bd0553697cc551929493397dc65554836ac59c55de6b8069626701",
> "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-20.1",
> "stdout: ",
> "stdout: 5c6f0b86757cd940b4961bde724af115eb3590d743f1b51cdbee69c0c6b5ec50",
> "stdout: 44080d648aeefa2351b16cb6db45ef364672cc80b138330a105226a630041d4a",
> "stdout: Installing MariaDB/MySQL system tables in '/var/lib/mysql' ...",
> "OK",
> "Filling help tables...",
> "Creating OpenGIS required SP-s...",
> "To start mysqld at boot time you have to copy",
> "support-files/mysql.server to the right place for your system",
> "PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER !",
> "To do so, start the server, then issue the following commands:",
> "'/usr/bin/mysqladmin' -u root password 'new-password'",
> "'/usr/bin/mysqladmin' -u root -h controller-0 password 'new-password'",
> "Alternatively you can run:",
> "'/usr/bin/mysql_secure_installation'",
> "which will also give you the option of removing the test",
> "databases and anonymous user created by default. This is",
> "strongly recommended for production servers.",
> "See the MariaDB Knowledgebase at http://mariadb.com/kb or the",
> "MySQL manual for more instructions.",
> "You can start the MariaDB daemon with:",
> "cd '/usr' ; /usr/bin/mysqld_safe --datadir='/var/lib/mysql'",
> "You can test the MariaDB daemon with mysql-test-run.pl",
> "cd '/usr/mysql-test' ; perl mysql-test-run.pl",
> "Please report any problems at http://mariadb.org/jira",
> "The latest information about MariaDB is available at http://mariadb.org/.",
> "You can find additional information about the MySQL part at:",
> "http://dev.mysql.com",
> "Consider joining MariaDB's strong and vibrant community:",
> "https://mariadb.org/get-involved/",
> "180921 12:26:01 mysqld_safe Logging to '/var/log/mariadb/mariadb.log'.",
> "180921 12:26:01 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql",
> "spawn mysql_secure_installation",
> "NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB",
> " SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP CAREFULLY!",
> "In order to log into MariaDB to secure it, we'll need the current",
> "password for the root user. If you've just installed MariaDB, and",
> "you haven't set the root password yet, the password will be blank,",
> "so you should just press enter here.",
> "Enter current password for root (enter for none): ",
> "OK, successfully used password, moving on...",
> "Setting the root password ensures that nobody can log into the MariaDB",
> "root user without the proper authorisation.",
> "Set root password? [Y/n] y",
> "New password: ",
> "Re-enter new password: ",
> "Password updated successfully!",
> "Reloading privilege tables..",
> " ... Success!",
> "By default, a MariaDB installation has an anonymous user, allowing anyone",
> "to log into MariaDB without having to have a user account created for",
> "them. This is intended only for testing, and to make the installation",
> "go a bit smoother. You should remove them before moving into a",
> "production environment.",
> "Remove anonymous users? [Y/n] y",
> "Normally, root should only be allowed to connect from 'localhost'. This",
> "ensures that someone cannot guess at the root password from the network.",
> "Disallow root login remotely? [Y/n] n",
> " ... skipping.",
> "By default, MariaDB comes with a database named 'test' that anyone can",
> "access. This is also intended only for testing, and should be removed",
> "before moving into a production environment.",
> "Remove test database and access to it? [Y/n] y",
> " - Dropping test database...",
> " - Removing privileges on test database...",
> "Reloading the privilege tables will ensure that all changes made so far",
> "will take effect immediately.",
> "Reload privilege tables now? [Y/n] y",
> "Cleaning up...",
> "All done! If you've completed all of the above steps, your MariaDB",
> "installation should now be secure.",
> "Thanks for using MariaDB!",
> "180921 12:26:04 mysqld_safe mysqld from pid file /var/lib/mysql/mariadb.pid ended",
> "180921 12:26:05 mysqld_safe Logging to '/var/log/mariadb/mariadb.log'.",
> "180921 12:26:05 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql",
> "mysqld is alive",
> "180921 12:26:08 mysqld_safe mysqld from pid file /var/lib/mysql/mariadb.pid ended",
> "stderr: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json",
> "INFO:__main__:Validating config file",
> "INFO:__main__:Kolla config strategy set to: COPY_ALWAYS",
> "INFO:__main__:Copying service configuration files",
> "INFO:__main__:Copying /dev/null to /etc/libqb/force-filesystem-sockets",
> "INFO:__main__:Setting permission for /etc/libqb/force-filesystem-sockets",
> "INFO:__main__:Deleting /etc/my.cnf.d/galera.cnf",
> "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/my.cnf.d/galera.cnf to /etc/my.cnf.d/galera.cnf",
> "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/sysconfig/clustercheck to /etc/sysconfig/clustercheck",
> "INFO:__main__:Copying /var/lib/kolla/config_files/src/root/.my.cnf to /root/.my.cnf",
> "INFO:__main__:Writing out command to execute",
> "2018-09-21 12:25:48 140143147890880 [Warning] option 'open_files_limit': unsigned value 18446744073709551615 adjusted to 4294967295",
> "2018-09-21 12:25:48 140143147890880 [Note] /usr/libexec/mysqld (mysqld 10.1.20-MariaDB) starting as process 42 ...",
> "2018-09-21 12:25:52 140233177684160 [Warning] option 'open_files_limit': unsigned value 18446744073709551615 adjusted to 4294967295",
> "2018-09-21 12:25:52 140233177684160 [Note] /usr/libexec/mysqld (mysqld 10.1.20-MariaDB) starting as process 71 ...",
> "2018-09-21 12:25:57 140709830551744 [Warning] option 'open_files_limit': unsigned value 18446744073709551615 adjusted to 4294967295",
> "2018-09-21 12:25:57 140709830551744 [Note] /usr/libexec/mysqld (mysqld 10.1.20-MariaDB) starting as process 101 ...",
> "/usr/bin/mysqld_safe: line 755: ulimit: -1: invalid option",
> "ulimit: usage: ulimit [-SHacdefilmnpqrstuvx] [limit]"
> ]
>}
>ok: [compute-0] => {
> "failed_when_result": false,
> "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": []
>}
>ok: [ceph-0] => {
> "failed_when_result": false,
> "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": []
>}
>
>TASK [Check if /var/lib/docker-puppet/docker-puppet-tasks1.json exists] ********
>Friday 21 September 2018 08:26:10 -0400 (0:00:00.172) 0:09:32.971 ******
>ok: [controller-0] => {"changed": false, "stat": {"exists": false}}
>ok: [compute-0] => {"changed": false, "stat": {"exists": false}}
>ok: [ceph-0] => {"changed": false, "stat": {"exists": false}}
>
>TASK [Run docker-puppet tasks (bootstrap tasks) for step 1] ********************
>Friday 21 September 2018 08:26:10 -0400 (0:00:00.294) 0:09:33.265 ******
>skipping: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>
>TASK [Debug output for task which failed: Run docker-puppet tasks (bootstrap tasks) for step 1] ***
>Friday 21 September 2018 08:26:10 -0400 (0:00:00.124) 0:09:33.389 ******
>skipping: [controller-0] => {}
>skipping: [compute-0] => {}
>skipping: [ceph-0] => {}
>
>PLAY [External deployment step 2] **********************************************
>
>TASK [set blacklisted_hostnames] ***********************************************
>Friday 21 September 2018 08:26:11 -0400 (0:00:00.104) 0:09:33.494 ******
>skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [create ceph-ansible temp dirs] *******************************************
>Friday 21 September 2018 08:26:11 -0400 (0:00:00.035) 0:09:33.529 ******
>skipping: [undercloud] => (item=/var/lib/mistral/overcloud/ceph-ansible/group_vars) => {"changed": false, "item": "/var/lib/mistral/overcloud/ceph-ansible/group_vars", "skip_reason": "Conditional result was False"}
>skipping: [undercloud] => (item=/var/lib/mistral/overcloud/ceph-ansible/host_vars) => {"changed": false, "item": "/var/lib/mistral/overcloud/ceph-ansible/host_vars", "skip_reason": "Conditional result was False"}
>skipping: [undercloud] => (item=/var/lib/mistral/overcloud/ceph-ansible/fetch_dir) => {"changed": false, "item": "/var/lib/mistral/overcloud/ceph-ansible/fetch_dir", "skip_reason": "Conditional result was False"}
>
>TASK [generate inventory] ******************************************************
>Friday 21 September 2018 08:26:11 -0400 (0:00:00.058) 0:09:33.587 ******
>skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [set ceph-ansible group vars all] *****************************************
>Friday 21 September 2018 08:26:11 -0400 (0:00:00.034) 0:09:33.622 ******
>skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [generate ceph-ansible group vars all] ************************************
>Friday 21 September 2018 08:26:11 -0400 (0:00:00.035) 0:09:33.657 ******
>skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [set ceph-ansible extra vars] *********************************************
>Friday 21 September 2018 08:26:11 -0400 (0:00:00.034) 0:09:33.692 ******
>skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [generate ceph-ansible extra vars] ****************************************
>Friday 21 September 2018 08:26:11 -0400 (0:00:00.036) 0:09:33.729 ******
>skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [generate nodes-uuid data file] *******************************************
>Friday 21 September 2018 08:26:11 -0400 (0:00:00.044) 0:09:33.773 ******
>skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [generate nodes-uuid playbook] ********************************************
>Friday 21 September 2018 08:26:11 -0400 (0:00:00.033) 0:09:33.807 ******
>skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [run nodes-uuid] **********************************************************
>Friday 21 September 2018 08:26:11 -0400 (0:00:00.032) 0:09:33.839 ******
>changed: [undercloud] => {"changed": true, "cmd": "ANSIBLE_LOG_PATH=\"/var/lib/mistral/overcloud/ceph-ansible/nodes_uuid_command.log\" ANSIBLE_CONFIG=\"/var/lib/mistral/overcloud/ansible.cfg\" ANSIBLE_REMOTE_TEMP=/tmp/nodes_uuid_tmp ansible-playbook --private-key /var/lib/mistral/overcloud/ssh_private_key -i /var/lib/mistral/overcloud/ceph-ansible/inventory.yml /var/lib/mistral/overcloud/ceph-ansible/nodes_uuid_playbook.yml", "delta": "0:00:02.444684", "end": "2018-09-21 08:26:13.945384", "rc": 0, "start": "2018-09-21 08:26:11.500700", "stderr": "", "stderr_lines": [], "stdout": "\nPLAY [all] *********************************************************************\n\nTASK [set nodes data] **********************************************************\nFriday 21 September 2018 08:26:12 -0400 (0:00:00.084) 0:00:00.084 ****** \nok: [ceph-0]\nok: [compute-0]\nok: [controller-0]\n\nTASK [register machine id] *****************************************************\nFriday 21 September 2018 08:26:12 -0400 (0:00:00.068) 0:00:00.153 ****** \nchanged: [compute-0]\nchanged: [ceph-0]\nchanged: [controller-0]\n\nTASK [generate host vars from nodes data] **************************************\nFriday 21 September 2018 08:26:13 -0400 (0:00:00.325) 0:00:00.479 ****** \nchanged: [ceph-0 -> localhost]\nchanged: [compute-0 -> localhost]\nchanged: [controller-0 -> localhost]\n\nPLAY RECAP *********************************************************************\nceph-0 : ok=3 changed=2 unreachable=0 failed=0 \ncompute-0 : ok=3 changed=2 unreachable=0 failed=0 \ncontroller-0 : ok=3 changed=2 unreachable=0 failed=0 \n\nFriday 21 September 2018 08:26:13 -0400 (0:00:00.645) 0:00:01.125 ****** \n=============================================================================== ", "stdout_lines": ["", "PLAY [all] *********************************************************************", "", "TASK [set nodes data] **********************************************************", "Friday 21 September 2018 08:26:12 -0400 (0:00:00.084) 0:00:00.084 ****** ", "ok: [ceph-0]", "ok: [compute-0]", "ok: [controller-0]", "", "TASK [register machine id] *****************************************************", "Friday 21 September 2018 08:26:12 -0400 (0:00:00.068) 0:00:00.153 ****** ", "changed: [compute-0]", "changed: [ceph-0]", "changed: [controller-0]", "", "TASK [generate host vars from nodes data] **************************************", "Friday 21 September 2018 08:26:13 -0400 (0:00:00.325) 0:00:00.479 ****** ", "changed: [ceph-0 -> localhost]", "changed: [compute-0 -> localhost]", "changed: [controller-0 -> localhost]", "", "PLAY RECAP *********************************************************************", "ceph-0 : ok=3 changed=2 unreachable=0 failed=0 ", "compute-0 : ok=3 changed=2 unreachable=0 failed=0 ", "controller-0 : ok=3 changed=2 unreachable=0 failed=0 ", "", "Friday 21 September 2018 08:26:13 -0400 (0:00:00.645) 0:00:01.125 ****** ", "=============================================================================== "]}
>
>TASK [set ceph-ansible params from Heat] ***************************************
>Friday 21 September 2018 08:26:13 -0400 (0:00:02.624) 0:09:36.464 ******
>ok: [undercloud] => {"ansible_facts": {"ceph_ansible_playbook_verbosity": 2, "ceph_ansible_playbooks_param": ["default"]}, "changed": false}
>
>TASK [set ceph-ansible playbooks] **********************************************
>Friday 21 September 2018 08:26:14 -0400 (0:00:00.048) 0:09:36.513 ******
>ok: [undercloud] => {"ansible_facts": {"ceph_ansible_playbooks": ["/usr/share/ceph-ansible/site-docker.yml.sample"]}, "changed": false}
>
>TASK [set ceph-ansible command] ************************************************
>Friday 21 September 2018 08:26:14 -0400 (0:00:00.048) 0:09:36.562 ******
>ok: [undercloud] => {"ansible_facts": {"ceph_ansible_command": "ANSIBLE_ACTION_PLUGINS=/usr/share/ceph-ansible/plugins/actions/ ANSIBLE_CALLBACK_PLUGINS=/usr/share/ceph-ansible/plugins/callback/ ANSIBLE_ROLES_PATH=/usr/share/ceph-ansible/roles/ ANSIBLE_LOG_PATH=\"/var/lib/mistral/overcloud/ceph-ansible/ceph_ansible_command.log\" ANSIBLE_LIBRARY=/usr/share/ceph-ansible/library/ ANSIBLE_CONFIG=/usr/share/ceph-ansible/ansible.cfg ANSIBLE_REMOTE_TEMP=/tmp/ceph_ansible_tmp ANSIBLE_FORKS=25 ansible-playbook --private-key /var/lib/mistral/overcloud/ssh_private_key -vv --skip-tags package-install,with_pkg -i /var/lib/mistral/overcloud/ceph-ansible/inventory.yml --extra-vars @/var/lib/mistral/overcloud/ceph-ansible/extra_vars.yml"}, "changed": false}
>
>TASK [run ceph-ansible] ********************************************************
>Friday 21 September 2018 08:26:14 -0400 (0:00:00.050) 0:09:36.612 ******
>changed: [undercloud] => (item=/usr/share/ceph-ansible/site-docker.yml.sample) => {"changed": true, "cmd": "ANSIBLE_ACTION_PLUGINS=/usr/share/ceph-ansible/plugins/actions/ ANSIBLE_CALLBACK_PLUGINS=/usr/share/ceph-ansible/plugins/callback/ ANSIBLE_ROLES_PATH=/usr/share/ceph-ansible/roles/ ANSIBLE_LOG_PATH=\"/var/lib/mistral/overcloud/ceph-ansible/ceph_ansible_command.log\" ANSIBLE_LIBRARY=/usr/share/ceph-ansible/library/ ANSIBLE_CONFIG=/usr/share/ceph-ansible/ansible.cfg ANSIBLE_REMOTE_TEMP=/tmp/ceph_ansible_tmp ANSIBLE_FORKS=25 ansible-playbook --private-key /var/lib/mistral/overcloud/ssh_private_key -vv --skip-tags package-install,with_pkg -i /var/lib/mistral/overcloud/ceph-ansible/inventory.yml --extra-vars @/var/lib/mistral/overcloud/ceph-ansible/extra_vars.yml /usr/share/ceph-ansible/site-docker.yml.sample", "delta": "0:04:16.181844", "end": "2018-09-21 08:30:30.457127", "item": "/usr/share/ceph-ansible/site-docker.yml.sample", "rc": 0, "start": "2018-09-21 08:26:14.275283", "stderr": "[DEPRECATION WARNING]: The use of 'static' has been deprecated. Use \n'import_tasks' for static inclusion, or 'include_tasks' for dynamic inclusion. \nThis feature will be removed in a future release. Deprecation warnings can be \ndisabled by setting deprecation_warnings=False in ansible.cfg.\n [WARNING]: Could not match supplied host pattern, ignoring: agents\n [WARNING]: Could not match supplied host pattern, ignoring: mdss\n [WARNING]: Could not match supplied host pattern, ignoring: rgws\n [WARNING]: Could not match supplied host pattern, ignoring: nfss\n [WARNING]: Could not match supplied host pattern, ignoring: restapis\n [WARNING]: Could not match supplied host pattern, ignoring: rbdmirrors\n [WARNING]: Could not match supplied host pattern, ignoring: iscsigws\n [WARNING]: Could not match supplied host pattern, ignoring: iscsi-gws\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|search` instead use `result is search`. This feature will be removed in\n version 2.9. Deprecation warnings can be disabled by setting \ndeprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|search` instead use `result is search`. This feature will be removed in\n version 2.9. Deprecation warnings can be disabled by setting \ndeprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|search` instead use `result is search`. This feature will be removed in\n version 2.9. 
Deprecation warnings can be disabled by setting \ndeprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n [WARNING]: when statements should not include jinja2 templating delimiters\nsuch as {{ }} or {% %}. Found: {{ inventory_hostname ==\ngroups[mon_group_name][0] }}\n [WARNING]: when statements should not include jinja2 templating delimiters\nsuch as {{ }} or {% %}. Found: {{ inventory_hostname ==\ngroups[mon_group_name][0] }}\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n [WARNING]: when statements should not include jinja2 templating delimiters\nsuch as {{ }} or {% %}. Found: {{ groups.get(mgr_group_name, []) | length > 0\n}}\n [WARNING]: when statements should not include jinja2 templating delimiters\nsuch as {{ }} or {% %}. Found: {{ groups.get(mgr_group_name, []) | length > 0\n}}\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|search` instead use `result is search`. This feature will be removed in\n version 2.9. Deprecation warnings can be disabled by setting \ndeprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|search` instead use `result is search`. This feature will be removed in\n version 2.9. Deprecation warnings can be disabled by setting \ndeprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|search` instead use `result is search`. This feature will be removed in\n version 2.9. Deprecation warnings can be disabled by setting \ndeprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. 
Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|search` instead use `result is search`. This feature will be removed in\n version 2.9. Deprecation warnings can be disabled by setting \ndeprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|search` instead use `result is search`. This feature will be removed in\n version 2.9. Deprecation warnings can be disabled by setting \ndeprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|search` instead use `result is search`. This feature will be removed in\n version 2.9. Deprecation warnings can be disabled by setting \ndeprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. 
Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|search` instead use `result is search`. This feature will be removed in\n version 2.9. Deprecation warnings can be disabled by setting \ndeprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|search` instead use `result is search`. This feature will be removed in\n version 2.9. Deprecation warnings can be disabled by setting \ndeprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|search` instead use `result is search`. This feature will be removed in\n version 2.9. Deprecation warnings can be disabled by setting \ndeprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.", "stderr_lines": ["[DEPRECATION WARNING]: The use of 'static' has been deprecated. Use ", "'import_tasks' for static inclusion, or 'include_tasks' for dynamic inclusion. ", "This feature will be removed in a future release. 
Deprecation warnings can be ", "disabled by setting deprecation_warnings=False in ansible.cfg.", " [WARNING]: Could not match supplied host pattern, ignoring: agents", " [WARNING]: Could not match supplied host pattern, ignoring: mdss", " [WARNING]: Could not match supplied host pattern, ignoring: rgws", " [WARNING]: Could not match supplied host pattern, ignoring: nfss", " [WARNING]: Could not match supplied host pattern, ignoring: restapis", " [WARNING]: Could not match supplied host pattern, ignoring: rbdmirrors", " [WARNING]: Could not match supplied host pattern, ignoring: iscsigws", " [WARNING]: Could not match supplied host pattern, ignoring: iscsi-gws", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|search` instead use `result is search`. This feature will be removed in", " version 2.9. Deprecation warnings can be disabled by setting ", "deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|search` instead use `result is search`. This feature will be removed in", " version 2.9. Deprecation warnings can be disabled by setting ", "deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|search` instead use `result is search`. This feature will be removed in", " version 2.9. Deprecation warnings can be disabled by setting ", "deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", " [WARNING]: when statements should not include jinja2 templating delimiters", "such as {{ }} or {% %}. Found: {{ inventory_hostname ==", "groups[mon_group_name][0] }}", " [WARNING]: when statements should not include jinja2 templating delimiters", "such as {{ }} or {% %}. Found: {{ inventory_hostname ==", "groups[mon_group_name][0] }}", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. 
Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", " [WARNING]: when statements should not include jinja2 templating delimiters", "such as {{ }} or {% %}. Found: {{ groups.get(mgr_group_name, []) | length > 0", "}}", " [WARNING]: when statements should not include jinja2 templating delimiters", "such as {{ }} or {% %}. Found: {{ groups.get(mgr_group_name, []) | length > 0", "}}", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|search` instead use `result is search`. This feature will be removed in", " version 2.9. Deprecation warnings can be disabled by setting ", "deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|search` instead use `result is search`. This feature will be removed in", " version 2.9. Deprecation warnings can be disabled by setting ", "deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|search` instead use `result is search`. This feature will be removed in", " version 2.9. Deprecation warnings can be disabled by setting ", "deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|search` instead use `result is search`. This feature will be removed in", " version 2.9. Deprecation warnings can be disabled by setting ", "deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|search` instead use `result is search`. This feature will be removed in", " version 2.9. 
Deprecation warnings can be disabled by setting ", "deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|search` instead use `result is search`. This feature will be removed in", " version 2.9. Deprecation warnings can be disabled by setting ", "deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|search` instead use `result is search`. This feature will be removed in", " version 2.9. Deprecation warnings can be disabled by setting ", "deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|search` instead use `result is search`. This feature will be removed in", " version 2.9. Deprecation warnings can be disabled by setting ", "deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|search` instead use `result is search`. This feature will be removed in", " version 2.9. Deprecation warnings can be disabled by setting ", "deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. 
Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg."], "stdout": "ansible-playbook 2.5.7\n config file = /usr/share/ceph-ansible/ansible.cfg\n configured module search path = [u'/usr/share/ceph-ansible/library']\n ansible python module location = /usr/lib/python2.7/site-packages/ansible\n executable location = /usr/bin/ansible-playbook\n python version = 2.7.5 (default, May 31 2018, 09:41:32) [GCC 4.8.5 20150623 (Red Hat 4.8.5-28)]\nUsing /usr/share/ceph-ansible/ansible.cfg as config file\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml\nstatically imported: 
statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml
[the twenty-one ceph-defaults and ceph-docker-common task files above are re-imported, in the same order, ahead of each role's task files below; the repeated blocks are elided]
statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/check_mandatory_vars.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/secure_cluster.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/configure_ceph_command_aliases.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/fetch_configs.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/set_osd_pool_default_pg_num.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/calamari.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/common.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/docker/main.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/docker/start_docker_mgr.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/build_devices.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_gpt.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-mds/tasks/common.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-mds/tasks/non_containerized.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-mds/tasks/containerized.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-rgw/tasks/common.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/common.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/pre_requisite_non_container.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/pre_requisite_container.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/create_rgw_nfs_user.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/ganesha_selinux_fix.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/start_nfs.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/common.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/pre_requisite.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/start_rbd_mirror.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/configure_mirroring.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/docker/main.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/docker/start_docker_rbd_mirror.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/pre_requisite.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/start_restapi.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/docker/main.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/docker/copy_configs.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/docker/start_docker_restapi.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-client/tasks/pre_requisite.yml
statically imported: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml
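For context, the "statically imported" lines above come from parse-time imports; a minimal illustration of the difference between static and dynamic task inclusion (the file name is illustrative):

    # static: resolved while the playbook is parsed, hence listed up front in this log
    - import_tasks: facts.yml

    # dynamic: resolved only when the task executes, so not pre-listed at parse time
    - include_tasks: facts.yml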
PLAYBOOK: site-docker.yml.sample ***********************************************
12 plays in /usr/share/ceph-ansible/site-docker.yml.sample

PLAY [mons,agents,osds,mdss,rgws,nfss,restapis,rbdmirrors,clients,iscsigws,iscsi-gws,mgrs] ***

TASK [gather facts] ************************************************************
task path: /usr/share/ceph-ansible/site-docker.yml.sample:25
Friday 21 September 2018 08:26:18 -0400 (0:00:00.240) 0:00:00.240 ******
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [gather and delegate facts] ***********************************************
task path: /usr/share/ceph-ansible/site-docker.yml.sample:30
Friday 21 September 2018 08:26:18 -0400 (0:00:00.098) 0:00:00.338 ******
ok: [controller-0 -> 192.168.24.18] => (item=controller-0)
ok: [controller-0 -> 192.168.24.8] => (item=compute-0)
ok: [controller-0 -> 192.168.24.6] => (item=ceph-0)

TASK [check if it is atomic host] **********************************************
task path: /usr/share/ceph-ansible/site-docker.yml.sample:39
Friday 21 September 2018 08:26:31 -0400 (0:00:13.319) 0:00:13.658 ******
ok: [controller-0] => {"changed": false, "stat": {"exists": false}}
ok: [ceph-0] => {"changed": false, "stat": {"exists": false}}
ok: [compute-0] => {"changed": false, "stat": {"exists": false}}

TASK [set_fact is_atomic] ******************************************************
task path: /usr/share/ceph-ansible/site-docker.yml.sample:46
Friday 21 September 2018 08:26:32 -0400 (0:00:00.509) 0:00:14.167 ******
ok: [controller-0] => {"ansible_facts": {"is_atomic": false}, "changed": false}
ok: [ceph-0] => {"ansible_facts": {"is_atomic": false}, "changed": false}
ok: [compute-0] => {"ansible_facts": {"is_atomic": false}, "changed": false}
META: ran handlers
META: ran handlers
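The "gather and delegate facts" task collects facts for every overcloud node from a single delegate, which is why all three items run as controller-0 delegated to each node's IP; a sketch of the usual pattern, assuming the stock site-docker.yml.sample wording (exact conditionals omitted):

    - name: gather and delegate facts
      setup:
      delegate_to: "{{ item }}"
      delegate_facts: true
      with_items: "{{ groups['all'] }}"
      run_once: true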
TASK [pull rhceph image] *******************************************************
task path: /usr/share/ceph-ansible/site-docker.yml.sample:67
Friday 21 September 2018 08:26:32 -0400 (0:00:00.164) 0:00:14.331 ******
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
META: ran handlers

PLAY [mons] ********************************************************************
META: ran handlers

TASK [set ceph monitor install 'In Progress'] **********************************
task path: /usr/share/ceph-ansible/site-docker.yml.sample:77
Friday 21 September 2018 08:26:32 -0400 (0:00:00.144) 0:00:14.476 ******
ok: [controller-0] => {"ansible_stats": {"aggregate": true, "data": {"installer_phase_ceph_mon": {"start": "20180921082632Z", "status": "In Progress"}}, "per_host": false}, "changed": false}
META: ran handlers
META: ran handlers

PLAY [mons] ********************************************************************
META: ran handlers

TASK [ceph-defaults : check for a mon container] *******************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:2
Friday 21 September 2018 08:26:32 -0400 (0:00:00.289) 0:00:14.765 ******
ok: [controller-0] => {"changed": false, "cmd": ["docker", "ps", "-q", "--filter=name=ceph-mon-controller-0"], "delta": "0:00:00.029722", "end": "2018-09-21 12:26:33.069536", "failed_when_result": false, "rc": 0, "start": "2018-09-21 12:26:33.039814", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
[the osd, mds and rgw container checks, check_running_containers.yml:11, :20 and :29, were skipped on controller-0: "Conditional result was False"]

TASK [ceph-defaults : check for a mgr container] *******************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:38
Friday 21 September 2018 08:26:33 -0400 (0:00:00.236) 0:00:15.577 ******
ok: [controller-0] => {"changed": false, "cmd": ["docker", "ps", "-q", "--filter=name=ceph-mgr-controller-0"], "delta": "0:00:00.023782", "end": "2018-09-21 12:26:33.687706", "failed_when_result": false, "rc": 0, "start": "2018-09-21 12:26:33.663924", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
[the rbd mirror and nfs container checks, check_running_containers.yml:47 and :56, were skipped on controller-0: "Conditional result was False"]

TASK [ceph-defaults : check for a ceph mon socket] ... TASK [ceph-defaults : remove ceph nfs ganesha socket if exists and not used by a process]
[check_socket_non_container.yml:2 through :194, Friday 21 September 2018 08:26:33 to 08:26:34 -0400: all twenty-one non-container socket tasks (check for / check if in-use / remove, for each of mon, osd, mds, rgw, mgr, rbd mirror and nfs ganesha) were skipped on controller-0 with "Conditional result was False"; this is a containerized deployment, so the non-container socket checks do not apply]

TASK [ceph-defaults : check if it is atomic host] ******************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:2
Friday 21 September 2018 08:26:34 -0400 (0:00:00.053) 0:00:17.001 ******
ok: [controller-0] => {"changed": false, "stat": {"exists": false}}

TASK [ceph-defaults : set_fact is_atomic] **************************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:7
Friday 21 September 2018 08:26:35 -0400 (0:00:00.239) 0:00:17.240 ******
ok: [controller-0] => {"ansible_facts": {"is_atomic": false}, "changed": false}

TASK [ceph-defaults : set_fact monitor_name ansible_hostname] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:11
Friday 21 September 2018 08:26:35 -0400 (0:00:00.081) 0:00:17.321 ******
ok: [controller-0] => {"ansible_facts": {"monitor_name": "controller-0"}, "changed": false}

TASK [ceph-defaults : set_fact monitor_name ansible_fqdn] **********************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:17
Friday 21 September 2018 08:26:35 -0400 (0:00:00.081) 0:00:17.403 ******
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact docker_exec_cmd] ********************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:23
Friday 21 September 2018 08:26:35 -0400 (0:00:00.072) 0:00:17.476 ******
ok: [controller-0 -> 192.168.24.18] => {"ansible_facts": {"docker_exec_cmd": "docker exec ceph-mon-controller-0"}, "changed": false}
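The container checks above are plain docker CLI probes whose result is recorded but never fails the play; a sketch reconstructed from the cmd and failed_when_result fields visible in the output (the register name is an assumption):

    - name: check for a mon container
      command: docker ps -q --filter=name=ceph-mon-{{ ansible_hostname }}
      register: ceph_mon_container_stat
      changed_when: false
      failed_when: false   # empty stdout simply means no such container yet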
TASK [ceph-defaults : is ceph running already?] ********************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:34
Friday 21 September 2018 08:26:35 -0400 (0:00:00.142) 0:00:17.619 ******
ok: [controller-0 -> 192.168.24.18] => {"changed": false, "cmd": ["timeout", "5", "docker", "exec", "ceph-mon-controller-0", "ceph", "--cluster", "ceph", "-s", "-f", "json"], "delta": "0:00:00.025305", "end": "2018-09-21 12:26:35.714561", "failed_when_result": false, "msg": "non-zero return code", "rc": 1, "start": "2018-09-21 12:26:35.689256", "stderr": "Error response from daemon: No such container: ceph-mon-controller-0", "stderr_lines": ["Error response from daemon: No such container: ceph-mon-controller-0"], "stdout": "", "stdout_lines": []}

TASK [ceph-defaults : check if /var/lib/mistral/overcloud/ceph-ansible/fetch_dir directory exists] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:47
Friday 21 September 2018 08:26:35 -0400 (0:00:00.261) 0:00:17.880 ******
ok: [controller-0 -> localhost] => {"changed": false, "stat": {"exists": false}}

TASK [ceph-defaults : set_fact ceph_current_status rc 1] ***********************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:57
Friday 21 September 2018 08:26:35 -0400 (0:00:00.207) 0:00:18.088 ******
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : create a local fetch directory if it does not exist] *****
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:64
Friday 21 September 2018 08:26:36 -0400 (0:00:00.058) 0:00:18.146 ******
ok: [controller-0 -> localhost] => {"changed": false, "gid": 42430, "group": "mistral", "mode": "0755", "owner": "mistral", "path": "/var/lib/mistral/overcloud/ceph-ansible/fetch_dir", "size": 6, "state": "directory", "uid": 42430}

TASK [ceph-defaults : set_fact ceph_current_status (convert to json)] **********
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:74
Friday 21 September 2018 08:26:36 -0400 (0:00:00.454) 0:00:18.600 ******
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact fsid from ceph_current_status] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:81
Friday 21 September 2018 08:26:36 -0400 (0:00:00.060) 0:00:18.661 ******
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact ceph_release ceph_stable_release] ***************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:88
Friday 21 September 2018 08:26:36 -0400 (0:00:00.051) 0:00:18.712 ******
ok: [controller-0] => {"ansible_facts": {"ceph_release": "dummy"}, "changed": false}

TASK [ceph-defaults : generate cluster fsid] ***********************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:92
Friday 21 September 2018 08:26:36 -0400 (0:00:00.083) 0:00:18.795 ******
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : reuse cluster fsid when cluster is already running] ******
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:103
Friday 21 September 2018 08:26:36 -0400 (0:00:00.049) 0:00:18.845 ******
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
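The "is ceph running already?" probe and the conditional set_fact that follows fit together as below; a sketch inferred from the cmd array and the task names (the from_json conversion is an assumption based on the "convert to json" task name):

    - name: is ceph running already?
      command: timeout 5 docker exec ceph-mon-{{ ansible_hostname }} ceph --cluster ceph -s -f json
      register: ceph_current_status
      changed_when: false
      failed_when: false   # rc 1 in this run: the mon container does not exist yet

    - name: set_fact ceph_current_status (convert to json)
      set_fact:
        ceph_current_status: "{{ ceph_current_status.stdout | from_json }}"
      when: ceph_current_status.rc == 0   # skipped in this run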
TASK [ceph-defaults : read cluster fsid if it already exists] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:112
Friday 21 September 2018 08:26:36 -0400 (0:00:00.052) 0:00:18.898 ******
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact fsid] *******************************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:124
Friday 21 September 2018 08:26:36 -0400 (0:00:00.056) 0:00:18.954 ******
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact mds_name ansible_hostname] **********************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:130
Friday 21 September 2018 08:26:36 -0400 (0:00:00.051) 0:00:19.005 ******
ok: [controller-0] => {"ansible_facts": {"mds_name": "controller-0"}, "changed": false}

TASK [ceph-defaults : set_fact mds_name ansible_fqdn] **************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:136
Friday 21 September 2018 08:26:36 -0400 (0:00:00.083) 0:00:19.089 ******
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact rbd_client_directory_owner ceph] ****************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:142
Friday 21 September 2018 08:26:37 -0400 (0:00:00.047) 0:00:19.137 ******
ok: [controller-0] => {"ansible_facts": {"rbd_client_directory_owner": "ceph"}, "changed": false}

TASK [ceph-defaults : set_fact rbd_client_directory_group rbd_client_directory_group] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:149
Friday 21 September 2018 08:26:37 -0400 (0:00:00.090) 0:00:19.227 ******
ok: [controller-0] => {"ansible_facts": {"rbd_client_directory_group": "ceph"}, "changed": false}

TASK [ceph-defaults : set_fact rbd_client_directory_mode 0770] *****************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:156
Friday 21 September 2018 08:26:37 -0400 (0:00:00.085) 0:00:19.313 ******
ok: [controller-0] => {"ansible_facts": {"rbd_client_directory_mode": "0770"}, "changed": false}

TASK [ceph-defaults : resolve device link(s)] **********************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:163
Friday 21 September 2018 08:26:37 -0400 (0:00:00.087) 0:00:19.401 ******
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact build devices from resolved symlinks] ***********
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:173
Friday 21 September 2018 08:26:37 -0400 (0:00:00.055) 0:00:19.457 ******
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact build final devices list] ***********************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:182
Friday 21 September 2018 08:26:37 -0400 (0:00:00.058) 0:00:19.515 ******
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

[the four conditional set_fact ceph_uid tasks for debian/red hat, container/non-container, facts.yml:190, :197, :204 and :211, were all skipped on controller-0: "Conditional result was False"]

TASK [ceph-defaults : set_fact ceph_uid for red hat] ***************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:218
Friday 21 September 2018 08:26:37 -0400 (0:00:00.053) 0:00:19.784 ******
ok: [controller-0] => {"ansible_facts": {"ceph_uid": 167}, "changed": false}

TASK [ceph-defaults : set_fact rgw_hostname - fqdn] ****************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:225
Friday 21 September 2018 08:26:37 -0400 (0:00:00.184) 0:00:19.969 ******
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact rgw_hostname - no fqdn] *************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:235
Friday 21 September 2018 08:26:37 -0400 (0:00:00.058) 0:00:20.020 ******
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact ceph_directories] *******************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:2
Friday 21 September 2018 08:26:37 -0400 (0:00:00.058) 0:00:20.079 ******
ok: [controller-0] => {"ansible_facts": {"ceph_directories": ["/etc/ceph", "/var/lib/ceph/", "/var/lib/ceph/mon", "/var/lib/ceph/osd", "/var/lib/ceph/mds", "/var/lib/ceph/tmp", "/var/lib/ceph/radosgw", "/var/lib/ceph/bootstrap-rgw", "/var/lib/ceph/bootstrap-mds", "/var/lib/ceph/bootstrap-osd", "/var/lib/ceph/bootstrap-rbd", "/var/run/ceph"]}, "changed": false}

TASK [ceph-defaults : create ceph initial directories] *************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:18
Friday 21 September 2018 08:26:38 -0400 (0:00:00.183) 0:00:20.262 ******
changed: [controller-0] => (item=/etc/ceph) => {"changed": true, "gid": 167, "group": "167", "mode": "0755", "owner": "167", "path": "/etc/ceph", "secontext": "unconfined_u:object_r:etc_t:s0", "size": 6, "state": "directory", "uid": 167}
[each of the remaining ceph_directories items was created the same way: changed, owned by uid/gid 167, mode 0755; the /var/lib/ceph paths get secontext var_lib_t, and /var/run/ceph gets var_run_t with size 40; the near-identical per-item result dicts are elided]
\"state\": \"directory\", \"uid\": 167}\n\nTASK [ceph-docker-common : fail if systemd is not present] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml:2\nFriday 21 September 2018 08:26:40 -0400 (0:00:02.176) 0:00:22.438 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : make sure monitor_interface, monitor_address or monitor_address_block is defined] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:2\nFriday 21 September 2018 08:26:40 -0400 (0:00:00.051) 0:00:22.490 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : make sure radosgw_interface, radosgw_address or radosgw_address_block is defined] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:11\nFriday 21 September 2018 08:26:40 -0400 (0:00:00.059) 0:00:22.550 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : warning deprecation for fqdn configuration] *********\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:20\nFriday 21 September 2018 08:26:40 -0400 (0:00:00.049) 0:00:22.600 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : remove ceph udev rules] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml:2\nFriday 21 September 2018 08:26:40 -0400 (0:00:00.142) 0:00:22.743 ****** \nok: [controller-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) => {\"changed\": false, \"item\": \"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"path\": \"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"state\": \"absent\"}\nok: [controller-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) => {\"changed\": false, \"item\": \"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"path\": \"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"state\": \"absent\"}\n\nTASK [ceph-docker-common : set_fact monitor_name ansible_hostname] *************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:14\nFriday 21 September 2018 08:26:41 -0400 (0:00:00.417) 0:00:23.161 ****** \nok: [controller-0] => {\"ansible_facts\": {\"monitor_name\": \"controller-0\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact monitor_name ansible_fqdn] *****************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:20\nFriday 21 September 2018 08:26:41 -0400 (0:00:00.091) 0:00:23.252 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : get docker version] *********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:26\nFriday 21 September 2018 08:26:41 -0400 (0:00:00.050) 0:00:23.303 ****** \nok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"--version\"], \"delta\": \"0:00:00.021767\", \"end\": \"2018-09-21 12:26:41.399107\", \"rc\": 0, \"start\": \"2018-09-21 12:26:41.377340\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Docker version 1.13.1, build 6e3bb8e/1.13.1\", \"stdout_lines\": [\"Docker version 1.13.1, build 
6e3bb8e/1.13.1\"]}\n\nTASK [ceph-docker-common : set_fact ceph_docker_version ceph_docker_version.stdout.split] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:32\nFriday 21 September 2018 08:26:41 -0400 (0:00:00.269) 0:00:23.572 ****** \nok: [controller-0] => {\"ansible_facts\": {\"ceph_docker_version\": \"1.13.1,\"}, \"changed\": false}\n\nTASK [ceph-docker-common : check if a cluster is already running] **************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:42\nFriday 21 September 2018 08:26:41 -0400 (0:00:00.102) 0:00:23.675 ****** \nok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mon-controller-0\"], \"delta\": \"0:00:00.024426\", \"end\": \"2018-09-21 12:26:41.776775\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-09-21 12:26:41.752349\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}\n\nTASK [ceph-docker-common : set_fact ceph_config_keys] **************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:2\nFriday 21 September 2018 08:26:41 -0400 (0:00:00.263) 0:00:23.939 ****** \nok: [controller-0] => {\"ansible_facts\": {\"ceph_config_keys\": [\"/etc/ceph/ceph.client.admin.keyring\", \"/etc/ceph/monmap-ceph\", \"/etc/ceph/ceph.mon.keyring\", \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\"]}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact tmp_ceph_mgr_keys add mgr keys to config and keys paths] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:13\nFriday 21 September 2018 08:26:41 -0400 (0:00:00.102) 0:00:24.041 ****** \nok: [controller-0] => (item=controller-0) => {\"ansible_facts\": {\"tmp_ceph_mgr_keys\": \"/etc/ceph/ceph.mgr.controller-0.keyring\"}, \"changed\": false, \"item\": \"controller-0\"}\n\nTASK [ceph-docker-common : set_fact ceph_mgr_keys convert mgr keys to an array] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:20\nFriday 21 September 2018 08:26:42 -0400 (0:00:00.153) 0:00:24.195 ****** \nok: [controller-0] => {\"ansible_facts\": {\"ceph_mgr_keys\": [\"/etc/ceph/ceph.mgr.controller-0.keyring\"]}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact ceph_config_keys merge mgr keys to config and keys paths] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:25\nFriday 21 September 2018 08:26:42 -0400 (0:00:00.102) 0:00:24.297 ****** \nok: [controller-0] => {\"ansible_facts\": {\"ceph_config_keys\": [\"/etc/ceph/ceph.client.admin.keyring\", \"/etc/ceph/monmap-ceph\", \"/etc/ceph/ceph.mon.keyring\", \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"/etc/ceph/ceph.mgr.controller-0.keyring\"]}, \"changed\": false}\n\nTASK [ceph-docker-common : stat for ceph config and keys] **********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:30\nFriday 21 September 2018 08:26:42 -0400 (0:00:00.106) 0:00:24.404 ****** \nok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.client.admin.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"stat\": 
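Note the trailing comma in the ceph_docker_version fact ("1.13.1,"): splitting "Docker version 1.13.1, build 6e3bb8e/1.13.1" on spaces keeps the comma attached to the third field. A sketch of the two tasks, inferred from the task names and captured output (the register name is illustrative):

    - name: get docker version
      command: docker --version
      register: docker_version_out
      changed_when: false

    - name: set_fact ceph_docker_version ceph_docker_version.stdout.split
      set_fact:
        # index 2 of the split is "1.13.1," -- comma included
        ceph_docker_version: "{{ docker_version_out.stdout.split(' ')[2] }}"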
{\"exists\": false}}\nok: [controller-0 -> localhost] => (item=/etc/ceph/monmap-ceph) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/monmap-ceph\", \"stat\": {\"exists\": false}}\nok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.mon.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"exists\": false}}\nok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"exists\": false}}\nok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"exists\": false}}\nok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"exists\": false}}\nok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"exists\": false}}\nok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.mgr.controller-0.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"stat\": {\"exists\": false}}\n\nTASK [ceph-docker-common : fail if we find existing cluster files] *************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml:5\nFriday 21 September 2018 08:26:43 -0400 (0:00:01.351) 0:00:25.755 ****** \nskipping: [controller-0] => (item=[u'/etc/ceph/ceph.client.admin.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/ceph.client.admin.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//etc/ceph/ceph.client.admin.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.client.admin.keyring'}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.client.admin.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/ceph.client.admin.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//etc/ceph/ceph.client.admin.keyring\"}}, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/etc/ceph/monmap-ceph', {'_ansible_parsed': 
True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/monmap-ceph', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//etc/ceph/monmap-ceph', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/monmap-ceph'}]) => {\"changed\": false, \"item\": [\"/etc/ceph/monmap-ceph\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/monmap-ceph\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//etc/ceph/monmap-ceph\"}}, \"item\": \"/etc/ceph/monmap-ceph\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/etc/ceph/ceph.mon.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/ceph.mon.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//etc/ceph/ceph.mon.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.mon.keyring'}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.mon.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/ceph.mon.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//etc/ceph/ceph.mon.keyring\"}}, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-osd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', u'invocation': {u'module_args': 
{u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-osd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-osd/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-osd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-osd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-rgw/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-rgw/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-mds/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-mds/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': 
u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-mds/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-mds/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-mds/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-mds/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-rbd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-rbd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/etc/ceph/ceph.mgr.controller-0.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/ceph.mgr.controller-0.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': 
u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//etc/ceph/ceph.mgr.controller-0.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.mgr.controller-0.keyring'}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.mgr.controller-0.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//etc/ceph/ceph.mgr.controller-0.keyring\"}}, \"item\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : check ntp installation on atomic] *******************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml:2\nFriday 21 September 2018 08:26:43 -0400 (0:00:00.322) 0:00:26.077 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : start the ntp service] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml:6\nFriday 21 September 2018 08:26:44 -0400 (0:00:00.046) 0:00:26.124 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : check ntp installation on redhat or suse] ***********\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:2\nFriday 21 September 2018 08:26:44 -0400 (0:00:00.044) 0:00:26.169 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : install ntp on redhat or suse] **********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:13\nFriday 21 September 2018 08:26:44 -0400 (0:00:00.053) 0:00:26.222 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : start the ntp service] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml:7\nFriday 21 September 2018 08:26:44 -0400 (0:00:00.056) 0:00:26.279 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : check ntp installation on debian] *******************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:2\nFriday 21 September 2018 08:26:44 -0400 (0:00:00.053) 0:00:26.332 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : install ntp on debian] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:11\nFriday 21 September 
2018 08:26:44 -0400 (0:00:00.047) 0:00:26.379 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : start the ntp service] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml:7\nFriday 21 September 2018 08:26:44 -0400 (0:00:00.049) 0:00:26.429 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph mon container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:3\nFriday 21 September 2018 08:26:44 -0400 (0:00:00.047) 0:00:26.477 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph osd container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:12\nFriday 21 September 2018 08:26:44 -0400 (0:00:00.054) 0:00:26.532 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph mds container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:21\nFriday 21 September 2018 08:26:44 -0400 (0:00:00.056) 0:00:26.588 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph rgw container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:30\nFriday 21 September 2018 08:26:44 -0400 (0:00:00.050) 0:00:26.638 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph mgr container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:39\nFriday 21 September 2018 08:26:44 -0400 (0:00:00.051) 0:00:26.690 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph rbd mirror container] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:48\nFriday 21 September 2018 08:26:44 -0400 (0:00:00.060) 0:00:26.750 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph nfs container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:57\nFriday 21 September 2018 08:26:44 -0400 (0:00:00.052) 0:00:26.803 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph mon container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:67\nFriday 21 September 2018 08:26:44 -0400 (0:00:00.059) 0:00:26.863 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph osd container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:76\nFriday 21 September 2018 08:26:44 -0400 (0:00:00.059) 0:00:26.922 ****** \nskipping: [controller-0] => 
{\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph rgw container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:85\nFriday 21 September 2018 08:26:44 -0400 (0:00:00.055) 0:00:26.977 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph mds container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:94\nFriday 21 September 2018 08:26:44 -0400 (0:00:00.055) 0:00:27.033 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph mgr container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:103\nFriday 21 September 2018 08:26:44 -0400 (0:00:00.058) 0:00:27.091 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph rbd mirror container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:112\nFriday 21 September 2018 08:26:45 -0400 (0:00:00.056) 0:00:27.148 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph nfs container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:121\nFriday 21 September 2018 08:26:45 -0400 (0:00:00.059) 0:00:27.207 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mon_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:130\nFriday 21 September 2018 08:26:45 -0400 (0:00:00.051) 0:00:27.259 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_osd_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:137\nFriday 21 September 2018 08:26:45 -0400 (0:00:00.059) 0:00:27.318 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mds_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:144\nFriday 21 September 2018 08:26:45 -0400 (0:00:00.054) 0:00:27.373 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rgw_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:151\nFriday 21 September 2018 08:26:45 -0400 (0:00:00.052) 0:00:27.425 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mgr_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:158\nFriday 21 September 2018 08:26:45 -0400 (0:00:00.056) 0:00:27.482 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was 
False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:165\nFriday 21 September 2018 08:26:45 -0400 (0:00:00.056) 0:00:27.538 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_nfs_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:172\nFriday 21 September 2018 08:26:45 -0400 (0:00:00.050) 0:00:27.589 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : pulling 192.168.24.1:8787/rhceph:3-12 image] ********\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:179\nFriday 21 September 2018 08:26:45 -0400 (0:00:00.053) 0:00:27.643 ****** \nok: [controller-0] => {\"attempts\": 1, \"changed\": false, \"cmd\": [\"timeout\", \"300s\", \"docker\", \"pull\", \"192.168.24.1:8787/rhceph:3-12\"], \"delta\": \"0:00:13.974754\", \"end\": \"2018-09-21 12:26:59.708263\", \"rc\": 0, \"start\": \"2018-09-21 12:26:45.733509\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Trying to pull repository 192.168.24.1:8787/rhceph ... \\n3-12: Pulling from 192.168.24.1:8787/rhceph\\n428a9ca37f0e: Pulling fs layer\\n8115a58d83bd: Pulling fs layer\\n5e409f26eefe: Pulling fs layer\\n8115a58d83bd: Verifying Checksum\\n8115a58d83bd: Download complete\\n428a9ca37f0e: Verifying Checksum\\n428a9ca37f0e: Download complete\\n5e409f26eefe: Verifying Checksum\\n5e409f26eefe: Download complete\\n428a9ca37f0e: Pull complete\\n8115a58d83bd: Pull complete\\n5e409f26eefe: Pull complete\\nDigest: sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\\nStatus: Downloaded newer image for 192.168.24.1:8787/rhceph:3-12\", \"stdout_lines\": [\"Trying to pull repository 192.168.24.1:8787/rhceph ... 
\", \"3-12: Pulling from 192.168.24.1:8787/rhceph\", \"428a9ca37f0e: Pulling fs layer\", \"8115a58d83bd: Pulling fs layer\", \"5e409f26eefe: Pulling fs layer\", \"8115a58d83bd: Verifying Checksum\", \"8115a58d83bd: Download complete\", \"428a9ca37f0e: Verifying Checksum\", \"428a9ca37f0e: Download complete\", \"5e409f26eefe: Verifying Checksum\", \"5e409f26eefe: Download complete\", \"428a9ca37f0e: Pull complete\", \"8115a58d83bd: Pull complete\", \"5e409f26eefe: Pull complete\", \"Digest: sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\", \"Status: Downloaded newer image for 192.168.24.1:8787/rhceph:3-12\"]}\n\nTASK [ceph-docker-common : inspecting 192.168.24.1:8787/rhceph:3-12 image after pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:189\nFriday 21 September 2018 08:26:59 -0400 (0:00:14.231) 0:00:41.875 ****** \nchanged: [controller-0] => {\"changed\": true, \"cmd\": [\"docker\", \"inspect\", \"192.168.24.1:8787/rhceph:3-12\"], \"delta\": \"0:00:00.027714\", \"end\": \"2018-09-21 12:27:00.071933\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-09-21 12:27:00.044219\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"[\\n {\\n \\\"Id\\\": \\\"sha256:fa3b551f095247f90f7ace123cc14519b26abd4c042eb0270ab2452d9636a41f\\\",\\n \\\"RepoTags\\\": [\\n \\\"192.168.24.1:8787/rhceph:3-12\\\"\\n ],\\n \\\"RepoDigests\\\": [\\n \\\"192.168.24.1:8787/rhceph@sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\\\"\\n ],\\n \\\"Parent\\\": \\\"\\\",\\n \\\"Comment\\\": \\\"\\\",\\n \\\"Created\\\": \\\"2018-08-06T22:30:33.81313Z\\\",\\n \\\"Container\\\": \\\"\\\",\\n \\\"ContainerConfig\\\": {\\n \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": [\\n \\\"/bin/sh\\\",\\n \\\"-c\\\",\\n \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z5-b1e8f.repo'\\\"\\n ],\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"sha256:379a20daa6033d04119c4ca45fffe3e50f0cfd517d8712a222b53bea11ee4493\\\",\\n \\\"Volumes\\\": null,\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"stable-3.0\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\\n \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n 
\\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.expose-services\\\": \\\"\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"12\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\\n \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. Use it as a base to build your own images.\\\",\\n \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"DockerVersion\\\": \\\"1.12.6\\\",\\n \\\"Author\\\": \\\"\\\",\\n \\\"Config\\\": {\\n \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": null,\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"9e41cab2948f6e02bef2d4df2d2d21f082f2e9f5b5aadcb04d70201596959834\\\",\\n \\\"Volumes\\\": null,\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"stable-3.0\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\\n \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e 
MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.expose-services\\\": \\\"\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"12\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\\n \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. Use it as a base to build your own images.\\\",\\n \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"Architecture\\\": \\\"amd64\\\",\\n \\\"Os\\\": \\\"linux\\\",\\n \\\"Size\\\": 592066185,\\n \\\"VirtualSize\\\": 592066185,\\n \\\"GraphDriver\\\": {\\n \\\"Name\\\": \\\"overlay2\\\",\\n \\\"Data\\\": {\\n \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/72c93a89fa782b05dc2585f4d22d029cb845c66e38924914071ba886d94bed8c/diff:/var/lib/docker/overlay2/1c6053643a9c6bc0506bbea8ee537d1f921ebdc802eafc5cf82c4566e0c5bbd4/diff\\\",\\n \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/647ac35ac16d9dfe175f07dd44786615796abec3ec2955371cd57b2bc31e071d/merged\\\",\\n \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/647ac35ac16d9dfe175f07dd44786615796abec3ec2955371cd57b2bc31e071d/diff\\\",\\n \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/647ac35ac16d9dfe175f07dd44786615796abec3ec2955371cd57b2bc31e071d/work\\\"\\n }\\n },\\n \\\"RootFS\\\": {\\n \\\"Type\\\": \\\"layers\\\",\\n \\\"Layers\\\": [\\n \\\"sha256:db195156f4cd9e83cf2a76f1319d5f839cf2552ea1d23c0317931786b1f594cf\\\",\\n \\\"sha256:6e8ca199394f13d2b08b652f8281d3c2f8ad22333737e8ad2ff554f881bcd8a1\\\",\\n \\\"sha256:984d7131485eaebe7b45bb3052fae34a956316f94faf95681b8480a904179cfa\\\"\\n ]\\n }\\n }\\n]\", \"stdout_lines\": [\"[\", \" {\", \" \\\"Id\\\": \\\"sha256:fa3b551f095247f90f7ace123cc14519b26abd4c042eb0270ab2452d9636a41f\\\",\", \" \\\"RepoTags\\\": [\", \" \\\"192.168.24.1:8787/rhceph:3-12\\\"\", \" ],\", \" \\\"RepoDigests\\\": [\", \" \\\"192.168.24.1:8787/rhceph@sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\\\"\", \" ],\", \" \\\"Parent\\\": \\\"\\\",\", \" \\\"Comment\\\": \\\"\\\",\", \" \\\"Created\\\": \\\"2018-08-06T22:30:33.81313Z\\\",\", \" \\\"Container\\\": \\\"\\\",\", \" \\\"ContainerConfig\\\": {\", \" \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" 
\\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": [\", \" \\\"/bin/sh\\\",\", \" \\\"-c\\\",\", \" \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z5-b1e8f.repo'\\\"\", \" ],\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"sha256:379a20daa6033d04119c4ca45fffe3e50f0cfd517d8712a222b53bea11ee4493\\\",\", \" \\\"Volumes\\\": null,\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"stable-3.0\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\", \" \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.expose-services\\\": \\\"\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"12\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\", \" \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\", \" \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"DockerVersion\\\": \\\"1.12.6\\\",\", \" \\\"Author\\\": \\\"\\\",\", \" \\\"Config\\\": {\", \" \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": null,\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"9e41cab2948f6e02bef2d4df2d2d21f082f2e9f5b5aadcb04d70201596959834\\\",\", \" \\\"Volumes\\\": null,\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"stable-3.0\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\", \" \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.expose-services\\\": \\\"\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"12\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\", \" \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\", \" \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"Architecture\\\": \\\"amd64\\\",\", \" \\\"Os\\\": \\\"linux\\\",\", \" \\\"Size\\\": 592066185,\", \" \\\"VirtualSize\\\": 592066185,\", \" \\\"GraphDriver\\\": {\", \" \\\"Name\\\": \\\"overlay2\\\",\", \" \\\"Data\\\": {\", \" \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/72c93a89fa782b05dc2585f4d22d029cb845c66e38924914071ba886d94bed8c/diff:/var/lib/docker/overlay2/1c6053643a9c6bc0506bbea8ee537d1f921ebdc802eafc5cf82c4566e0c5bbd4/diff\\\",\", \" \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/647ac35ac16d9dfe175f07dd44786615796abec3ec2955371cd57b2bc31e071d/merged\\\",\", \" \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/647ac35ac16d9dfe175f07dd44786615796abec3ec2955371cd57b2bc31e071d/diff\\\",\", \" \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/647ac35ac16d9dfe175f07dd44786615796abec3ec2955371cd57b2bc31e071d/work\\\"\", \" }\", \" },\", \" \\\"RootFS\\\": {\", \" \\\"Type\\\": \\\"layers\\\",\", \" \\\"Layers\\\": [\", \" \\\"sha256:db195156f4cd9e83cf2a76f1319d5f839cf2552ea1d23c0317931786b1f594cf\\\",\", \" \\\"sha256:6e8ca199394f13d2b08b652f8281d3c2f8ad22333737e8ad2ff554f881bcd8a1\\\",\", \" \\\"sha256:984d7131485eaebe7b45bb3052fae34a956316f94faf95681b8480a904179cfa\\\"\", \" ]\", \" }\", \" }\", \"]\"]}\n\nTASK [ceph-docker-common : set_fact image_repodigest_after_pulling] ************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:194\nFriday 21 September 2018 08:27:00 -0400 (0:00:00.367) 0:00:42.242 ****** \nok: [controller-0] => {\"ansible_facts\": {\"image_repodigest_after_pulling\": \"sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact ceph_mon_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:200\nFriday 21 September 2018 08:27:00 -0400 (0:00:00.075) 0:00:42.318 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_osd_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:211\nFriday 21 September 2018 08:27:00 -0400 (0:00:00.049) 0:00:42.367 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mds_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:222\nFriday 21 September 2018 08:27:00 -0400 (0:00:00.045) 0:00:42.412 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rgw_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:233\nFriday 21 September 2018 08:27:00 -0400 (0:00:00.043) 0:00:42.456 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mgr_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:244\nFriday 21 September 2018 08:27:00 -0400 (0:00:00.045) 0:00:42.501 ****** 
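The image_repodigest_after_pulling fact above is the digest portion of RepoDigests[0] from the docker inspect output. A sketch of the extraction, assuming the inspect result was registered as docker_inspect; comparing it with a digest recorded before pulling is what the ceph_*_image_updated facts (all skipped here) would do:

- name: set_fact image_repodigest_after_pulling
  set_fact:
    # "192.168.24.1:8787/rhceph@sha256:a26f..." -> "sha256:a26f..."
    image_repodigest_after_pulling: "{{ (docker_inspect.stdout | from_json)[0].RepoDigests[0].split('@')[1] }}"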
\nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_updated] *************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:255\nFriday 21 September 2018 08:27:00 -0400 (0:00:00.052) 0:00:42.554 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_nfs_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:266\nFriday 21 September 2018 08:27:00 -0400 (0:00:00.045) 0:00:42.599 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : export local ceph dev image] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:277\nFriday 21 September 2018 08:27:00 -0400 (0:00:00.124) 0:00:42.724 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : copy ceph dev image file] ***************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:285\nFriday 21 September 2018 08:27:00 -0400 (0:00:00.046) 0:00:42.770 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : load ceph dev image] ********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:292\nFriday 21 September 2018 08:27:00 -0400 (0:00:00.049) 0:00:42.819 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : remove tmp ceph dev image file] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:297\nFriday 21 September 2018 08:27:00 -0400 (0:00:00.048) 0:00:42.867 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : get ceph version] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:84\nFriday 21 September 2018 08:27:00 -0400 (0:00:00.047) 0:00:42.915 ****** \nok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"run\", \"--rm\", \"--entrypoint\", \"/usr/bin/ceph\", \"192.168.24.1:8787/rhceph:3-12\", \"--version\"], \"delta\": \"0:00:00.458745\", \"end\": \"2018-09-21 12:27:01.448737\", \"rc\": 0, \"start\": \"2018-09-21 12:27:00.989992\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"ceph version 12.2.4-42.el7cp (f73642baacccbf2a3c254d1fb5f0317b933b28cf) luminous (stable)\", \"stdout_lines\": [\"ceph version 12.2.4-42.el7cp (f73642baacccbf2a3c254d1fb5f0317b933b28cf) luminous (stable)\"]}\n\nTASK [ceph-docker-common : set_fact ceph_version ceph_version.stdout.split] ****\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:90\nFriday 21 September 2018 08:27:01 -0400 (0:00:00.699) 0:00:43.614 ****** \nok: [controller-0] => {\"ansible_facts\": {\"ceph_version\": \"12.2.4-42.el7cp\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact ceph_release jewel] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:2\nFriday 21 September 2018 08:27:01 -0400 (0:00:00.077) 
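The Ceph release is probed by executing the ceph binary inside the freshly pulled image and then splitting the banner, just as with the docker version earlier. The command below is verbatim from the task output; the split index is inferred from the resulting fact:

- name: get ceph version
  command: docker run --rm --entrypoint /usr/bin/ceph 192.168.24.1:8787/rhceph:3-12 --version
  register: ceph_version
  changed_when: false

- name: set_fact ceph_version ceph_version.stdout.split
  set_fact:
    # "ceph version 12.2.4-42.el7cp (f736...) luminous (stable)".split(' ')[2]
    ceph_version: "{{ ceph_version.stdout.split(' ')[2] }}"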
0:00:43.692 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_release kraken] ***********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:8\nFriday 21 September 2018 08:27:01 -0400 (0:00:00.049) 0:00:43.741 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_release luminous] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:14\nFriday 21 September 2018 08:27:01 -0400 (0:00:00.049) 0:00:43.790 ****** \nok: [controller-0] => {\"ansible_facts\": {\"ceph_release\": \"luminous\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact ceph_release mimic] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:20\nFriday 21 September 2018 08:27:01 -0400 (0:00:00.082) 0:00:43.873 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_release nautilus] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:26\nFriday 21 September 2018 08:27:01 -0400 (0:00:00.058) 0:00:43.931 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : create bootstrap directories] ***********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml:2\nFriday 21 September 2018 08:27:01 -0400 (0:00:00.050) 0:00:43.981 ****** \nchanged: [controller-0] => (item=/etc/ceph) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}\nchanged: [controller-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}\nchanged: [controller-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}\nchanged: [controller-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}\nchanged: [controller-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}\n\nTASK [ceph-config : create ceph conf directory] 
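release.yml then maps the numeric version onto a release name: 12.2.4-42.el7cp selects luminous, while the jewel, kraken, mimic, and nautilus branches are skipped. A sketch of one branch; the exact version test ceph-ansible uses is an assumption. Downstream templates and conditionals branch on this ceph_release fact.

- name: set_fact ceph_release luminous
  set_fact:
    ceph_release: luminous
  when: ceph_version.split('.')[0] is version('12', '==')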
********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:4\nFriday 21 September 2018 08:27:02 -0400 (0:00:00.875) 0:00:44.857 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : generate ceph configuration file: ceph.conf] ***************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:12\nFriday 21 September 2018 08:27:02 -0400 (0:00:00.052) 0:00:44.909 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : create a local fetch directory if it does not exist] *******\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:38\nFriday 21 September 2018 08:27:02 -0400 (0:00:00.053) 0:00:44.963 ****** \nok: [controller-0 -> localhost] => {\"changed\": false, \"gid\": 42430, \"group\": \"mistral\", \"mode\": \"0755\", \"owner\": \"mistral\", \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir\", \"size\": 6, \"state\": \"directory\", \"uid\": 42430}\n\nTASK [ceph-config : generate cluster uuid] *************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:54\nFriday 21 September 2018 08:27:03 -0400 (0:00:00.198) 0:00:45.161 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : read cluster uuid if it already exists] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:64\nFriday 21 September 2018 08:27:03 -0400 (0:00:00.055) 0:00:45.216 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : ensure /etc/ceph exists] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:76\nFriday 21 September 2018 08:27:03 -0400 (0:00:00.047) 0:00:45.264 ****** \nchanged: [controller-0] => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\n\nTASK [ceph-config : generate ceph.conf configuration file] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:84\nFriday 21 September 2018 08:27:03 -0400 (0:00:00.246) 0:00:45.510 ****** \nNOTIFIED HANDLER ceph-defaults : set _mon_handler_called before restart for controller-0\nNOTIFIED HANDLER ceph-defaults : copy mon restart script for controller-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mon daemon(s) - non container for controller-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mon daemon(s) - container for controller-0\nNOTIFIED HANDLER ceph-defaults : set _mon_handler_called after restart for controller-0\nNOTIFIED HANDLER ceph-defaults : set _osd_handler_called before restart for controller-0\nNOTIFIED HANDLER ceph-defaults : copy osd restart script for controller-0\nNOTIFIED HANDLER ceph-defaults : restart ceph osds daemon(s) - non container for controller-0\nNOTIFIED HANDLER ceph-defaults : restart ceph osds daemon(s) - container for controller-0\nNOTIFIED HANDLER ceph-defaults : set _osd_handler_called after restart for controller-0\nNOTIFIED HANDLER ceph-defaults : set _mds_handler_called before restart for controller-0\nNOTIFIED HANDLER ceph-defaults : copy mds restart script for controller-0\nNOTIFIED HANDLER 
ceph-defaults : restart ceph mds daemon(s) - non container for controller-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mds daemon(s) - container for controller-0\nNOTIFIED HANDLER ceph-defaults : set _mds_handler_called after restart for controller-0\nNOTIFIED HANDLER ceph-defaults : set _rgw_handler_called before restart for controller-0\nNOTIFIED HANDLER ceph-defaults : copy rgw restart script for controller-0\nNOTIFIED HANDLER ceph-defaults : restart ceph rgw daemon(s) - non container for controller-0\nNOTIFIED HANDLER ceph-defaults : restart ceph rgw daemon(s) - container for controller-0\nNOTIFIED HANDLER ceph-defaults : set _rgw_handler_called after restart for controller-0\nNOTIFIED HANDLER ceph-defaults : set _mgr_handler_called before restart for controller-0\nNOTIFIED HANDLER ceph-defaults : copy mgr restart script for controller-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - non container for controller-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - container for controller-0\nNOTIFIED HANDLER ceph-defaults : set _mgr_handler_called after restart for controller-0\nNOTIFIED HANDLER ceph-defaults : set _rbdmirror_handler_called before restart for controller-0\nNOTIFIED HANDLER ceph-defaults : copy rbd mirror restart script for controller-0\nNOTIFIED HANDLER ceph-defaults : restart ceph rbd mirror daemon(s) - non container for controller-0\nNOTIFIED HANDLER ceph-defaults : restart ceph rbd mirror daemon(s) - container for controller-0\nNOTIFIED HANDLER ceph-defaults : set _rbdmirror_handler_called after restart for controller-0\nchanged: [controller-0] => {\"changed\": true, \"checksum\": \"57e5c5d755a630f2e4e9c6766a186478cc210a6a\", \"dest\": \"/etc/ceph/ceph.conf\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"3d1c4a58fc488cca7c5fd19c6454272e\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 1103, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1537532823.45-141081049048416/source\", \"state\": \"file\", \"uid\": 0}\n\nTASK [ceph-config : set fsid fact when generate_fsid = true] *******************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:102\nFriday 21 September 2018 08:27:05 -0400 (0:00:02.432) 0:00:47.942 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : set_fact docker_exec_cmd] *************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/main.yml:2\nFriday 21 September 2018 08:27:05 -0400 (0:00:00.052) 0:00:47.994 ****** \nok: [controller-0] => {\"ansible_facts\": {\"docker_exec_cmd\": \"docker exec ceph-mon-controller-0\"}, \"changed\": false}\n\nTASK [ceph-mon : make sure monitor_interface or monitor_address or monitor_address_block is configured] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/check_mandatory_vars.yml:2\nFriday 21 September 2018 08:27:06 -0400 (0:00:00.196) 0:00:48.190 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : generate monitor initial keyring] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:2\nFriday 21 September 2018 08:27:06 -0400 (0:00:00.063) 0:00:48.253 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : read monitor initial keyring if it already exists] ************\ntask path: 
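The "generate ceph.conf configuration file" task above is the step that actually writes /etc/ceph/ceph.conf on the node (root:root, mode 0644, 1103 bytes here) and notifies the long list of ceph-defaults restart handlers. The real role renders this through a custom override-merging config_template action rather than the plain template module, so treat the sketch below as an approximation of the shape only; the handler names are taken from the NOTIFIED HANDLER lines above:

- name: generate ceph.conf configuration file
  template:
    src: ceph.conf.j2
    dest: /etc/ceph/ceph.conf
    owner: root
    group: root
    mode: "0644"
  notify:
    - restart ceph mon daemon(s) - container
    - restart ceph osds daemon(s) - container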

TASK [ceph-config : set fsid fact when generate_fsid = true] *******************
task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:102
Friday 21 September 2018 08:27:05 -0400 (0:00:02.432) 0:00:47.942 ****** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : set_fact docker_exec_cmd] *************************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/main.yml:2
Friday 21 September 2018 08:27:05 -0400 (0:00:00.052) 0:00:47.994 ****** 
ok: [controller-0] => {"ansible_facts": {"docker_exec_cmd": "docker exec ceph-mon-controller-0"}, "changed": false}

TASK [ceph-mon : make sure monitor_interface or monitor_address or monitor_address_block is configured] ***
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/check_mandatory_vars.yml:2
Friday 21 September 2018 08:27:06 -0400 (0:00:00.196) 0:00:48.190 ****** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : generate monitor initial keyring] *****************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:2
Friday 21 September 2018 08:27:06 -0400 (0:00:00.063) 0:00:48.253 ****** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : read monitor initial keyring if it already exists] ************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:11
Friday 21 September 2018 08:27:06 -0400 (0:00:00.055) 0:00:48.308 ****** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : create monitor initial keyring] *******************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:22
Friday 21 September 2018 08:27:06 -0400 (0:00:00.051) 0:00:48.360 ****** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : set initial monitor key permissions] **************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:34
Friday 21 September 2018 08:27:06 -0400 (0:00:00.049) 0:00:48.410 ****** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : create (and fix ownership of) monitor directory] **************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:42
Friday 21 September 2018 08:27:06 -0400 (0:00:00.049) 0:00:48.459 ****** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : set_fact client_admin_ceph_authtool_cap >= ceph_release_num.luminous] ***
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:51
Friday 21 September 2018 08:27:06 -0400 (0:00:00.046) 0:00:48.505 ****** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : set_fact client_admin_ceph_authtool_cap < ceph_release_num.luminous] ***
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:63
Friday 21 September 2018 08:27:06 -0400 (0:00:00.131) 0:00:48.637 ****** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : create custom admin keyring] **********************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:74
Friday 21 September 2018 08:27:06 -0400 (0:00:00.052) 0:00:48.689 ****** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : set ownership of admin keyring] *******************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:88
Friday 21 September 2018 08:27:06 -0400 (0:00:00.049) 0:00:48.738 ****** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : import admin keyring into mon keyring] ************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:99
Friday 21 September 2018 08:27:06 -0400 (0:00:00.051) 0:00:48.790 ****** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : ceph monitor mkfs with keyring] *******************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:106
Friday 21 September 2018 08:27:06 -0400 (0:00:00.051) 0:00:48.841 ****** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : ceph monitor mkfs without keyring] ****************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:113
Friday 21 September 2018 08:27:06 -0400 (0:00:00.050) 0:00:48.891 ****** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : ensure systemd service override directory exists] *************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml:2
Friday 21 September 2018 08:27:06 -0400 (0:00:00.061) 0:00:48.953 ****** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : add ceph-mon systemd service overrides] ***********************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml:10
Friday 21 September 2018 08:27:06 -0400 (0:00:00.051) 0:00:49.005 ****** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : start the monitor service] ************************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml:20
Friday 21 September 2018 08:27:06 -0400 (0:00:00.052) 0:00:49.057 ****** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : enable the ceph-mon.target service] ***************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml:29
Friday 21 September 2018 08:27:06 -0400 (0:00:00.050) 0:00:49.108 ****** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : include ceph_keys.yml] ****************************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/main.yml:19
Friday 21 September 2018 08:27:07 -0400 (0:00:00.051) 0:00:49.159 ****** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : collect all the pools] ****************************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/secure_cluster.yml:2
Friday 21 September 2018 08:27:07 -0400 (0:00:00.053) 0:00:49.213 ****** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : secure the cluster] *******************************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/secure_cluster.yml:7
Friday 21 September 2018 08:27:07 -0400 (0:00:00.058) 0:00:49.268 ****** 

TASK [ceph-mon : set_fact ceph_config_keys] ************************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:2
Friday 21 September 2018 08:27:07 -0400 (0:00:00.058) 0:00:49.326 ****** 
ok: [controller-0] => {"ansible_facts": {"ceph_config_keys": ["/etc/ceph/ceph.client.admin.keyring", "/etc/ceph/ceph.mon.keyring", "/var/lib/ceph/bootstrap-osd/ceph.keyring", "/var/lib/ceph/bootstrap-rgw/ceph.keyring", "/var/lib/ceph/bootstrap-mds/ceph.keyring"]}, "changed": false}

TASK [ceph-mon : register rbd bootstrap key] ***********************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:11
Friday 21 September 2018 08:27:07 -0400 (0:00:00.091) 0:00:49.418 ****** 
ok: [controller-0] => {"ansible_facts": {"bootstrap_rbd_keyring": ["/var/lib/ceph/bootstrap-rbd/ceph.keyring"]}, "changed": false}

TASK [ceph-mon : merge rbd bootstrap key to config and keys paths] *************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:17
Friday 21 September 2018 08:27:07 -0400 (0:00:00.098) 0:00:49.516 ****** 
ok: [controller-0] => {"ansible_facts": {"ceph_config_keys": ["/etc/ceph/ceph.client.admin.keyring", "/etc/ceph/ceph.mon.keyring", "/var/lib/ceph/bootstrap-osd/ceph.keyring", "/var/lib/ceph/bootstrap-rgw/ceph.keyring", "/var/lib/ceph/bootstrap-mds/ceph.keyring", "/var/lib/ceph/bootstrap-rbd/ceph.keyring"]}, "changed": false}

TASK [ceph-mon : stat for ceph config and keys] ********************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:22
Friday 21 September 2018 08:27:07 -0400 (0:00:00.097) 0:00:49.614 ****** 
ok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.client.admin.keyring) => {"changed": false, "failed_when_result": false, "item": "/etc/ceph/ceph.client.admin.keyring", "stat": {"exists": false}}
ok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.mon.keyring) => {"changed": false, "failed_when_result": false, "item": "/etc/ceph/ceph.mon.keyring", "stat": {"exists": false}}
ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) => {"changed": false, "failed_when_result": false, "item": "/var/lib/ceph/bootstrap-osd/ceph.keyring", "stat": {"exists": false}}
ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) => {"changed": false, "failed_when_result": false, "item": "/var/lib/ceph/bootstrap-rgw/ceph.keyring", "stat": {"exists": false}}
ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring) => {"changed": false, "failed_when_result": false, "item": "/var/lib/ceph/bootstrap-mds/ceph.keyring", "stat": {"exists": false}}
ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) => {"changed": false, "failed_when_result": false, "item": "/var/lib/ceph/bootstrap-rbd/ceph.keyring", "stat": {"exists": false}}
\"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//etc/ceph/ceph.client.admin.keyring\"}}, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/etc/ceph/ceph.mon.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/ceph.mon.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//etc/ceph/ceph.mon.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.mon.keyring'}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.mon.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/ceph.mon.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//etc/ceph/ceph.mon.keyring\"}}, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-osd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-osd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-osd/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-osd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-osd/ceph.keyring\"}}, \"item\": 
\"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-rgw/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-rgw/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-mds/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-mds/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-mds/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-mds/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-mds/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-mds/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"exists\": false}}], 
\"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-rbd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-rbd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : populate kv_store with default ceph.conf] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml:2\nFriday 21 September 2018 08:27:08 -0400 (0:00:00.174) 0:00:50.767 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : populate kv_store with custom ceph.conf] **********************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml:18\nFriday 21 September 2018 08:27:08 -0400 (0:00:00.070) 0:00:50.838 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : delete populate-kv-store docker] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml:36\nFriday 21 September 2018 08:27:08 -0400 (0:00:00.083) 0:00:50.921 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : generate systemd unit file] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml:43\nFriday 21 September 2018 08:27:08 -0400 (0:00:00.054) 0:00:50.976 ****** \nchanged: [controller-0] => {\"changed\": true, \"checksum\": \"b0ff5a5b5db5ad0a93c7412c072d8f645da2f45c\", \"dest\": \"/etc/systemd/system/ceph-mon@.service\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"f0817dd50b4c8f886584edd030bb3021\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:systemd_unit_file_t:s0\", \"size\": 887, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1537532828.92-103322074387440/source\", \"state\": 
\"file\", \"uid\": 0}\n\nTASK [ceph-mon : systemd start mon container] **********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml:54\nFriday 21 September 2018 08:27:09 -0400 (0:00:00.953) 0:00:51.930 ****** \nchanged: [controller-0] => {\"changed\": true, \"enabled\": true, \"name\": \"ceph-mon@controller-0\", \"state\": \"started\", \"status\": {\"ActiveEnterTimestampMonotonic\": \"0\", \"ActiveExitTimestampMonotonic\": \"0\", \"ActiveState\": \"inactive\", \"After\": \"systemd-journald.socket basic.target system-ceph\\\\x5cx2dmon.slice docker.service\", \"AllowIsolate\": \"no\", \"AmbientCapabilities\": \"0\", \"AssertResult\": \"no\", \"AssertTimestampMonotonic\": \"0\", \"Before\": \"shutdown.target\", \"BlockIOAccounting\": \"no\", \"BlockIOWeight\": \"18446744073709551615\", \"CPUAccounting\": \"no\", \"CPUQuotaPerSecUSec\": \"infinity\", \"CPUSchedulingPolicy\": \"0\", \"CPUSchedulingPriority\": \"0\", \"CPUSchedulingResetOnFork\": \"no\", \"CPUShares\": \"18446744073709551615\", \"CanIsolate\": \"no\", \"CanReload\": \"no\", \"CanStart\": \"yes\", \"CanStop\": \"yes\", \"CapabilityBoundingSet\": \"18446744073709551615\", \"ConditionResult\": \"no\", \"ConditionTimestampMonotonic\": \"0\", \"Conflicts\": \"shutdown.target\", \"ControlPID\": \"0\", \"DefaultDependencies\": \"yes\", \"Delegate\": \"no\", \"Description\": \"Ceph Monitor\", \"DevicePolicy\": \"auto\", \"EnvironmentFile\": \"/etc/environment (ignore_errors=yes)\", \"ExecMainCode\": \"0\", \"ExecMainExitTimestampMonotonic\": \"0\", \"ExecMainPID\": \"0\", \"ExecMainStartTimestampMonotonic\": \"0\", \"ExecMainStatus\": \"0\", \"ExecStart\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker run --rm --name ceph-mon-%i --memory=3g --cpu-quota=100000 -v /var/lib/ceph:/var/lib/ceph:z -v /etc/ceph:/etc/ceph:z -v /var/run/ceph:/var/run/ceph:z -v /etc/localtime:/etc/localtime:ro --net=host -e IP_VERSION=4 -e MON_IP=172.17.3.16 -e CLUSTER=ceph -e FSID=8fedf068-bd95-11e8-ba69-5254006eda59 -e CEPH_PUBLIC_NETWORK=172.17.3.0/24 -e CEPH_DAEMON=MON 192.168.24.1:8787/rhceph:3-12 ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStartPre\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker rm ceph-mon-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStop\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker stop ceph-mon-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStopPost\": \"{ path=/bin/rm ; argv[]=/bin/rm -f /var/run/ceph/ceph-mon.controller-0.asok ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"FailureAction\": \"none\", \"FileDescriptorStoreMax\": \"0\", \"FragmentPath\": \"/etc/systemd/system/ceph-mon@.service\", \"GuessMainPID\": \"yes\", \"IOScheduling\": \"0\", \"Id\": \"ceph-mon@controller-0.service\", \"IgnoreOnIsolate\": \"no\", \"IgnoreOnSnapshot\": \"no\", \"IgnoreSIGPIPE\": \"yes\", \"InactiveEnterTimestampMonotonic\": \"0\", \"InactiveExitTimestampMonotonic\": \"0\", \"JobTimeoutAction\": \"none\", \"JobTimeoutUSec\": \"0\", \"KillMode\": \"control-group\", \"KillSignal\": \"15\", \"LimitAS\": \"18446744073709551615\", \"LimitCORE\": \"18446744073709551615\", \"LimitCPU\": \"18446744073709551615\", \"LimitDATA\": \"18446744073709551615\", \"LimitFSIZE\": \"18446744073709551615\", \"LimitLOCKS\": \"18446744073709551615\", \"LimitMEMLOCK\": 
\"65536\", \"LimitMSGQUEUE\": \"819200\", \"LimitNICE\": \"0\", \"LimitNOFILE\": \"4096\", \"LimitNPROC\": \"127798\", \"LimitRSS\": \"18446744073709551615\", \"LimitRTPRIO\": \"0\", \"LimitRTTIME\": \"18446744073709551615\", \"LimitSIGPENDING\": \"127798\", \"LimitSTACK\": \"18446744073709551615\", \"LoadState\": \"loaded\", \"MainPID\": \"0\", \"MemoryAccounting\": \"no\", \"MemoryCurrent\": \"18446744073709551615\", \"MemoryLimit\": \"18446744073709551615\", \"MountFlags\": \"0\", \"Names\": \"ceph-mon@controller-0.service\", \"NeedDaemonReload\": \"no\", \"Nice\": \"0\", \"NoNewPrivileges\": \"no\", \"NonBlocking\": \"no\", \"NotifyAccess\": \"none\", \"OOMScoreAdjust\": \"0\", \"OnFailureJobMode\": \"replace\", \"PermissionsStartOnly\": \"no\", \"PrivateDevices\": \"no\", \"PrivateNetwork\": \"no\", \"PrivateTmp\": \"no\", \"ProtectHome\": \"no\", \"ProtectSystem\": \"no\", \"RefuseManualStart\": \"no\", \"RefuseManualStop\": \"no\", \"RemainAfterExit\": \"no\", \"Requires\": \"basic.target\", \"Restart\": \"always\", \"RestartUSec\": \"10s\", \"Result\": \"success\", \"RootDirectoryStartOnly\": \"no\", \"RuntimeDirectoryMode\": \"0755\", \"SameProcessGroup\": \"no\", \"SecureBits\": \"0\", \"SendSIGHUP\": \"no\", \"SendSIGKILL\": \"yes\", \"Slice\": \"system-ceph\\\\x5cx2dmon.slice\", \"StandardError\": \"inherit\", \"StandardInput\": \"null\", \"StandardOutput\": \"journal\", \"StartLimitAction\": \"none\", \"StartLimitBurst\": \"5\", \"StartLimitInterval\": \"10000000\", \"StartupBlockIOWeight\": \"18446744073709551615\", \"StartupCPUShares\": \"18446744073709551615\", \"StatusErrno\": \"0\", \"StopWhenUnneeded\": \"no\", \"SubState\": \"dead\", \"SyslogLevelPrefix\": \"yes\", \"SyslogPriority\": \"30\", \"SystemCallErrorNumber\": \"0\", \"TTYReset\": \"no\", \"TTYVHangup\": \"no\", \"TTYVTDisallocate\": \"no\", \"TasksAccounting\": \"no\", \"TasksCurrent\": \"18446744073709551615\", \"TasksMax\": \"18446744073709551615\", \"TimeoutStartUSec\": \"2min\", \"TimeoutStopUSec\": \"15s\", \"TimerSlackNSec\": \"50000\", \"Transient\": \"no\", \"Type\": \"simple\", \"UMask\": \"0022\", \"UnitFilePreset\": \"disabled\", \"UnitFileState\": \"disabled\", \"Wants\": \"system-ceph\\\\x5cx2dmon.slice\", \"WatchdogTimestampMonotonic\": \"0\", \"WatchdogUSec\": \"0\"}}\n\nTASK [ceph-mon : configure ceph profile.d aliases] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/configure_ceph_command_aliases.yml:2\nFriday 21 September 2018 08:27:10 -0400 (0:00:00.706) 0:00:52.636 ****** \nchanged: [controller-0] => {\"changed\": true, \"checksum\": \"78965c7dfcde4827c1cb8645bc7a444472e87718\", \"dest\": \"/etc/profile.d/ceph-aliases.sh\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"66a9bfe5c26a22ade3c67cc7c7a58d2c\", \"mode\": \"0755\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:bin_t:s0\", \"size\": 375, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1537532830.57-111847234573165/source\", \"state\": \"file\", \"uid\": 0}\n\nTASK [ceph-mon : wait for monitor socket to exist] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:12\nFriday 21 September 2018 08:27:11 -0400 (0:00:00.535) 0:00:53.172 ****** \nFAILED - RETRYING: wait for monitor socket to exist (5 retries left).\nchanged: [controller-0] => {\"attempts\": 2, \"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"sh\", \"-c\", \"stat /var/run/ceph/ceph-mon.controller-0.asok || stat 

TASK [ceph-mon : configure ceph profile.d aliases] *****************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/configure_ceph_command_aliases.yml:2
Friday 21 September 2018 08:27:10 -0400 (0:00:00.706) 0:00:52.636 ****** 
changed: [controller-0] => {"changed": true, "checksum": "78965c7dfcde4827c1cb8645bc7a444472e87718", "dest": "/etc/profile.d/ceph-aliases.sh", "gid": 0, "group": "root", "md5sum": "66a9bfe5c26a22ade3c67cc7c7a58d2c", "mode": "0755", "owner": "root", "secontext": "system_u:object_r:bin_t:s0", "size": 375, "src": "/tmp/ceph_ansible_tmp/ansible-tmp-1537532830.57-111847234573165/source", "state": "file", "uid": 0}

TASK [ceph-mon : wait for monitor socket to exist] *****************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:12
Friday 21 September 2018 08:27:11 -0400 (0:00:00.535) 0:00:53.172 ****** 
FAILED - RETRYING: wait for monitor socket to exist (5 retries left).
changed: [controller-0] => {"attempts": 2, "changed": true, "cmd": ["docker", "exec", "ceph-mon-controller-0", "sh", "-c", "stat /var/run/ceph/ceph-mon.controller-0.asok || stat /var/run/ceph/ceph-mon.controller-0.localdomain.asok"], "delta": "0:00:00.083032", "end": "2018-09-21 12:27:26.587724", "rc": 0, "start": "2018-09-21 12:27:26.504692", "stderr": "", "stderr_lines": [], "stdout": " File: '/var/run/ceph/ceph-mon.controller-0.asok'\n Size: 0 \tBlocks: 0 IO Block: 4096 socket\nDevice: 14h/20d\tInode: 382696 Links: 1\nAccess: (0755/srwxr-xr-x) Uid: ( 167/ ceph) Gid: ( 167/ ceph)\nAccess: 2018-09-21 12:27:11.506181792 +0000\nModify: 2018-09-21 12:27:11.506181792 +0000\nChange: 2018-09-21 12:27:11.506181792 +0000\n Birth: -", "stdout_lines": [" File: '/var/run/ceph/ceph-mon.controller-0.asok'", " Size: 0 \tBlocks: 0 IO Block: 4096 socket", "Device: 14h/20d\tInode: 382696 Links: 1", "Access: (0755/srwxr-xr-x) Uid: ( 167/ ceph) Gid: ( 167/ ceph)", "Access: 2018-09-21 12:27:11.506181792 +0000", "Modify: 2018-09-21 12:27:11.506181792 +0000", "Change: 2018-09-21 12:27:11.506181792 +0000", " Birth: -"]}

TASK [ceph-mon : ipv4 - force peer addition as potential bootstrap peer for cluster bringup - monitor_interface] ***
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:19
Friday 21 September 2018 08:27:26 -0400 (0:00:15.581) 0:01:08.753 ****** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : ipv4 - force peer addition as potential bootstrap peer for cluster bringup - monitor_address] ***
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:29
Friday 21 September 2018 08:27:26 -0400 (0:00:00.099) 0:01:08.852 ****** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : ipv4 - force peer addition as potential bootstrap peer for cluster bringup - monitor_address_block] ***
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:39
Friday 21 September 2018 08:27:26 -0400 (0:00:00.095) 0:01:08.948 ****** 
ok: [controller-0] => {"changed": false, "cmd": ["docker", "exec", "ceph-mon-controller-0", "ceph", "--admin-daemon", "/var/run/ceph/ceph-mon.controller-0.asok", "add_bootstrap_peer_hint", "172.17.3.16"], "delta": "0:00:00.173006", "end": "2018-09-21 12:27:27.397223", "failed_when_result": false, "rc": 0, "start": "2018-09-21 12:27:27.224217", "stderr": "", "stderr_lines": [], "stdout": "mon already active; ignoring bootstrap hint", "stdout_lines": ["mon already active; ignoring bootstrap hint"]}
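The add_bootstrap_peer_hint call above goes through the monitor's admin socket rather than the normal client path, which only works because the socket configured in ceph.conf actually exists inside the container. The same socket can be exercised by hand when debugging; the two invocations below are illustrative examples (not taken from this log), reusing the container name and socket path the log reports:

    # List the commands this mon's admin socket accepts
    docker exec ceph-mon-controller-0 ceph --admin-daemon /var/run/ceph/ceph-mon.controller-0.asok help
    # Dump the monitor's own status (rank, quorum membership, monmap)
    docker exec ceph-mon-controller-0 ceph --admin-daemon /var/run/ceph/ceph-mon.controller-0.asok mon_status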

TASK [ceph-mon : ipv6 - force peer addition as potential bootstrap peer for cluster bringup - monitor_interface] ***
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:49
Friday 21 September 2018 08:27:27 -0400 (0:00:00.609) 0:01:09.558 ****** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : ipv6 - force peer addition as potential bootstrap peer for cluster bringup - monitor_address] ***
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:59
Friday 21 September 2018 08:27:27 -0400 (0:00:00.055) 0:01:09.613 ****** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : ipv6 - force peer addition as potential bootstrap peer for cluster bringup - monitor_address_block] ***
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:69
Friday 21 September 2018 08:27:27 -0400 (0:00:00.058) 0:01:09.672 ****** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : push ceph files to the ansible server] ************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/fetch_configs.yml:2
Friday 21 September 2018 08:27:27 -0400 (0:00:00.055) 0:01:09.727 ****** 
changed: [controller-0] => (item=[u'/etc/ceph/ceph.client.admin.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/ceph.client.admin.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//etc/ceph/ceph.client.admin.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.client.admin.keyring'}]) => {"changed": true, "checksum": "9e373fe5b7239c71b2c20b1e9dda563cef508b10", "dest": "/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59/etc/ceph/ceph.client.admin.keyring", "item": ["/etc/ceph/ceph.client.admin.keyring", {"_ansible_delegated_vars": {"ansible_delegated_host": "localhost", "ansible_host": "localhost"}, "_ansible_ignore_errors": null, "_ansible_item_label": "/etc/ceph/ceph.client.admin.keyring", "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "failed": false, "failed_when_result": false, "invocation": {"module_args": {"checksum_algorithm": "sha1", "follow": false, "get_attributes": true, "get_checksum": true, "get_md5": null, "get_mime": true, "path": "/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//etc/ceph/ceph.client.admin.keyring"}}, "item": "/etc/ceph/ceph.client.admin.keyring", "stat": {"exists": false}}], "md5sum": "9d6426f968161a2e99954092fe0fea79", "remote_checksum": "9e373fe5b7239c71b2c20b1e9dda563cef508b10", "remote_md5sum": null}
changed: [controller-0] => (item=[u'/etc/ceph/ceph.mon.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/ceph.mon.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//etc/ceph/ceph.mon.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.mon.keyring'}]) => {"changed": true, "checksum": "71985a44f030d17c775335c42962737bc688e6a0", "dest": "/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59/etc/ceph/ceph.mon.keyring", "item": ["/etc/ceph/ceph.mon.keyring", {"_ansible_delegated_vars": {"ansible_delegated_host": "localhost", "ansible_host": "localhost"}, "_ansible_ignore_errors": null, "_ansible_item_label": "/etc/ceph/ceph.mon.keyring", "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "failed": false, "failed_when_result": false, "invocation": {"module_args": {"checksum_algorithm": "sha1", "follow": false, "get_attributes": true, "get_checksum": true, "get_md5": null, "get_mime": true, "path": "/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//etc/ceph/ceph.mon.keyring"}}, "item": "/etc/ceph/ceph.mon.keyring", "stat": {"exists": false}}], "md5sum": "a5f024b9cde0ed26e54e699e93f2bf63", "remote_checksum": "71985a44f030d17c775335c42962737bc688e6a0", "remote_md5sum": null}
changed: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-osd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-osd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-osd/ceph.keyring'}]) => {"changed": true, "checksum": "64333848b27ab8d9f98e1749b646f53ce8491e92", "dest": "/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59/var/lib/ceph/bootstrap-osd/ceph.keyring", "item": ["/var/lib/ceph/bootstrap-osd/ceph.keyring", {"_ansible_delegated_vars": {"ansible_delegated_host": "localhost", "ansible_host": "localhost"}, "_ansible_ignore_errors": null, "_ansible_item_label": "/var/lib/ceph/bootstrap-osd/ceph.keyring", "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "failed": false, "failed_when_result": false, "invocation": {"module_args": {"checksum_algorithm": "sha1", "follow": false, "get_attributes": true, "get_checksum": true, "get_md5": null, "get_mime": true, "path": "/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-osd/ceph.keyring"}}, "item": "/var/lib/ceph/bootstrap-osd/ceph.keyring", "stat": {"exists": false}}], "md5sum": "d0dcfd5572ae39eb0ce251488182ec1b", "remote_checksum": "64333848b27ab8d9f98e1749b646f53ce8491e92", "remote_md5sum": null}
changed: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-rgw/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring'}]) => {"changed": true, "checksum": "ad253570a945c870140d7f94eccef76f44861e59", "dest": "/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59/var/lib/ceph/bootstrap-rgw/ceph.keyring", "item": ["/var/lib/ceph/bootstrap-rgw/ceph.keyring", {"_ansible_delegated_vars": {"ansible_delegated_host": "localhost", "ansible_host": "localhost"}, "_ansible_ignore_errors": null, "_ansible_item_label": "/var/lib/ceph/bootstrap-rgw/ceph.keyring", "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "failed": false, "failed_when_result": false, "invocation": {"module_args": {"checksum_algorithm": "sha1", "follow": false, "get_attributes": true, "get_checksum": true, "get_md5": null, "get_mime": true, "path": "/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-rgw/ceph.keyring"}}, "item": "/var/lib/ceph/bootstrap-rgw/ceph.keyring", "stat": {"exists": false}}], "md5sum": "8c235791382cb359fb6d7d3577b15f8c", "remote_checksum": "ad253570a945c870140d7f94eccef76f44861e59", "remote_md5sum": null}
changed: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-mds/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-mds/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-mds/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-mds/ceph.keyring'}]) => {"changed": true, "checksum": "40b83591ce4be64f55769e0a0d8aca12db95c281", "dest": "/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59/var/lib/ceph/bootstrap-mds/ceph.keyring", "item": ["/var/lib/ceph/bootstrap-mds/ceph.keyring", {"_ansible_delegated_vars": {"ansible_delegated_host": "localhost", "ansible_host": "localhost"}, "_ansible_ignore_errors": null, "_ansible_item_label": "/var/lib/ceph/bootstrap-mds/ceph.keyring", "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "failed": false, "failed_when_result": false, "invocation": {"module_args": {"checksum_algorithm": "sha1", "follow": false, "get_attributes": true, "get_checksum": true, "get_md5": null, "get_mime": true, "path": "/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-mds/ceph.keyring"}}, "item": "/var/lib/ceph/bootstrap-mds/ceph.keyring", "stat": {"exists": false}}], "md5sum": "316046afda2f2cbb417dd97b099d7be1", "remote_checksum": "40b83591ce4be64f55769e0a0d8aca12db95c281", "remote_md5sum": null}
changed: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-rbd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring'}]) => {"changed": true, "checksum": "cf7920e30e8d8566b8b9f935a5f741908c23465e", "dest": "/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59/var/lib/ceph/bootstrap-rbd/ceph.keyring", "item": ["/var/lib/ceph/bootstrap-rbd/ceph.keyring", {"_ansible_delegated_vars": {"ansible_delegated_host": "localhost", "ansible_host": "localhost"}, "_ansible_ignore_errors": null, "_ansible_item_label": "/var/lib/ceph/bootstrap-rbd/ceph.keyring", "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "failed": false, "failed_when_result": false, "invocation": {"module_args": {"checksum_algorithm": "sha1", "follow": false, "get_attributes": true, "get_checksum": true, "get_md5": null, "get_mime": true, "path": "/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-rbd/ceph.keyring"}}, "item": "/var/lib/ceph/bootstrap-rbd/ceph.keyring", "stat": {"exists": false}}], "md5sum": "babd454ca6e67b272f3dbad355f1a18d", "remote_checksum": "cf7920e30e8d8566b8b9f935a5f741908c23465e", "remote_md5sum": null}

TASK [ceph-mon : create ceph rest api keyring when mon is containerized] *******
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:84
Friday 21 September 2018 08:27:28 -0400 (0:00:01.366) 0:01:11.094 ****** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : create ceph mgr keyring(s) when mon is containerized] *********
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:97
Friday 21 September 2018 08:27:29 -0400 (0:00:00.054) 0:01:11.149 ****** 
ok: [controller-0] => (item=controller-0) => {"changed": false, "cmd": ["docker", "exec", "ceph-mon-controller-0", "ceph", "--cluster", "ceph", "auth", "get-or-create", "mgr.controller-0", "mon", "allow profile mgr", "osd", "allow *", "mds", "allow *", "-o", "/etc/ceph/ceph.mgr.controller-0.keyring"], "delta": "0:00:00.400066", "end": "2018-09-21 12:27:29.853798", "item": "controller-0", "rc": 0, "start": "2018-09-21 12:27:29.453732", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}

TASK [ceph-mon : stat for ceph mgr key(s)] *************************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:109
Friday 21 September 2018 08:27:29 -0400 (0:00:00.865) 0:01:12.014 ****** 
ok: [controller-0] => (item=controller-0) => {"changed": false, "failed_when_result": false, "item": "controller-0", "stat": {"atime": 1537532849.7104473, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "us-ascii", "checksum": "f02fcb991c5a53a3bf474c15b6a514c8356b9c69", "ctime": 1537532849.832449, "dev": 64514, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 50508107, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "text/plain", "mode": "0644", "mtime": 1537532849.832449, "nlink": 1, "path": "/etc/ceph/ceph.mgr.controller-0.keyring", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 67, "uid": 0, "version": "1817761372", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}}

TASK [ceph-mon : fetch ceph mgr key(s)] ****************************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:121
Friday 21 September 2018 08:27:30 -0400 (0:00:00.394) 0:01:12.409 ****** 
changed: [controller-0] => (item={'_ansible_parsed': True, u'stat': {u'charset': u'us-ascii', u'uid': 0, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1537532849.832449, u'block_size': 4096, u'inode': 50508107, u'isgid': False, u'size': 67, u'executable': False, u'roth': True, u'isuid': False, u'readable': True, u'version': u'1817761372', u'pw_name': u'root', u'gid': 0, u'ischr': False, u'wusr': True, u'writeable': True, u'isdir': False, u'blocks': 8, u'xoth': False, u'rusr': True, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'root', u'path': u'/etc/ceph/ceph.mgr.controller-0.keyring', u'xusr': False, u'atime': 1537532849.7104473, u'mimetype': u'text/plain', u'ctime': 1537532849.832449, u'isblk': False, u'checksum': u'f02fcb991c5a53a3bf474c15b6a514c8356b9c69', u'dev': 64514, u'wgrp': False, u'isfifo': False, u'mode': u'0644', u'xgrp': False, u'islnk': False, u'attributes': []}, '_ansible_item_result': True, '_ansible_no_log': False, 'failed': False, u'changed': False, 'item': u'controller-0', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/etc/ceph/ceph.mgr.controller-0.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'controller-0'}) => {"changed": true, "checksum": "f02fcb991c5a53a3bf474c15b6a514c8356b9c69", "dest": "/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59/etc/ceph/ceph.mgr.controller-0.keyring", "item": {"changed": false, "failed": false, "failed_when_result": false, "invocation": {"module_args": {"checksum_algorithm": "sha1", "follow": false, "get_attributes": true, "get_checksum": true, "get_md5": null, "get_mime": true, "path": "/etc/ceph/ceph.mgr.controller-0.keyring"}}, "item": "controller-0", "stat": {"atime": 1537532849.7104473, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "us-ascii", "checksum": "f02fcb991c5a53a3bf474c15b6a514c8356b9c69", "ctime": 1537532849.832449, "dev": 64514, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 50508107, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "text/plain", "mode": "0644", "mtime": 1537532849.832449, "nlink": 1, "path": "/etc/ceph/ceph.mgr.controller-0.keyring", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 67, "uid": 0, "version": "1817761372", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}}, "md5sum": "d7ba913d6ab2c770a0269d55efc01b88", "remote_checksum": "f02fcb991c5a53a3bf474c15b6a514c8356b9c69", "remote_md5sum": null}

TASK [ceph-mon : configure crush hierarchy] ************************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:2
Friday 21 September 2018 08:27:30 -0400 (0:00:00.410) 0:01:12.819 ****** 
skipping: [controller-0] => (item=ceph-0) => {"changed": false, "item": "ceph-0", "skip_reason": "Conditional result was False"}

TASK [ceph-mon : create configured crush rules] ********************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:14
Friday 21 September 2018 08:27:30 -0400 (0:00:00.060) 0:01:12.880 ****** 
skipping: [controller-0] => (item={u'default': False, u'root': u'HDD', u'type': u'host', u'name': u'HDD'}) => {"changed": false, "item": {"default": false, "name": "HDD", "root": "HDD", "type": "host"}, "skip_reason": "Conditional result was False"}
skipping: [controller-0] => (item={u'default': False, u'root': u'SSD', u'type': u'host', u'name': u'SSD'}) => {"changed": false, "item": {"default": false, "name": "SSD", "root": "SSD", "type": "host"}, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : get id for new default crush rule] ****************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:21
Friday 21 September 2018 08:27:30 -0400 (0:00:00.064) 0:01:12.945 ****** 
skipping: [controller-0] => (item={u'default': False, u'root': u'HDD', u'type': u'host', u'name': u'HDD'}) => {"changed": false, "item": {"default": false, "name": "HDD", "root": "HDD", "type": "host"}, "skip_reason": "Conditional result was False"}
skipping: [controller-0] => (item={u'default': False, u'root': u'SSD', u'type': u'host', u'name': u'SSD'}) => {"changed": false, "item": {"default": false, "name": "SSD", "root": "SSD", "type": "host"}, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : set_fact info_ceph_default_crush_rule_yaml] *******************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:33
Friday 21 September 2018 08:27:30 -0400 (0:00:00.067) 0:01:13.013 ****** 
skipping: [controller-0] => (item={'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': {u'default': False, u'type': u'host', u'root': u'HDD', u'name': u'HDD'}, 'changed': False, '_ansible_ignore_errors': None, '_ansible_item_label': {u'default': False, u'type': u'host', u'root': u'HDD', u'name': u'HDD'}}) => {"changed": false, "item": {"changed": false, "item": {"default": false, "name": "HDD", "root": "HDD", "type": "host"}, "skip_reason": "Conditional result was False", "skipped": true}, "skip_reason": "Conditional result was False"}
skipping: [controller-0] => (item={'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': {u'default': False, u'type': u'host', u'root': u'SSD', u'name': u'SSD'}, 'changed': False, '_ansible_ignore_errors': None, '_ansible_item_label': {u'default': False, u'type': u'host', u'root': u'SSD', u'name': u'SSD'}}) => {"changed": false, "item": {"changed": false, "item": {"default": false, "name": "SSD", "root": "SSD", "type": "host"}, "skip_reason": "Conditional result was False", "skipped": true}, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : set_fact osd_pool_default_crush_rule to osd_pool_default_crush_replicated_ruleset if release < luminous else osd_pool_default_crush_rule] ***
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:41
Friday 21 September 2018 08:27:30 -0400 (0:00:00.066) 0:01:13.079 ****** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : insert new default crush rule into daemon to prevent restart] ***
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:45
Friday 21 September 2018 08:27:31 -0400 (0:00:00.079) 0:01:13.158 ****** 
skipping: [controller-0] => (item=controller-0) => {"changed": false, "item": "controller-0", "skip_reason": "Conditional result was False"}

TASK [ceph-mon : add new default crush rule to ceph.conf] **********************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:54
Friday 21 September 2018 08:27:31 -0400 (0:00:00.168) 0:01:13.327 ****** 
skipping: [controller-0] => (item=controller-0) => {"changed": false, "item": "controller-0", "skip_reason": "Conditional result was False"}

TASK [ceph-mon : get default value for osd_pool_default_pg_num] ****************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/set_osd_pool_default_pg_num.yml:5
Friday 21 September 2018 08:27:31 -0400 (0:00:00.054) 0:01:13.382 ****** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : set_fact osd_pool_default_pg_num with pool_default_pg_num (backward compatibility)] ***
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/set_osd_pool_default_pg_num.yml:16
Friday 21 September 2018 08:27:31 -0400 (0:00:00.052) 0:01:13.434 ****** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : set_fact osd_pool_default_pg_num with default_pool_default_pg_num.stdout] ***
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/set_osd_pool_default_pg_num.yml:21
Friday 21 September 2018 08:27:31 -0400 (0:00:00.048) 0:01:13.483 ****** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : set_fact osd_pool_default_pg_num ceph_conf_overrides.global.osd_pool_default_pg_num] ***
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/set_osd_pool_default_pg_num.yml:27
Friday 21 September 2018 08:27:31 -0400 (0:00:00.044) 0:01:13.527 ****** 
ok: [controller-0] => {"ansible_facts": {"osd_pool_default_pg_num": "32"}, "changed": false}
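The "32" fact set above is read from ceph_conf_overrides.global.osd_pool_default_pg_num, per the task name, so the deployment presumably supplied an override along these lines (a sketch of the ceph-ansible variable structure, not taken from this log; in TripleO this structure is typically fed from the CephConfigOverrides heat parameter):

    ceph_conf_overrides:
      global:
        osd_pool_default_pg_num: 32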
0:01:13.663 ****** \nskipping: [controller-0] => (item=cthulhu) => {\"changed\": false, \"item\": \"cthulhu\", \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=calamari_web) => {\"changed\": false, \"item\": \"calamari_web\", \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : initialize the calamari server api] ***************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/calamari.yml:29\nFriday 21 September 2018 08:27:31 -0400 (0:00:00.053) 0:01:13.716 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _mon_handler_called before restart] *******\nFriday 21 September 2018 08:27:31 -0400 (0:00:00.017) 0:01:13.734 ****** \nok: [controller-0] => {\"ansible_facts\": {\"_mon_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy mon restart script] **********************\nFriday 21 September 2018 08:27:31 -0400 (0:00:00.073) 0:01:13.808 ****** \nchanged: [controller-0] => {\"changed\": true, \"checksum\": \"83f7af8323e264039a95f266faedb4a665c8f4ca\", \"dest\": \"/tmp/restart_mon_daemon.sh\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"a72fe8d7f7ff92960aa2e96a1b3fe152\", \"mode\": \"0750\", \"owner\": \"root\", \"secontext\": \"unconfined_u:object_r:user_tmp_t:s0\", \"size\": 1398, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1537532851.77-51911260257588/source\", \"state\": \"file\", \"uid\": 0}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mon daemon(s) - non container] ***\nFriday 21 September 2018 08:27:32 -0400 (0:00:00.544) 0:01:14.352 ****** \nskipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mon daemon(s) - container] *******\nFriday 21 September 2018 08:27:32 -0400 (0:00:00.094) 0:01:14.446 ****** \nskipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _mon_handler_called after restart] ********\nFriday 21 September 2018 08:27:32 -0400 (0:00:00.135) 0:01:14.582 ****** \nok: [controller-0] => {\"ansible_facts\": {\"_mon_handler_called\": false}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : set _osd_handler_called before restart] *******\nFriday 21 September 2018 08:27:32 -0400 (0:00:00.076) 0:01:14.658 ****** \nok: [controller-0] => {\"ansible_facts\": {\"_osd_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy osd restart script] **********************\nFriday 21 September 2018 08:27:32 -0400 (0:00:00.079) 0:01:14.737 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph osds daemon(s) - non container] ***\nFriday 21 September 2018 08:27:32 -0400 (0:00:00.052) 0:01:14.789 ****** \nskipping: [controller-0] => (item=ceph-0) => {\"changed\": false, \"item\": \"ceph-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph osds daemon(s) - container] ******\nFriday 21 September 2018 08:27:32 -0400 (0:00:00.088) 0:01:14.878 ****** \nskipping: [controller-0] => (item=ceph-0) => {\"changed\": false, \"item\": \"ceph-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : 
>RUNNING HANDLER [ceph-defaults : set _osd_handler_called after restart] ********
>Friday 21 September 2018 08:27:32 -0400 (0:00:00.089) 0:01:14.967 ******
>ok: [controller-0] => {"ansible_facts": {"_osd_handler_called": false}, "changed": false}
>
>RUNNING HANDLER [ceph-defaults : set _mds_handler_called before restart] *******
>Friday 21 September 2018 08:27:32 -0400 (0:00:00.072) 0:01:15.040 ******
>ok: [controller-0] => {"ansible_facts": {"_mds_handler_called": true}, "changed": false}
>
>RUNNING HANDLER [ceph-defaults : copy mds restart script] **********************
>Friday 21 September 2018 08:27:33 -0400 (0:00:00.074) 0:01:15.115 ******
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>RUNNING HANDLER [ceph-defaults : restart ceph mds daemon(s) - non container] ***
>Friday 21 September 2018 08:27:33 -0400 (0:00:00.049) 0:01:15.165 ******
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>RUNNING HANDLER [ceph-defaults : restart ceph mds daemon(s) - container] *******
>Friday 21 September 2018 08:27:33 -0400 (0:00:00.058) 0:01:15.223 ******
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>RUNNING HANDLER [ceph-defaults : set _mds_handler_called after restart] ********
>Friday 21 September 2018 08:27:33 -0400 (0:00:00.057) 0:01:15.281 ******
>ok: [controller-0] => {"ansible_facts": {"_mds_handler_called": false}, "changed": false}
>
>RUNNING HANDLER [ceph-defaults : set _rgw_handler_called before restart] *******
>Friday 21 September 2018 08:27:33 -0400 (0:00:00.073) 0:01:15.354 ******
>ok: [controller-0] => {"ansible_facts": {"_rgw_handler_called": true}, "changed": false}
>
>RUNNING HANDLER [ceph-defaults : copy rgw restart script] **********************
>Friday 21 September 2018 08:27:33 -0400 (0:00:00.078) 0:01:15.432 ******
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>RUNNING HANDLER [ceph-defaults : restart ceph rgw daemon(s) - non container] ***
>Friday 21 September 2018 08:27:33 -0400 (0:00:00.054) 0:01:15.487 ******
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>RUNNING HANDLER [ceph-defaults : restart ceph rgw daemon(s) - container] *******
>Friday 21 September 2018 08:27:33 -0400 (0:00:00.064) 0:01:15.551 ******
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>RUNNING HANDLER [ceph-defaults : set _rgw_handler_called after restart] ********
>Friday 21 September 2018 08:27:33 -0400 (0:00:00.062) 0:01:15.613 ******
>ok: [controller-0] => {"ansible_facts": {"_rgw_handler_called": false}, "changed": false}
>
>RUNNING HANDLER [ceph-defaults : set _rbdmirror_handler_called before restart] ***
>Friday 21 September 2018 08:27:33 -0400 (0:00:00.081) 0:01:15.695 ******
>ok: [controller-0] => {"ansible_facts": {"_rbdmirror_handler_called": true}, "changed": false}
>
>RUNNING HANDLER [ceph-defaults : copy rbd mirror restart script] ***************
>Friday 21 September 2018 08:27:33 -0400 (0:00:00.083) 0:01:15.778 ******
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>RUNNING HANDLER [ceph-defaults : restart ceph rbd mirror daemon(s) - non container] ***
>Friday 21 September 2018 08:27:33 -0400 (0:00:00.052) 0:01:15.831 ******
\"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph rbd mirror daemon(s) - container] ***\nFriday 21 September 2018 08:27:33 -0400 (0:00:00.068) 0:01:15.899 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _rbdmirror_handler_called after restart] ***\nFriday 21 September 2018 08:27:33 -0400 (0:00:00.064) 0:01:15.964 ****** \nok: [controller-0] => {\"ansible_facts\": {\"_rbdmirror_handler_called\": false}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : set _mgr_handler_called before restart] *******\nFriday 21 September 2018 08:27:33 -0400 (0:00:00.079) 0:01:16.044 ****** \nok: [controller-0] => {\"ansible_facts\": {\"_mgr_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy mgr restart script] **********************\nFriday 21 September 2018 08:27:34 -0400 (0:00:00.082) 0:01:16.126 ****** \nchanged: [controller-0] => {\"changed\": true, \"checksum\": \"3b92c07facdbaa789b36f850d92d7444e2bb6a27\", \"dest\": \"/tmp/restart_mgr_daemon.sh\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"73c8d33ad2b3c95d77ee4b411e06cae6\", \"mode\": \"0750\", \"owner\": \"root\", \"secontext\": \"unconfined_u:object_r:user_tmp_t:s0\", \"size\": 843, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1537532854.1-182050924307964/source\", \"state\": \"file\", \"uid\": 0}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - non container] ***\nFriday 21 September 2018 08:27:34 -0400 (0:00:00.532) 0:01:16.659 ****** \nskipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - container] *******\nFriday 21 September 2018 08:27:34 -0400 (0:00:00.100) 0:01:16.760 ****** \nskipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _mgr_handler_called after restart] ********\nFriday 21 September 2018 08:27:34 -0400 (0:00:00.149) 0:01:16.910 ****** \nok: [controller-0] => {\"ansible_facts\": {\"_mgr_handler_called\": false}, \"changed\": false}\nMETA: ran handlers\nMETA: ran handlers\n\nPLAY [mons] ********************************************************************\nMETA: ran handlers\n\nTASK [set ceph monitor install 'Complete'] *************************************\ntask path: /usr/share/ceph-ansible/site-docker.yml.sample:99\nFriday 21 September 2018 08:27:34 -0400 (0:00:00.121) 0:01:17.031 ****** \nok: [controller-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_mon\": {\"end\": \"20180921082734Z\", \"status\": \"Complete\"}}, \"per_host\": false}, \"changed\": false}\nMETA: ran handlers\nMETA: ran handlers\n\nPLAY [mgrs] ********************************************************************\n\nTASK [set ceph manager install 'In Progress'] **********************************\ntask path: /usr/share/ceph-ansible/site-docker.yml.sample:111\nFriday 21 September 2018 08:27:35 -0400 (0:00:00.174) 0:01:17.206 ****** \nok: [controller-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_mgr\": {\"start\": \"20180921082735Z\", \"status\": \"In Progress\"}}, \"per_host\": false}, \"changed\": false}\nMETA: ran handlers\n\nTASK [ceph-defaults : check for a mon container] 
>TASK [ceph-defaults : check for a mon container] *******************************
>task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:2
>Friday 21 September 2018 08:27:35 -0400 (0:00:00.094) 0:01:17.300 ******
>ok: [controller-0] => {"changed": false, "cmd": ["docker", "ps", "-q", "--filter=name=ceph-mon-controller-0"], "delta": "0:00:00.025051", "end": "2018-09-21 12:27:35.404077", "failed_when_result": false, "rc": 0, "start": "2018-09-21 12:27:35.379026", "stderr": "", "stderr_lines": [], "stdout": "509b79aaec28", "stdout_lines": ["509b79aaec28"]}
>
>TASK [ceph-defaults : check for an osd container] ******************************
>task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:11
>Friday 21 September 2018 08:27:35 -0400 (0:00:00.265) 0:01:17.566 ******
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [ceph-defaults : check for a mds container] *******************************
>task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:20
>Friday 21 September 2018 08:27:35 -0400 (0:00:00.053) 0:01:17.619 ******
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [ceph-defaults : check for a rgw container] *******************************
>task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:29
>Friday 21 September 2018 08:27:35 -0400 (0:00:00.052) 0:01:17.672 ******
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [ceph-defaults : check for a mgr container] *******************************
>task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:38
>Friday 21 September 2018 08:27:35 -0400 (0:00:00.053) 0:01:17.726 ******
>ok: [controller-0] => {"changed": false, "cmd": ["docker", "ps", "-q", "--filter=name=ceph-mgr-controller-0"], "delta": "0:00:00.023200", "end": "2018-09-21 12:27:35.820044", "failed_when_result": false, "rc": 0, "start": "2018-09-21 12:27:35.796844", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
>
>TASK [ceph-defaults : check for a rbd mirror container] ************************
>task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:47
>Friday 21 September 2018 08:27:35 -0400 (0:00:00.254) 0:01:17.980 ******
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [ceph-defaults : check for a nfs container] *******************************
>task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:56
>Friday 21 September 2018 08:27:35 -0400 (0:00:00.054) 0:01:18.035 ******
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [ceph-defaults : check for a ceph mon socket] *****************************
>task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:2
>Friday 21 September 2018 08:27:35 -0400 (0:00:00.050) 0:01:18.086 ******
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
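The container checks above share one pattern: run "docker ps -q --filter=name=...", never fail (the task records failed_when_result: false), and treat non-empty stdout as "running". A minimal sketch of the same probe:

    import subprocess

    def container_running(name):
        out = subprocess.run(
            ["docker", "ps", "-q", "--filter=name=%s" % name],
            capture_output=True, text=True,
        )
        # "509b79aaec28" for the mon container above, "" for the mgr container
        return out.stdout.strip() != ""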
>TASK [ceph-defaults : check if the ceph mon socket is in-use] ******************
>task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:11
>Friday 21 September 2018 08:27:36 -0400 (0:00:00.050) 0:01:18.137 ******
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [ceph-defaults : remove ceph mon socket if exists and not used by a process] ***
>task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:21
>Friday 21 September 2018 08:27:36 -0400 (0:00:00.052) 0:01:18.190 ******
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [ceph-defaults : check for a ceph osd socket] *****************************
>task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:30
>Friday 21 September 2018 08:27:36 -0400 (0:00:00.050) 0:01:18.240 ******
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [ceph-defaults : check if the ceph osd socket is in-use] ******************
>task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:40
>Friday 21 September 2018 08:27:36 -0400 (0:00:00.135) 0:01:18.376 ******
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [ceph-defaults : remove ceph osd socket if exists and not used by a process] ***
>task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:50
>Friday 21 September 2018 08:27:36 -0400 (0:00:00.050) 0:01:18.427 ******
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [ceph-defaults : check for a ceph mds socket] *****************************
>task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:59
>Friday 21 September 2018 08:27:36 -0400 (0:00:00.048) 0:01:18.475 ******
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [ceph-defaults : check if the ceph mds socket is in-use] ******************
>task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:69
>Friday 21 September 2018 08:27:36 -0400 (0:00:00.049) 0:01:18.524 ******
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [ceph-defaults : remove ceph mds socket if exists and not used by a process] ***
>task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:79
>Friday 21 September 2018 08:27:36 -0400 (0:00:00.047) 0:01:18.572 ******
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [ceph-defaults : check for a ceph rgw socket] *****************************
>task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:88
>Friday 21 September 2018 08:27:36 -0400 (0:00:00.051) 0:01:18.623 ******
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [ceph-defaults : check if the ceph rgw socket is in-use] ******************
>task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:98
>Friday 21 September 2018 08:27:36 -0400 (0:00:00.062) 0:01:18.685 ******
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
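These check_socket_non_container tasks (all skipped here, since this deployment is containerized) look for leftover admin sockets and remove them only when no process still holds them; the admin_socket setting in ceph.conf that this bug report concerns is what makes those sockets appear. An illustrative version of such a probe (the .asok path below is an example, not taken from this log):

    import os, socket

    def remove_if_stale(asok="/var/run/ceph/ceph-mon.controller-0.asok"):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(asok)               # a live daemon accepts the connection
        except OSError:
            if os.path.exists(asok):
                os.unlink(asok)           # stale socket: nothing is listening on it
        finally:
            s.close()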
>TASK [ceph-defaults : remove ceph rgw socket if exists and not used by a process] ***
>task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:108
>Friday 21 September 2018 08:27:36 -0400 (0:00:00.050) 0:01:18.736 ******
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [ceph-defaults : check for a ceph mgr socket] *****************************
>task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:117
>Friday 21 September 2018 08:27:36 -0400 (0:00:00.049) 0:01:18.785 ******
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [ceph-defaults : check if the ceph mgr socket is in-use] ******************
>task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:127
>Friday 21 September 2018 08:27:36 -0400 (0:00:00.051) 0:01:18.837 ******
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [ceph-defaults : remove ceph mgr socket if exists and not used by a process] ***
>task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:137
>Friday 21 September 2018 08:27:36 -0400 (0:00:00.047) 0:01:18.884 ******
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [ceph-defaults : check for a ceph rbd mirror socket] **********************
>task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:146
>Friday 21 September 2018 08:27:36 -0400 (0:00:00.050) 0:01:18.935 ******
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [ceph-defaults : check if the ceph rbd mirror socket is in-use] ***********
>task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:156
>Friday 21 September 2018 08:27:36 -0400 (0:00:00.052) 0:01:18.987 ******
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [ceph-defaults : remove ceph rbd mirror socket if exists and not used by a process] ***
>task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:166
>Friday 21 September 2018 08:27:36 -0400 (0:00:00.050) 0:01:19.037 ******
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [ceph-defaults : check for a ceph nfs ganesha socket] *********************
>task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:175
>Friday 21 September 2018 08:27:36 -0400 (0:00:00.052) 0:01:19.090 ******
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [ceph-defaults : check if the ceph nfs ganesha socket is in-use] **********
>task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:184
>Friday 21 September 2018 08:27:37 -0400 (0:00:00.048) 0:01:19.139 ******
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [ceph-defaults : remove ceph nfs ganesha socket if exists and not used by a process] ***
>task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:194
>Friday 21 September 2018 08:27:37 -0400 (0:00:00.048) 0:01:19.187 ******
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
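The next task stats a marker file to decide whether the host is an Atomic (OSTree-based) system; the log only shows the result (exists: false, so is_atomic: false). On such systems the conventional marker is /run/ostree-booted, which is an assumption here, not something the log states:

    import os

    # Hypothetical reconstruction of the "check if it is atomic host" probe below;
    # /run/ostree-booted is the usual OSTree marker, assumed rather than logged.
    is_atomic = os.path.exists("/run/ostree-booted")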
>TASK [ceph-defaults : check if it is atomic host] ******************************
>task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:2
>Friday 21 September 2018 08:27:37 -0400 (0:00:00.060) 0:01:19.248 ******
>ok: [controller-0] => {"changed": false, "stat": {"exists": false}}
>
>TASK [ceph-defaults : set_fact is_atomic] **************************************
>task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:7
>Friday 21 September 2018 08:27:37 -0400 (0:00:00.230) 0:01:19.479 ******
>ok: [controller-0] => {"ansible_facts": {"is_atomic": false}, "changed": false}
>
>TASK [ceph-defaults : set_fact monitor_name ansible_hostname] ******************
>task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:11
>Friday 21 September 2018 08:27:37 -0400 (0:00:00.083) 0:01:19.562 ******
>ok: [controller-0] => {"ansible_facts": {"monitor_name": "controller-0"}, "changed": false}
>
>TASK [ceph-defaults : set_fact monitor_name ansible_fqdn] **********************
>task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:17
>Friday 21 September 2018 08:27:37 -0400 (0:00:00.088) 0:01:19.650 ******
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [ceph-defaults : set_fact docker_exec_cmd] ********************************
>task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:23
>Friday 21 September 2018 08:27:37 -0400 (0:00:00.082) 0:01:19.732 ******
>ok: [controller-0 -> 192.168.24.18] => {"ansible_facts": {"docker_exec_cmd": "docker exec ceph-mon-controller-0"}, "changed": false}
>
>TASK [ceph-defaults : is ceph running already?] ********************************
>task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:34
>Friday 21 September 2018 08:27:37 -0400 (0:00:00.162) 0:01:19.894 ******
>ok: [controller-0 -> 192.168.24.18] => {"changed": false, "cmd": ["timeout", "5", "docker", "exec", "ceph-mon-controller-0", "ceph", "--cluster", "ceph", "-s", "-f", "json"], "delta": "0:00:00.318683", "end": "2018-09-21 12:27:38.306011", "failed_when_result": false, "rc": 0, "start": "2018-09-21 12:27:37.987328", "stderr": "", "stderr_lines": [], "stdout": "\n{\"fsid\":\"8fedf068-bd95-11e8-ba69-5254006eda59\",\"health\":{\"checks\":{},\"status\":\"HEALTH_OK\",\"summary\":[{\"severity\":\"HEALTH_WARN\",\"summary\":\"'ceph health' JSON format has changed in luminous. If you see this your monitoring system is scraping the wrong fields. Disable this with 'mon health preluminous compat warning = false'\"}],\"overall_status\":\"HEALTH_WARN\"},\"election_epoch\":3,\"quorum\":[0],\"quorum_names\":[\"controller-0\"],\"monmap\":{\"epoch\":1,\"fsid\":\"8fedf068-bd95-11e8-ba69-5254006eda59\",\"modified\":\"2018-09-21 12:27:11.445099\",\"created\":\"2018-09-21 12:27:11.445099\",\"features\":{\"persistent\":[\"kraken\",\"luminous\"],\"optional\":[]},\"mons\":[{\"rank\":0,\"name\":\"controller-0\",\"addr\":\"172.17.3.16:6789/0\",\"public_addr\":\"172.17.3.16:6789/0\"}]},\"osdmap\":{\"osdmap\":{\"epoch\":1,\"num_osds\":0,\"num_up_osds\":0,\"num_in_osds\":0,\"full\":false,\"nearfull\":false,\"num_remapped_pgs\":0}},\"pgmap\":{\"pgs_by_state\":[],\"num_pgs\":0,\"num_pools\":0,\"num_objects\":0,\"data_bytes\":0,\"bytes_used\":0,\"bytes_avail\":0,\"bytes_total\":0},\"fsmap\":{\"epoch\":1,\"by_rank\":[]},\"mgrmap\":{\"epoch\":1,\"active_gid\":0,\"active_name\":\"\",\"active_addr\":\"-\",\"available\":false,\"standbys\":[],\"modules\":[\"balancer\",\"restful\",\"status\"],\"available_modules\":[],\"services\":{}},\"servicemap\":{\"epoch\":1,\"modified\":\"0.000000\",\"services\":{}}}", "stdout_lines": ["", "{\"fsid\":\"8fedf068-bd95-11e8-ba69-5254006eda59\",\"health\":{\"checks\":{},\"status\":\"HEALTH_OK\",\"summary\":[{\"severity\":\"HEALTH_WARN\",\"summary\":\"'ceph health' JSON format has changed in luminous. If you see this your monitoring system is scraping the wrong fields. Disable this with 'mon health preluminous compat warning = false'\"}],\"overall_status\":\"HEALTH_WARN\"},\"election_epoch\":3,\"quorum\":[0],\"quorum_names\":[\"controller-0\"],\"monmap\":{\"epoch\":1,\"fsid\":\"8fedf068-bd95-11e8-ba69-5254006eda59\",\"modified\":\"2018-09-21 12:27:11.445099\",\"created\":\"2018-09-21 12:27:11.445099\",\"features\":{\"persistent\":[\"kraken\",\"luminous\"],\"optional\":[]},\"mons\":[{\"rank\":0,\"name\":\"controller-0\",\"addr\":\"172.17.3.16:6789/0\",\"public_addr\":\"172.17.3.16:6789/0\"}]},\"osdmap\":{\"osdmap\":{\"epoch\":1,\"num_osds\":0,\"num_up_osds\":0,\"num_in_osds\":0,\"full\":false,\"nearfull\":false,\"num_remapped_pgs\":0}},\"pgmap\":{\"pgs_by_state\":[],\"num_pgs\":0,\"num_pools\":0,\"num_objects\":0,\"data_bytes\":0,\"bytes_used\":0,\"bytes_avail\":0,\"bytes_total\":0},\"fsmap\":{\"epoch\":1,\"by_rank\":[]},\"mgrmap\":{\"epoch\":1,\"active_gid\":0,\"active_name\":\"\",\"active_addr\":\"-\",\"available\":false,\"standbys\":[],\"modules\":[\"balancer\",\"restful\",\"status\"],\"available_modules\":[],\"services\":{}},\"servicemap\":{\"epoch\":1,\"modified\":\"0.000000\",\"services\":{}}}"]}
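The "is ceph running already?" task above captures machine-readable cluster state by running ceph -s with JSON output inside the mon container; later tasks turn that stdout into the ceph_current_status fact and pull the fsid out of it. A sketch of the same extraction:

    import json, subprocess

    cmd = ["timeout", "5", "docker", "exec", "ceph-mon-controller-0",
           "ceph", "--cluster", "ceph", "-s", "-f", "json"]
    status = json.loads(subprocess.run(cmd, capture_output=True, text=True).stdout)
    fsid = status["fsid"]                 # "8fedf068-bd95-11e8-ba69-5254006eda59" in this log
    health = status["health"]["status"]   # "HEALTH_OK" (with a preluminous compat warning)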
>TASK [ceph-defaults : check if /var/lib/mistral/overcloud/ceph-ansible/fetch_dir directory exists] ***
>task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:47
>Friday 21 September 2018 08:27:38 -0400 (0:00:00.579) 0:01:20.474 ******
>ok: [controller-0 -> localhost] => {"changed": false, "stat": {"exists": false}}
>
>TASK [ceph-defaults : set_fact ceph_current_status rc 1] ***********************
>task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:57
>Friday 21 September 2018 08:27:38 -0400 (0:00:00.198) 0:01:20.673 ******
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [ceph-defaults : create a local fetch directory if it does not exist] *****
>task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:64
>Friday 21 September 2018 08:27:38 -0400 (0:00:00.056) 0:01:20.729 ******
>ok: [controller-0 -> localhost] => {"changed": false, "gid": 42430, "group": "mistral", "mode": "0755", "owner": "mistral", "path": "/var/lib/mistral/overcloud/ceph-ansible/fetch_dir", "size": 50, "state": "directory", "uid": 42430}
>
>TASK [ceph-defaults : set_fact ceph_current_status (convert to json)] **********
>task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:74
>Friday 21 September 2018 08:27:38 -0400 (0:00:00.203) 0:01:20.933 ******
>ok: [controller-0] => {"ansible_facts": {"ceph_current_status": {"election_epoch": 3, "fsid": "8fedf068-bd95-11e8-ba69-5254006eda59", "fsmap": {"by_rank": [], "epoch": 1}, "health": {"checks": {}, "overall_status": "HEALTH_WARN", "status": "HEALTH_OK", "summary": [{"severity": "HEALTH_WARN", "summary": "'ceph health' JSON format has changed in luminous. If you see this your monitoring system is scraping the wrong fields. Disable this with 'mon health preluminous compat warning = false'"}]}, "mgrmap": {"active_addr": "-", "active_gid": 0, "active_name": "", "available": false, "available_modules": [], "epoch": 1, "modules": ["balancer", "restful", "status"], "services": {}, "standbys": []}, "monmap": {"created": "2018-09-21 12:27:11.445099", "epoch": 1, "features": {"optional": [], "persistent": ["kraken", "luminous"]}, "fsid": "8fedf068-bd95-11e8-ba69-5254006eda59", "modified": "2018-09-21 12:27:11.445099", "mons": [{"addr": "172.17.3.16:6789/0", "name": "controller-0", "public_addr": "172.17.3.16:6789/0", "rank": 0}]}, "osdmap": {"osdmap": {"epoch": 1, "full": false, "nearfull": false, "num_in_osds": 0, "num_osds": 0, "num_remapped_pgs": 0, "num_up_osds": 0}}, "pgmap": {"bytes_avail": 0, "bytes_total": 0, "bytes_used": 0, "data_bytes": 0, "num_objects": 0, "num_pgs": 0, "num_pools": 0, "pgs_by_state": []}, "quorum": [0], "quorum_names": ["controller-0"], "servicemap": {"epoch": 1, "modified": "0.000000", "services": {}}}}, "changed": false}
>
>TASK [ceph-defaults : set_fact fsid from ceph_current_status] ******************
>task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:81
>Friday 21 September 2018 08:27:38 -0400 (0:00:00.097) 0:01:21.030 ******
>ok: [controller-0] => {"ansible_facts": {"fsid": "8fedf068-bd95-11e8-ba69-5254006eda59"}, "changed": false}
>
>TASK [ceph-defaults : set_fact ceph_release ceph_stable_release] ***************
>task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:88
>Friday 21 September 2018 08:27:39 -0400 (0:00:00.093) 0:01:21.124 ******
>ok: [controller-0] => {"ansible_facts": {"ceph_release": "dummy"}, "changed": false}
>TASK [ceph-defaults : generate cluster fsid] ***********************************
>task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:92
>Friday 21 September 2018 08:27:39 -0400 (0:00:00.098) 0:01:21.223 ******
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [ceph-defaults : reuse cluster fsid when cluster is already running] ******
>task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:103
>Friday 21 September 2018 08:27:39 -0400 (0:00:00.057) 0:01:21.281 ******
>changed: [controller-0 -> localhost] => {"changed": true, "cmd": "echo 8fedf068-bd95-11e8-ba69-5254006eda59 | tee /var/lib/mistral/overcloud/ceph-ansible/fetch_dir/ceph_cluster_uuid.conf", "delta": "0:00:00.644982", "end": "2018-09-21 08:27:39.969836", "rc": 0, "start": "2018-09-21 08:27:39.324854", "stderr": "", "stderr_lines": [], "stdout": "8fedf068-bd95-11e8-ba69-5254006eda59", "stdout_lines": ["8fedf068-bd95-11e8-ba69-5254006eda59"]}
>
>TASK [ceph-defaults : read cluster fsid if it already exists] ******************
>task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:112
>Friday 21 September 2018 08:27:40 -0400 (0:00:00.849) 0:01:22.130 ******
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [ceph-defaults : set_fact fsid] *******************************************
>task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:124
>Friday 21 September 2018 08:27:40 -0400 (0:00:00.061) 0:01:22.192 ******
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [ceph-defaults : set_fact mds_name ansible_hostname] **********************
>task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:130
>Friday 21 September 2018 08:27:40 -0400 (0:00:00.047) 0:01:22.240 ******
>ok: [controller-0] => {"ansible_facts": {"mds_name": "controller-0"}, "changed": false}
>
>TASK [ceph-defaults : set_fact mds_name ansible_fqdn] **************************
>task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:136
>Friday 21 September 2018 08:27:40 -0400 (0:00:00.078) 0:01:22.319 ******
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [ceph-defaults : set_fact rbd_client_directory_owner ceph] ****************
>task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:142
>Friday 21 September 2018 08:27:40 -0400 (0:00:00.046) 0:01:22.366 ******
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [ceph-defaults : set_fact rbd_client_directory_group rbd_client_directory_group] ***
>task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:149
>Friday 21 September 2018 08:27:40 -0400 (0:00:00.047) 0:01:22.413 ******
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [ceph-defaults : set_fact rbd_client_directory_mode 0770] *****************
>task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:156
>Friday 21 September 2018 08:27:40 -0400 (0:00:00.052) 0:01:22.466 ******
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
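Since a cluster is already running, the tasks above persist its fsid into the fetch directory on the undercloud (echo ... | tee .../ceph_cluster_uuid.conf) so subsequent runs reuse the same UUID instead of generating a fresh one. A small sketch of that persist/reuse behavior:

    import os

    UUID_FILE = "/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/ceph_cluster_uuid.conf"

    def persist_fsid(fsid, path=UUID_FILE):
        os.makedirs(os.path.dirname(path), exist_ok=True)
        with open(path, "w") as f:
            f.write(fsid + "\n")          # mirrors: echo <fsid> | tee <path>

    def load_fsid(path=UUID_FILE):
        with open(path) as f:
            return f.read().strip()       # "read cluster fsid if it already exists"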
result was False\"}\n\nTASK [ceph-defaults : set_fact build devices from resolved symlinks] ***********\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:173\nFriday 21 September 2018 08:27:40 -0400 (0:00:00.052) 0:01:22.572 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact build final devices list] ***********************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:182\nFriday 21 September 2018 08:27:40 -0400 (0:00:00.051) 0:01:22.623 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact ceph_uid for debian based system - non container] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:190\nFriday 21 September 2018 08:27:40 -0400 (0:00:00.051) 0:01:22.675 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact ceph_uid for red hat based system - non container] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:197\nFriday 21 September 2018 08:27:40 -0400 (0:00:00.054) 0:01:22.730 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact ceph_uid for debian based system - container] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:204\nFriday 21 September 2018 08:27:40 -0400 (0:00:00.069) 0:01:22.799 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact ceph_uid for red hat based system - container] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:211\nFriday 21 September 2018 08:27:40 -0400 (0:00:00.056) 0:01:22.856 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact ceph_uid for red hat] ***************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:218\nFriday 21 September 2018 08:27:40 -0400 (0:00:00.056) 0:01:22.913 ****** \nok: [controller-0] => {\"ansible_facts\": {\"ceph_uid\": 167}, \"changed\": false}\n\nTASK [ceph-defaults : set_fact rgw_hostname - fqdn] ****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:225\nFriday 21 September 2018 08:27:40 -0400 (0:00:00.087) 0:01:23.000 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact rgw_hostname - no fqdn] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:235\nFriday 21 September 2018 08:27:40 -0400 (0:00:00.053) 0:01:23.054 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact ceph_directories] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:2\nFriday 21 September 2018 08:27:40 -0400 (0:00:00.058) 0:01:23.112 ****** \nok: [controller-0] => {\"ansible_facts\": {\"ceph_directories\": [\"/etc/ceph\", \"/var/lib/ceph/\", \"/var/lib/ceph/mon\", \"/var/lib/ceph/osd\", \"/var/lib/ceph/mds\", \"/var/lib/ceph/tmp\", \"/var/lib/ceph/radosgw\", \"/var/lib/ceph/bootstrap-rgw\", 
\"/var/lib/ceph/bootstrap-mds\", \"/var/lib/ceph/bootstrap-osd\", \"/var/lib/ceph/bootstrap-rbd\", \"/var/run/ceph\"]}, \"changed\": false}\n\nTASK [ceph-defaults : create ceph initial directories] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:18\nFriday 21 September 2018 08:27:41 -0400 (0:00:00.185) 0:01:23.298 ****** \nok: [controller-0] => (item=/etc/ceph) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 117, \"state\": \"directory\", \"uid\": 167}\nok: [controller-0] => (item=/var/lib/ceph/) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 160, \"state\": \"directory\", \"uid\": 167}\nok: [controller-0] => (item=/var/lib/ceph/mon) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/mon\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mon\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 31, \"state\": \"directory\", \"uid\": 167}\nok: [controller-0] => (item=/var/lib/ceph/osd) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/osd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nok: [controller-0] => (item=/var/lib/ceph/mds) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/mds\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 31, \"state\": \"directory\", \"uid\": 167}\nok: [controller-0] => (item=/var/lib/ceph/tmp) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/tmp\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/tmp\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 28, \"state\": \"directory\", \"uid\": 167}\nok: [controller-0] => (item=/var/lib/ceph/radosgw) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/radosgw\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/radosgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 35, \"state\": \"directory\", \"uid\": 167}\nok: [controller-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 167}\nok: [controller-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 167}\nok: [controller-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": 
\"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 167}\nok: [controller-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 167}\nok: [controller-0] => (item=/var/run/ceph) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/run/ceph\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/run/ceph\", \"secontext\": \"unconfined_u:object_r:var_run_t:s0\", \"size\": 60, \"state\": \"directory\", \"uid\": 167}\n\nTASK [ceph-docker-common : fail if systemd is not present] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml:2\nFriday 21 September 2018 08:27:43 -0400 (0:00:02.199) 0:01:25.497 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : make sure monitor_interface, monitor_address or monitor_address_block is defined] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:2\nFriday 21 September 2018 08:27:43 -0400 (0:00:00.052) 0:01:25.549 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : make sure radosgw_interface, radosgw_address or radosgw_address_block is defined] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:11\nFriday 21 September 2018 08:27:43 -0400 (0:00:00.061) 0:01:25.611 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : warning deprecation for fqdn configuration] *********\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:20\nFriday 21 September 2018 08:27:43 -0400 (0:00:00.049) 0:01:25.660 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : remove ceph udev rules] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml:2\nFriday 21 September 2018 08:27:43 -0400 (0:00:00.048) 0:01:25.709 ****** \nok: [controller-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) => {\"changed\": false, \"item\": \"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"path\": \"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"state\": \"absent\"}\nok: [controller-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) => {\"changed\": false, \"item\": \"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"path\": \"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"state\": \"absent\"}\n\nTASK [ceph-docker-common : set_fact monitor_name ansible_hostname] *************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:14\nFriday 21 September 2018 08:27:44 -0400 (0:00:00.526) 0:01:26.236 ****** \nok: [controller-0] => {\"ansible_facts\": {\"monitor_name\": \"controller-0\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact monitor_name ansible_fqdn] *****************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:20\nFriday 21 
>TASK [ceph-docker-common : set_fact monitor_name ansible_fqdn] *****************
>task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:20
>Friday 21 September 2018 08:27:44 -0400 (0:00:00.085) 0:01:26.321 ******
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [ceph-docker-common : get docker version] *********************************
>task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:26
>Friday 21 September 2018 08:27:44 -0400 (0:00:00.057) 0:01:26.379 ******
>ok: [controller-0] => {"changed": false, "cmd": ["docker", "--version"], "delta": "0:00:00.025049", "end": "2018-09-21 12:27:44.481276", "rc": 0, "start": "2018-09-21 12:27:44.456227", "stderr": "", "stderr_lines": [], "stdout": "Docker version 1.13.1, build 6e3bb8e/1.13.1", "stdout_lines": ["Docker version 1.13.1, build 6e3bb8e/1.13.1"]}
>
>TASK [ceph-docker-common : set_fact ceph_docker_version ceph_docker_version.stdout.split] ***
>task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:32
>Friday 21 September 2018 08:27:44 -0400 (0:00:00.262) 0:01:26.641 ******
>ok: [controller-0] => {"ansible_facts": {"ceph_docker_version": "1.13.1,"}, "changed": false}
>
>TASK [ceph-docker-common : check if a cluster is already running] **************
>task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:42
>Friday 21 September 2018 08:27:44 -0400 (0:00:00.087) 0:01:26.728 ******
>ok: [controller-0] => {"changed": false, "cmd": ["docker", "ps", "-q", "--filter=name=ceph-mon-controller-0"], "delta": "0:00:00.026405", "end": "2018-09-21 12:27:44.827938", "failed_when_result": false, "rc": 0, "start": "2018-09-21 12:27:44.801533", "stderr": "", "stderr_lines": [], "stdout": "509b79aaec28", "stdout_lines": ["509b79aaec28"]}
>
>TASK [ceph-docker-common : set_fact ceph_config_keys] **************************
>task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:2
>Friday 21 September 2018 08:27:44 -0400 (0:00:00.257) 0:01:26.985 ******
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [ceph-docker-common : set_fact tmp_ceph_mgr_keys add mgr keys to config and keys paths] ***
>task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:13
>Friday 21 September 2018 08:27:44 -0400 (0:00:00.056) 0:01:27.042 ******
>skipping: [controller-0] => (item=controller-0) => {"changed": false, "item": "controller-0", "skip_reason": "Conditional result was False"}
>
>TASK [ceph-docker-common : set_fact ceph_mgr_keys convert mgr keys to an array] ***
>task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:20
>Friday 21 September 2018 08:27:44 -0400 (0:00:00.068) 0:01:27.110 ******
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [ceph-docker-common : set_fact ceph_config_keys merge mgr keys to config and keys paths] ***
>task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:25
>Friday 21 September 2018 08:27:45 -0400 (0:00:00.058) 0:01:27.168 ******
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
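Note the fact captured above: ceph_docker_version becomes "1.13.1," with a trailing comma, because the third whitespace-separated token of "Docker version 1.13.1, build 6e3bb8e/1.13.1" keeps its punctuation. A stricter parse would strip it, e.g.:

    import subprocess

    out = subprocess.run(["docker", "--version"], capture_output=True, text=True).stdout
    version = out.split()[2].rstrip(",")   # "1.13.1" instead of "1.13.1,"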
>TASK [ceph-docker-common : stat for ceph config and keys] **********************
>task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:30
>Friday 21 September 2018 08:27:45 -0400 (0:00:00.073) 0:01:27.242 ******
>skipping: [controller-0] => (item=/etc/ceph/ceph.client.admin.keyring) => {"changed": false, "item": "/etc/ceph/ceph.client.admin.keyring", "skip_reason": "Conditional result was False"}
>skipping: [controller-0] => (item=/etc/ceph/ceph.mon.keyring) => {"changed": false, "item": "/etc/ceph/ceph.mon.keyring", "skip_reason": "Conditional result was False"}
>skipping: [controller-0] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) => {"changed": false, "item": "/var/lib/ceph/bootstrap-osd/ceph.keyring", "skip_reason": "Conditional result was False"}
>skipping: [controller-0] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) => {"changed": false, "item": "/var/lib/ceph/bootstrap-rgw/ceph.keyring", "skip_reason": "Conditional result was False"}
>skipping: [controller-0] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring) => {"changed": false, "item": "/var/lib/ceph/bootstrap-mds/ceph.keyring", "skip_reason": "Conditional result was False"}
>skipping: [controller-0] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) => {"changed": false, "item": "/var/lib/ceph/bootstrap-rbd/ceph.keyring", "skip_reason": "Conditional result was False"}
>
>TASK [ceph-docker-common : fail if we find existing cluster files] *************
>task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml:5
>Friday 21 September 2018 08:27:45 -0400 (0:00:00.136) 0:01:27.379 ******
>skipping: [controller-0] => (item=[u'/etc/ceph/ceph.client.admin.keyring', {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.client.admin.keyring', 'changed': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.client.admin.keyring'}]) => {"changed": false, "item": ["/etc/ceph/ceph.client.admin.keyring", {"_ansible_ignore_errors": null, "_ansible_item_label": "/etc/ceph/ceph.client.admin.keyring", "_ansible_item_result": true, "_ansible_no_log": false, "changed": false, "item": "/etc/ceph/ceph.client.admin.keyring", "skip_reason": "Conditional result was False", "skipped": true}], "skip_reason": "Conditional result was False"}
>skipping: [controller-0] => (item=[u'/etc/ceph/ceph.mon.keyring', {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.mon.keyring', 'changed': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.mon.keyring'}]) => {"changed": false, "item": ["/etc/ceph/ceph.mon.keyring", {"_ansible_ignore_errors": null, "_ansible_item_label": "/etc/ceph/ceph.mon.keyring", "_ansible_item_result": true, "_ansible_no_log": false, "changed": false, "item": "/etc/ceph/ceph.mon.keyring", "skip_reason": "Conditional result was False", "skipped": true}], "skip_reason": "Conditional result was False"}
>skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-osd/ceph.keyring', {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', 'changed': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-osd/ceph.keyring'}]) => {"changed": false, "item": ["/var/lib/ceph/bootstrap-osd/ceph.keyring", {"_ansible_ignore_errors": null, "_ansible_item_label": "/var/lib/ceph/bootstrap-osd/ceph.keyring", "_ansible_item_result": true, "_ansible_no_log": false, "changed": false, "item": "/var/lib/ceph/bootstrap-osd/ceph.keyring", "skip_reason": "Conditional result was False", "skipped": true}], "skip_reason": "Conditional result was False"}
>skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', 'changed': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring'}]) => {"changed": false, "item": ["/var/lib/ceph/bootstrap-rgw/ceph.keyring", {"_ansible_ignore_errors": null, "_ansible_item_label": "/var/lib/ceph/bootstrap-rgw/ceph.keyring", "_ansible_item_result": true, "_ansible_no_log": false, "changed": false, "item": "/var/lib/ceph/bootstrap-rgw/ceph.keyring", "skip_reason": "Conditional result was False", "skipped": true}], "skip_reason": "Conditional result was False"}
>skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-mds/ceph.keyring', {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-mds/ceph.keyring', 'changed': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-mds/ceph.keyring'}]) => {"changed": false, "item": ["/var/lib/ceph/bootstrap-mds/ceph.keyring", {"_ansible_ignore_errors": null, "_ansible_item_label": "/var/lib/ceph/bootstrap-mds/ceph.keyring", "_ansible_item_result": true, "_ansible_no_log": false, "changed": false, "item": "/var/lib/ceph/bootstrap-mds/ceph.keyring", "skip_reason": "Conditional result was False", "skipped": true}], "skip_reason": "Conditional result was False"}
>skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', 'changed': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring'}]) => {"changed": false, "item": ["/var/lib/ceph/bootstrap-rbd/ceph.keyring", {"_ansible_ignore_errors": null, "_ansible_item_label": "/var/lib/ceph/bootstrap-rbd/ceph.keyring", "_ansible_item_result": true, "_ansible_no_log": false, "changed": false, "item": "/var/lib/ceph/bootstrap-rbd/ceph.keyring", "skip_reason": "Conditional result was False", "skipped": true}], "skip_reason": "Conditional result was False"}
>
>TASK [ceph-docker-common : check ntp installation on atomic] *******************
>task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml:2
>Friday 21 September 2018 08:27:45 -0400 (0:00:00.148) 0:01:27.528 ******
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [ceph-docker-common : start the ntp service] ******************************
>task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml:6
>Friday 21 September 2018 08:27:45 -0400 (0:00:00.048) 0:01:27.577 ******
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>TASK [ceph-docker-common : check ntp installation on redhat or suse] ***********
>task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:2
>Friday 21 September 2018 08:27:45 -0400 (0:00:00.048) 0:01:27.625 ******
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [ceph-docker-common : install ntp on redhat or suse] **********************
>task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:13
>Friday 21 September 2018 08:27:45 -0400 (0:00:00.064) 0:01:27.689 ******
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [ceph-docker-common : start the ntp service] ******************************
>task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml:7
>Friday 21 September 2018 08:27:45 -0400 (0:00:00.057) 0:01:27.746 ******
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [ceph-docker-common : check ntp installation on debian] *******************
>task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:2
>Friday 21 September 2018 08:27:45 -0400 (0:00:00.055) 0:01:27.802 ******
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [ceph-docker-common : install ntp on debian] ******************************
>task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:11
>Friday 21 September 2018 08:27:45 -0400 (0:00:00.049) 0:01:27.852 ******
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [ceph-docker-common : start the ntp service] ******************************
>task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml:7
>Friday 21 September 2018 08:27:45 -0400 (0:00:00.050) 0:01:27.902 ******
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [ceph-docker-common : inspect ceph mon container] *************************
>task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:3
>Friday 21 September 2018 08:27:45 -0400 (0:00:00.052) 0:01:27.955 ******
>ok: [controller-0] => {"changed": false, "cmd": ["docker", "inspect", "509b79aaec28"], "delta": "0:00:00.023674", "end": "2018-09-21 12:27:46.066714", "rc": 0, "start": "2018-09-21 12:27:46.043040", "stderr": "", "stderr_lines": [], "stdout": "[\n {\n \"Id\": \"509b79aaec283c68b35bed0f1d02686fa714f6bcce88661dadbdd26eed12504d\",\n \"Created\": \"2018-09-21T12:27:10.49665363Z\",\n \"Path\": \"/entrypoint.sh\",\n \"Args\": [],\n \"State\": {\n \"Status\": \"running\",\n \"Running\": true,\n \"Paused\": false,\n \"Restarting\": false,\n \"OOMKilled\": false,\n \"Dead\": false,\n \"Pid\": 44943,\n \"ExitCode\": 0,\n \"Error\": \"\",\n \"StartedAt\": \"2018-09-21T12:27:10.65506497Z\",\n \"FinishedAt\": \"0001-01-01T00:00:00Z\"\n },\n \"Image\": \"sha256:fa3b551f095247f90f7ace123cc14519b26abd4c042eb0270ab2452d9636a41f\",\n \"ResolvConfPath\": \"/var/lib/docker/containers/509b79aaec283c68b35bed0f1d02686fa714f6bcce88661dadbdd26eed12504d/resolv.conf\",\n \"HostnamePath\": \"/var/lib/docker/containers/509b79aaec283c68b35bed0f1d02686fa714f6bcce88661dadbdd26eed12504d/hostname\",\n \"HostsPath\": \"/var/lib/docker/containers/509b79aaec283c68b35bed0f1d02686fa714f6bcce88661dadbdd26eed12504d/hosts\",\n \"LogPath\": \"\",\n \"Name\": \"/ceph-mon-controller-0\",\n \"RestartCount\": 0,\n \"Driver\": \"overlay2\",\n \"MountLabel\": \"\",\n \"ProcessLabel\": \"\",\n \"AppArmorProfile\": \"\",\n \"ExecIDs\": null,\n \"HostConfig\": {\n \"Binds\": [\n \"/var/lib/ceph:/var/lib/ceph:z\",\n \"/etc/ceph:/etc/ceph:z\",\n \"/var/run/ceph:/var/run/ceph:z\",\n \"/etc/localtime:/etc/localtime:ro\"\n ],\n \"ContainerIDFile\": \"\",\n \"LogConfig\": {\n \"Type\": \"journald\",\n \"Config\": {}\n },\n \"NetworkMode\": \"host\",\n \"PortBindings\": {},\n \"RestartPolicy\": {\n \"Name\": \"no\",\n \"MaximumRetryCount\": 0\n },\n \"AutoRemove\": true,\n \"VolumeDriver\": \"\",\n \"VolumesFrom\": null,\n \"CapAdd\": null,\n \"CapDrop\": null,\n \"Dns\": [],\n \"DnsOptions\": [],\n \"DnsSearch\": [],\n \"ExtraHosts\": null,\n \"GroupAdd\": null,\n \"IpcMode\": \"\",\n \"Cgroup\": \"\",\n \"Links\": null,\n \"OomScoreAdj\": 0,\n \"PidMode\": \"\",\n \"Privileged\": false,\n \"PublishAllPorts\": false,\n \"ReadonlyRootfs\": false,\n \"SecurityOpt\": null,\n \"UTSMode\": \"\",\n \"UsernsMode\": \"\",\n \"ShmSize\": 67108864,\n \"Runtime\": \"docker-runc\",\n \"ConsoleSize\": [\n 0,\n 0\n ],\n \"Isolation\": \"\",\n \"CpuShares\": 0,\n \"Memory\": 3221225472,\n \"NanoCpus\": 0,\n \"CgroupParent\": \"\",\n \"BlkioWeight\": 0,\n \"BlkioWeightDevice\": null,\n \"BlkioDeviceReadBps\": null,\n \"BlkioDeviceWriteBps\": null,\n \"BlkioDeviceReadIOps\": null,\n \"BlkioDeviceWriteIOps\": null,\n \"CpuPeriod\": 0,\n \"CpuQuota\": 100000,\n \"CpuRealtimePeriod\": 0,\n \"CpuRealtimeRuntime\": 0,\n \"CpusetCpus\": \"\",\n \"CpusetMems\": \"\",\n \"Devices\": [],\n \"DiskQuota\": 0,\n \"KernelMemory\": 0,\n \"MemoryReservation\": 0,\n \"MemorySwap\": 6442450944,\n \"MemorySwappiness\": -1,\n \"OomKillDisable\": false,\n \"PidsLimit\": 0,\n \"Ulimits\": null,\n \"CpuCount\": 0,\n \"CpuPercent\": 0,\n \"IOMaximumIOps\": 0,\n \"IOMaximumBandwidth\": 0\n },\n \"GraphDriver\": {\n \"Name\": \"overlay2\",\n \"Data\": {\n \"LowerDir\": \"/var/lib/docker/overlay2/f75ba8cf619ccb639511052485b7795aa6ccc2ee3261bfe40f3db250e2bc173a-init/diff:/var/lib/docker/overlay2/647ac35ac16d9dfe175f07dd44786615796abec3ec2955371cd57b2bc31e071d/diff:/var/lib/docker/overlay2/72c93a89fa782b05dc2585f4d22d029cb845c66e38924914071ba886d94bed8c/diff:/var/lib/docker/overlay2/1c6053643a9c6bc0506bbea8ee537d1f921ebdc802eafc5cf82c4566e0c5bbd4/diff\",\n \"MergedDir\": \"/var/lib/docker/overlay2/f75ba8cf619ccb639511052485b7795aa6ccc2ee3261bfe40f3db250e2bc173a/merged\",\n \"UpperDir\": \"/var/lib/docker/overlay2/f75ba8cf619ccb639511052485b7795aa6ccc2ee3261bfe40f3db250e2bc173a/diff\",\n \"WorkDir\": \"/var/lib/docker/overlay2/f75ba8cf619ccb639511052485b7795aa6ccc2ee3261bfe40f3db250e2bc173a/work\"\n }\n },\n \"Mounts\": [\n {\n \"Type\": \"bind\",\n \"Source\": \"/var/lib/ceph\",\n \"Destination\": \"/var/lib/ceph\",\n \"Mode\": \"z\",\n \"RW\": true,\n \"Propagation\": \"rprivate\"\n },\n {\n \"Type\": \"bind\",\n \"Source\": \"/etc/ceph\",\n \"Destination\": \"/etc/ceph\",\n \"Mode\": \"z\",\n \"RW\": true,\n \"Propagation\": \"rprivate\"\n },\n {\n \"Type\": \"bind\",\n \"Source\": \"/var/run/ceph\",\n \"Destination\": \"/var/run/ceph\",\n \"Mode\": \"z\",\n \"RW\": true,\n \"Propagation\": \"rprivate\"\n },\n {\n \"Type\": \"bind\",\n \"Source\": \"/etc/localtime\",\n \"Destination\": \"/etc/localtime\",\n \"Mode\": \"ro\",\n \"RW\": false,\n \"Propagation\": \"rprivate\"\n }\n ],\n \"Config\": {\n \"Hostname\": \"controller-0\",\n \"Domainname\": \"\",\n \"User\": \"\",\n \"AttachStdin\": false,\n \"AttachStdout\": true,\n \"AttachStderr\": true,\n \"ExposedPorts\": {\n \"5000/tcp\": {},\n \"6789/tcp\": {},\n \"6800/tcp\": {},\n \"6801/tcp\": {},\n \"6802/tcp\": {},\n \"6803/tcp\": {},\n \"6804/tcp\": {},\n \"6805/tcp\": {},\n \"80/tcp\": {}\n },\n \"Tty\": false,\n \"OpenStdin\": false,\n \"StdinOnce\": false,\n \"Env\": [\n \"IP_VERSION=4\",\n \"MON_IP=172.17.3.16\",\n \"CLUSTER=ceph\",\n \"FSID=8fedf068-bd95-11e8-ba69-5254006eda59\",\n \"CEPH_PUBLIC_NETWORK=172.17.3.0/24\",\n \"CEPH_DAEMON=MON\",\n \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\",\n \"container=oci\",\n \"CEPH_VERSION=luminous\",\n \"CEPH_POINT_RELEASE=\"\n ],\n \"Cmd\": null,\n \"ArgsEscaped\": true,\n \"Image\": \"192.168.24.1:8787/rhceph:3-12\",\n \"Volumes\": null,\n \"WorkingDir\": \"/\",\n \"Entrypoint\": [\n \"/entrypoint.sh\"\n ],\n \"OnBuild\": null,\n \"Labels\": {\n \"CEPH_POINT_RELEASE\": \"\",\n \"GIT_BRANCH\": \"stable-3.0\",\n \"GIT_CLEAN\": \"True\",\n \"GIT_COMMIT\": \"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\",\n \"GIT_REPO\": \"git@github.com:ceph/ceph-container.git\",\n \"RELEASE\": \"stable-3.0\",\n \"architecture\": \"x86_64\",\n \"authoritative-source-url\": \"registry.access.redhat.com\",\n \"build-date\": \"2018-08-06T22:27:39.213799\",\n \"com.redhat.build-host\": \"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\",\n \"com.redhat.component\": \"rhceph-rhel7-container\",\n \"description\": \"Red Hat Ceph Storage 3\",\n \"distribution-scope\": \"public\",\n \"install\": \"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\",\n \"io.k8s.description\": \"Red Hat Ceph Storage 3\",\n \"io.k8s.display-name\": \"Red Hat Ceph Storage 3 on RHEL 7\",\n \"io.openshift.expose-services\": \"\",\n \"io.openshift.tags\": \"rhceph ceph\",\n \"maintainer\": \"Erwan Velu <evelu@redhat.com>\",\n \"name\": \"rhceph\",\n \"release\": \"12\",\n \"run\": \"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\",\n \"summary\": \"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\",\n \"url\": \"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\",\n \"usage\": \"This image is very generic and does not serve a single use case. Use it as a base to build your own images.\",\n \"vcs-ref\": \"ef3644dca4abfb12c35763dd708194bad06c2dc3\",\n \"vcs-type\": \"git\",\n \"vendor\": \"Red Hat, Inc.\",\n \"version\": \"3\"\n }\n },\n \"NetworkSettings\": {\n \"Bridge\": \"\",\n \"SandboxID\": \"b6f9d3e5ebf8f9e507a2edf513fec904d52c3ce4a9dc538930cf89c905ec467c\",\n \"HairpinMode\": false,\n \"LinkLocalIPv6Address\": \"\",\n \"LinkLocalIPv6PrefixLen\": 0,\n \"Ports\": {},\n \"SandboxKey\": \"/var/run/docker/netns/default\",\n \"SecondaryIPAddresses\": null,\n \"SecondaryIPv6Addresses\": null,\n \"EndpointID\": \"\",\n \"Gateway\": \"\",\n \"GlobalIPv6Address\": \"\",\n \"GlobalIPv6PrefixLen\": 0,\n \"IPAddress\": \"\",\n \"IPPrefixLen\": 0,\n \"IPv6Gateway\": \"\",\n \"MacAddress\": \"\",\n \"Networks\": {\n \"host\": {\n \"IPAMConfig\": null,\n \"Links\": null,\n \"Aliases\": null,\n \"NetworkID\": \"cf6351ff3e7cd6c1bb62a77ff0c2ac7bebe8ea9f0d1c52e85ce35ff40982ff70\",\n \"EndpointID\": \"218ed1d48845d795fcd4591f871de6fec3cd8468a7e5188e3eb509498b3a2407\",\n \"Gateway\": \"\",\n \"IPAddress\": \"\",\n \"IPPrefixLen\": 0,\n \"IPv6Gateway\": \"\",\n \"GlobalIPv6Address\": \"\",\n \"GlobalIPv6PrefixLen\": 0,\n \"MacAddress\": \"\"\n }\n }\n }\n }\n]", "stdout_lines": ["[", " {", " \"Id\": \"509b79aaec283c68b35bed0f1d02686fa714f6bcce88661dadbdd26eed12504d\",", " \"Created\": \"2018-09-21T12:27:10.49665363Z\",", " \"Path\": \"/entrypoint.sh\",", " \"Args\": [],", " \"State\": {", " \"Status\": \"running\",", " \"Running\": true,", " \"Paused\": false,", " \"Restarting\": false,", " \"OOMKilled\": false,", " \"Dead\": false,", " \"Pid\": 44943,", " \"ExitCode\": 0,", " \"Error\": \"\",", " \"StartedAt\": \"2018-09-21T12:27:10.65506497Z\",", " \"FinishedAt\": \"0001-01-01T00:00:00Z\"", " },", " \"Image\": \"sha256:fa3b551f095247f90f7ace123cc14519b26abd4c042eb0270ab2452d9636a41f\",", " \"ResolvConfPath\": \"/var/lib/docker/containers/509b79aaec283c68b35bed0f1d02686fa714f6bcce88661dadbdd26eed12504d/resolv.conf\",", " \"HostnamePath\": \"/var/lib/docker/containers/509b79aaec283c68b35bed0f1d02686fa714f6bcce88661dadbdd26eed12504d/hostname\",", " \"HostsPath\": \"/var/lib/docker/containers/509b79aaec283c68b35bed0f1d02686fa714f6bcce88661dadbdd26eed12504d/hosts\",", " \"LogPath\": \"\",", " \"Name\": \"/ceph-mon-controller-0\",", " \"RestartCount\": 0,", " \"Driver\": \"overlay2\",", " \"MountLabel\": \"\",", " \"ProcessLabel\": \"\",", " \"AppArmorProfile\": \"\",", " \"ExecIDs\": null,", " \"HostConfig\": {", " \"Binds\": [", " 
\\\"/var/lib/ceph:/var/lib/ceph:z\\\",\", \" \\\"/etc/ceph:/etc/ceph:z\\\",\", \" \\\"/var/run/ceph:/var/run/ceph:z\\\",\", \" \\\"/etc/localtime:/etc/localtime:ro\\\"\", \" ],\", \" \\\"ContainerIDFile\\\": \\\"\\\",\", \" \\\"LogConfig\\\": {\", \" \\\"Type\\\": \\\"journald\\\",\", \" \\\"Config\\\": {}\", \" },\", \" \\\"NetworkMode\\\": \\\"host\\\",\", \" \\\"PortBindings\\\": {},\", \" \\\"RestartPolicy\\\": {\", \" \\\"Name\\\": \\\"no\\\",\", \" \\\"MaximumRetryCount\\\": 0\", \" },\", \" \\\"AutoRemove\\\": true,\", \" \\\"VolumeDriver\\\": \\\"\\\",\", \" \\\"VolumesFrom\\\": null,\", \" \\\"CapAdd\\\": null,\", \" \\\"CapDrop\\\": null,\", \" \\\"Dns\\\": [],\", \" \\\"DnsOptions\\\": [],\", \" \\\"DnsSearch\\\": [],\", \" \\\"ExtraHosts\\\": null,\", \" \\\"GroupAdd\\\": null,\", \" \\\"IpcMode\\\": \\\"\\\",\", \" \\\"Cgroup\\\": \\\"\\\",\", \" \\\"Links\\\": null,\", \" \\\"OomScoreAdj\\\": 0,\", \" \\\"PidMode\\\": \\\"\\\",\", \" \\\"Privileged\\\": false,\", \" \\\"PublishAllPorts\\\": false,\", \" \\\"ReadonlyRootfs\\\": false,\", \" \\\"SecurityOpt\\\": null,\", \" \\\"UTSMode\\\": \\\"\\\",\", \" \\\"UsernsMode\\\": \\\"\\\",\", \" \\\"ShmSize\\\": 67108864,\", \" \\\"Runtime\\\": \\\"docker-runc\\\",\", \" \\\"ConsoleSize\\\": [\", \" 0,\", \" 0\", \" ],\", \" \\\"Isolation\\\": \\\"\\\",\", \" \\\"CpuShares\\\": 0,\", \" \\\"Memory\\\": 3221225472,\", \" \\\"NanoCpus\\\": 0,\", \" \\\"CgroupParent\\\": \\\"\\\",\", \" \\\"BlkioWeight\\\": 0,\", \" \\\"BlkioWeightDevice\\\": null,\", \" \\\"BlkioDeviceReadBps\\\": null,\", \" \\\"BlkioDeviceWriteBps\\\": null,\", \" \\\"BlkioDeviceReadIOps\\\": null,\", \" \\\"BlkioDeviceWriteIOps\\\": null,\", \" \\\"CpuPeriod\\\": 0,\", \" \\\"CpuQuota\\\": 100000,\", \" \\\"CpuRealtimePeriod\\\": 0,\", \" \\\"CpuRealtimeRuntime\\\": 0,\", \" \\\"CpusetCpus\\\": \\\"\\\",\", \" \\\"CpusetMems\\\": \\\"\\\",\", \" \\\"Devices\\\": [],\", \" \\\"DiskQuota\\\": 0,\", \" \\\"KernelMemory\\\": 0,\", \" \\\"MemoryReservation\\\": 0,\", \" \\\"MemorySwap\\\": 6442450944,\", \" \\\"MemorySwappiness\\\": -1,\", \" \\\"OomKillDisable\\\": false,\", \" \\\"PidsLimit\\\": 0,\", \" \\\"Ulimits\\\": null,\", \" \\\"CpuCount\\\": 0,\", \" \\\"CpuPercent\\\": 0,\", \" \\\"IOMaximumIOps\\\": 0,\", \" \\\"IOMaximumBandwidth\\\": 0\", \" },\", \" \\\"GraphDriver\\\": {\", \" \\\"Name\\\": \\\"overlay2\\\",\", \" \\\"Data\\\": {\", \" \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/f75ba8cf619ccb639511052485b7795aa6ccc2ee3261bfe40f3db250e2bc173a-init/diff:/var/lib/docker/overlay2/647ac35ac16d9dfe175f07dd44786615796abec3ec2955371cd57b2bc31e071d/diff:/var/lib/docker/overlay2/72c93a89fa782b05dc2585f4d22d029cb845c66e38924914071ba886d94bed8c/diff:/var/lib/docker/overlay2/1c6053643a9c6bc0506bbea8ee537d1f921ebdc802eafc5cf82c4566e0c5bbd4/diff\\\",\", \" \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/f75ba8cf619ccb639511052485b7795aa6ccc2ee3261bfe40f3db250e2bc173a/merged\\\",\", \" \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/f75ba8cf619ccb639511052485b7795aa6ccc2ee3261bfe40f3db250e2bc173a/diff\\\",\", \" \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/f75ba8cf619ccb639511052485b7795aa6ccc2ee3261bfe40f3db250e2bc173a/work\\\"\", \" }\", \" },\", \" \\\"Mounts\\\": [\", \" {\", \" \\\"Type\\\": \\\"bind\\\",\", \" \\\"Source\\\": \\\"/var/lib/ceph\\\",\", \" \\\"Destination\\\": \\\"/var/lib/ceph\\\",\", \" \\\"Mode\\\": \\\"z\\\",\", \" \\\"RW\\\": true,\", \" \\\"Propagation\\\": \\\"rprivate\\\"\", \" },\", \" {\", \" \\\"Type\\\": \\\"bind\\\",\", \" 
\\\"Source\\\": \\\"/etc/ceph\\\",\", \" \\\"Destination\\\": \\\"/etc/ceph\\\",\", \" \\\"Mode\\\": \\\"z\\\",\", \" \\\"RW\\\": true,\", \" \\\"Propagation\\\": \\\"rprivate\\\"\", \" },\", \" {\", \" \\\"Type\\\": \\\"bind\\\",\", \" \\\"Source\\\": \\\"/var/run/ceph\\\",\", \" \\\"Destination\\\": \\\"/var/run/ceph\\\",\", \" \\\"Mode\\\": \\\"z\\\",\", \" \\\"RW\\\": true,\", \" \\\"Propagation\\\": \\\"rprivate\\\"\", \" },\", \" {\", \" \\\"Type\\\": \\\"bind\\\",\", \" \\\"Source\\\": \\\"/etc/localtime\\\",\", \" \\\"Destination\\\": \\\"/etc/localtime\\\",\", \" \\\"Mode\\\": \\\"ro\\\",\", \" \\\"RW\\\": false,\", \" \\\"Propagation\\\": \\\"rprivate\\\"\", \" }\", \" ],\", \" \\\"Config\\\": {\", \" \\\"Hostname\\\": \\\"controller-0\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": true,\", \" \\\"AttachStderr\\\": true,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"IP_VERSION=4\\\",\", \" \\\"MON_IP=172.17.3.16\\\",\", \" \\\"CLUSTER=ceph\\\",\", \" \\\"FSID=8fedf068-bd95-11e8-ba69-5254006eda59\\\",\", \" \\\"CEPH_PUBLIC_NETWORK=172.17.3.0/24\\\",\", \" \\\"CEPH_DAEMON=MON\\\",\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": null,\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"192.168.24.1:8787/rhceph:3-12\\\",\", \" \\\"Volumes\\\": null,\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": null,\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"stable-3.0\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\", \" \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.expose-services\\\": \\\"\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"12\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d 
--net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\", \" \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. Use it as a base to build your own images.\\\",\", \" \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"NetworkSettings\\\": {\", \" \\\"Bridge\\\": \\\"\\\",\", \" \\\"SandboxID\\\": \\\"b6f9d3e5ebf8f9e507a2edf513fec904d52c3ce4a9dc538930cf89c905ec467c\\\",\", \" \\\"HairpinMode\\\": false,\", \" \\\"LinkLocalIPv6Address\\\": \\\"\\\",\", \" \\\"LinkLocalIPv6PrefixLen\\\": 0,\", \" \\\"Ports\\\": {},\", \" \\\"SandboxKey\\\": \\\"/var/run/docker/netns/default\\\",\", \" \\\"SecondaryIPAddresses\\\": null,\", \" \\\"SecondaryIPv6Addresses\\\": null,\", \" \\\"EndpointID\\\": \\\"\\\",\", \" \\\"Gateway\\\": \\\"\\\",\", \" \\\"GlobalIPv6Address\\\": \\\"\\\",\", \" \\\"GlobalIPv6PrefixLen\\\": 0,\", \" \\\"IPAddress\\\": \\\"\\\",\", \" \\\"IPPrefixLen\\\": 0,\", \" \\\"IPv6Gateway\\\": \\\"\\\",\", \" \\\"MacAddress\\\": \\\"\\\",\", \" \\\"Networks\\\": {\", \" \\\"host\\\": {\", \" \\\"IPAMConfig\\\": null,\", \" \\\"Links\\\": null,\", \" \\\"Aliases\\\": null,\", \" \\\"NetworkID\\\": \\\"cf6351ff3e7cd6c1bb62a77ff0c2ac7bebe8ea9f0d1c52e85ce35ff40982ff70\\\",\", \" \\\"EndpointID\\\": \\\"218ed1d48845d795fcd4591f871de6fec3cd8468a7e5188e3eb509498b3a2407\\\",\", \" \\\"Gateway\\\": \\\"\\\",\", \" \\\"IPAddress\\\": \\\"\\\",\", \" \\\"IPPrefixLen\\\": 0,\", \" \\\"IPv6Gateway\\\": \\\"\\\",\", \" \\\"GlobalIPv6Address\\\": \\\"\\\",\", \" \\\"GlobalIPv6PrefixLen\\\": 0,\", \" \\\"MacAddress\\\": \\\"\\\"\", \" }\", \" }\", \" }\", \" }\", \"]\"]}\n\nTASK [ceph-docker-common : inspect ceph osd container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:12\nFriday 21 September 2018 08:27:46 -0400 (0:00:00.287) 0:01:28.243 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph mds container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:21\nFriday 21 September 2018 08:27:46 -0400 (0:00:00.050) 0:01:28.293 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph rgw container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:30\nFriday 21 September 2018 08:27:46 -0400 (0:00:00.051) 0:01:28.344 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph mgr container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:39\nFriday 21 September 2018 08:27:46 -0400 (0:00:00.048) 0:01:28.393 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was 
False\"}\n\nTASK [ceph-docker-common : inspect ceph rbd mirror container] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:48\nFriday 21 September 2018 08:27:46 -0400 (0:00:00.069) 0:01:28.463 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph nfs container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:57\nFriday 21 September 2018 08:27:46 -0400 (0:00:00.052) 0:01:28.515 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph mon container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:67\nFriday 21 September 2018 08:27:46 -0400 (0:00:00.049) 0:01:28.565 ****** \nok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"inspect\", \"sha256:fa3b551f095247f90f7ace123cc14519b26abd4c042eb0270ab2452d9636a41f\"], \"delta\": \"0:00:00.026086\", \"end\": \"2018-09-21 12:27:46.683263\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-09-21 12:27:46.657177\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"[\\n {\\n \\\"Id\\\": \\\"sha256:fa3b551f095247f90f7ace123cc14519b26abd4c042eb0270ab2452d9636a41f\\\",\\n \\\"RepoTags\\\": [\\n \\\"192.168.24.1:8787/rhceph:3-12\\\"\\n ],\\n \\\"RepoDigests\\\": [\\n \\\"192.168.24.1:8787/rhceph@sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\\\"\\n ],\\n \\\"Parent\\\": \\\"\\\",\\n \\\"Comment\\\": \\\"\\\",\\n \\\"Created\\\": \\\"2018-08-06T22:30:33.81313Z\\\",\\n \\\"Container\\\": \\\"\\\",\\n \\\"ContainerConfig\\\": {\\n \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": [\\n \\\"/bin/sh\\\",\\n \\\"-c\\\",\\n \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z5-b1e8f.repo'\\\"\\n ],\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"sha256:379a20daa6033d04119c4ca45fffe3e50f0cfd517d8712a222b53bea11ee4493\\\",\\n \\\"Volumes\\\": null,\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"stable-3.0\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\\n \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\\n \\\"com.redhat.component\\\": 
\\\"rhceph-rhel7-container\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.expose-services\\\": \\\"\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"12\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\\n \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. Use it as a base to build your own images.\\\",\\n \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"DockerVersion\\\": \\\"1.12.6\\\",\\n \\\"Author\\\": \\\"\\\",\\n \\\"Config\\\": {\\n \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": null,\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"9e41cab2948f6e02bef2d4df2d2d21f082f2e9f5b5aadcb04d70201596959834\\\",\\n \\\"Volumes\\\": null,\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"stable-3.0\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\\n \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e 
MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.expose-services\\\": \\\"\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"12\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\\n \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. Use it as a base to build your own images.\\\",\\n \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"Architecture\\\": \\\"amd64\\\",\\n \\\"Os\\\": \\\"linux\\\",\\n \\\"Size\\\": 592066185,\\n \\\"VirtualSize\\\": 592066185,\\n \\\"GraphDriver\\\": {\\n \\\"Name\\\": \\\"overlay2\\\",\\n \\\"Data\\\": {\\n \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/72c93a89fa782b05dc2585f4d22d029cb845c66e38924914071ba886d94bed8c/diff:/var/lib/docker/overlay2/1c6053643a9c6bc0506bbea8ee537d1f921ebdc802eafc5cf82c4566e0c5bbd4/diff\\\",\\n \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/647ac35ac16d9dfe175f07dd44786615796abec3ec2955371cd57b2bc31e071d/merged\\\",\\n \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/647ac35ac16d9dfe175f07dd44786615796abec3ec2955371cd57b2bc31e071d/diff\\\",\\n \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/647ac35ac16d9dfe175f07dd44786615796abec3ec2955371cd57b2bc31e071d/work\\\"\\n }\\n },\\n \\\"RootFS\\\": {\\n \\\"Type\\\": \\\"layers\\\",\\n \\\"Layers\\\": [\\n \\\"sha256:db195156f4cd9e83cf2a76f1319d5f839cf2552ea1d23c0317931786b1f594cf\\\",\\n \\\"sha256:6e8ca199394f13d2b08b652f8281d3c2f8ad22333737e8ad2ff554f881bcd8a1\\\",\\n \\\"sha256:984d7131485eaebe7b45bb3052fae34a956316f94faf95681b8480a904179cfa\\\"\\n ]\\n }\\n }\\n]\", \"stdout_lines\": [\"[\", \" {\", \" \\\"Id\\\": \\\"sha256:fa3b551f095247f90f7ace123cc14519b26abd4c042eb0270ab2452d9636a41f\\\",\", \" \\\"RepoTags\\\": [\", \" \\\"192.168.24.1:8787/rhceph:3-12\\\"\", \" ],\", \" \\\"RepoDigests\\\": [\", \" \\\"192.168.24.1:8787/rhceph@sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\\\"\", \" ],\", \" \\\"Parent\\\": \\\"\\\",\", \" \\\"Comment\\\": \\\"\\\",\", \" \\\"Created\\\": \\\"2018-08-06T22:30:33.81313Z\\\",\", \" \\\"Container\\\": \\\"\\\",\", \" \\\"ContainerConfig\\\": {\", \" \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": 
{},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": [\", \" \\\"/bin/sh\\\",\", \" \\\"-c\\\",\", \" \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z5-b1e8f.repo'\\\"\", \" ],\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"sha256:379a20daa6033d04119c4ca45fffe3e50f0cfd517d8712a222b53bea11ee4493\\\",\", \" \\\"Volumes\\\": null,\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"stable-3.0\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\", \" \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.expose-services\\\": \\\"\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"12\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\", \" \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\", \" \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"DockerVersion\\\": \\\"1.12.6\\\",\", \" \\\"Author\\\": \\\"\\\",\", \" \\\"Config\\\": {\", \" \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": null,\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"9e41cab2948f6e02bef2d4df2d2d21f082f2e9f5b5aadcb04d70201596959834\\\",\", \" \\\"Volumes\\\": null,\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"stable-3.0\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\", \" \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.expose-services\\\": \\\"\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"12\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\", \" \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\", \" \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"Architecture\\\": \\\"amd64\\\",\", \" \\\"Os\\\": \\\"linux\\\",\", \" \\\"Size\\\": 592066185,\", \" \\\"VirtualSize\\\": 592066185,\", \" \\\"GraphDriver\\\": {\", \" \\\"Name\\\": \\\"overlay2\\\",\", \" \\\"Data\\\": {\", \" \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/72c93a89fa782b05dc2585f4d22d029cb845c66e38924914071ba886d94bed8c/diff:/var/lib/docker/overlay2/1c6053643a9c6bc0506bbea8ee537d1f921ebdc802eafc5cf82c4566e0c5bbd4/diff\\\",\", \" \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/647ac35ac16d9dfe175f07dd44786615796abec3ec2955371cd57b2bc31e071d/merged\\\",\", \" \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/647ac35ac16d9dfe175f07dd44786615796abec3ec2955371cd57b2bc31e071d/diff\\\",\", \" \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/647ac35ac16d9dfe175f07dd44786615796abec3ec2955371cd57b2bc31e071d/work\\\"\", \" }\", \" },\", \" \\\"RootFS\\\": {\", \" \\\"Type\\\": \\\"layers\\\",\", \" \\\"Layers\\\": [\", \" \\\"sha256:db195156f4cd9e83cf2a76f1319d5f839cf2552ea1d23c0317931786b1f594cf\\\",\", \" \\\"sha256:6e8ca199394f13d2b08b652f8281d3c2f8ad22333737e8ad2ff554f881bcd8a1\\\",\", \" \\\"sha256:984d7131485eaebe7b45bb3052fae34a956316f94faf95681b8480a904179cfa\\\"\", \" ]\", \" }\", \" }\", \"]\"]}

TASK [ceph-docker-common : inspecting ceph osd container image before pulling] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:76
Friday 21 September 2018 08:27:46 -0400 (0:00:00.291) 0:01:28.857 ****** 
skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}

TASK [ceph-docker-common : inspecting ceph rgw container image before pulling] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:85
Friday 21 September 2018 08:27:46 -0400 (0:00:00.048) 0:01:28.905 ****** 
skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}

TASK [ceph-docker-common : inspecting ceph mds container image before pulling] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:94
Friday 21 September 2018 08:27:46 -0400 (0:00:00.057) 0:01:28.962 ****** 
skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}

TASK [ceph-docker-common : inspecting ceph mgr container image before pulling] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:103
Friday 21 September 2018 08:27:46 -0400 (0:00:00.053) 0:01:29.016 ****** 
skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}

TASK [ceph-docker-common : inspecting ceph rbd mirror container image before pulling] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:112
Friday 21 September 2018 08:27:46 -0400 (0:00:00.059) 0:01:29.076 ****** 
skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}

TASK [ceph-docker-common : inspecting ceph nfs container image before pulling] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:121
Friday 21 September 2018 08:27:47 -0400 (0:00:00.057) 0:01:29.133 ****** 
skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}

TASK [ceph-docker-common : set_fact ceph_mon_image_repodigest_before_pulling] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:130
Friday 21 September 2018 08:27:47 -0400 (0:00:00.053) 0:01:29.186 ****** 
ok: [controller-0] => {\"ansible_facts\": {\"ceph_mon_image_repodigest_before_pulling\": \"sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\"}, \"changed\": false}

TASK [ceph-docker-common : set_fact ceph_osd_image_repodigest_before_pulling] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:137
Friday 21 September 2018 08:27:47 -0400 (0:00:00.097) 0:01:29.284 ****** 
skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}

TASK [ceph-docker-common : set_fact ceph_mds_image_repodigest_before_pulling] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:144
Friday 21 September 2018 08:27:47 -0400 (0:00:00.052) 0:01:29.337 ****** 
skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}

TASK [ceph-docker-common : set_fact ceph_rgw_image_repodigest_before_pulling] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:151
Friday 21 September 2018 08:27:47 -0400 (0:00:00.057) 0:01:29.394 ****** 
skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}

TASK [ceph-docker-common : set_fact ceph_mgr_image_repodigest_before_pulling] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:158
Friday 21 September 2018 08:27:47 -0400 (0:00:00.056) 0:01:29.451 ****** 
skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}

TASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_repodigest_before_pulling] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:165
Friday 21 September 2018 08:27:47 -0400 (0:00:00.062) 0:01:29.513 ****** 
skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}

TASK [ceph-docker-common : set_fact ceph_nfs_image_repodigest_before_pulling] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:172
Friday 21 September 2018 08:27:47 -0400 (0:00:00.056) 0:01:29.569 ****** 
skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}

TASK [ceph-docker-common : pulling 192.168.24.1:8787/rhceph:3-12 image] ********
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:179
Friday 21 September 2018 08:27:47 -0400 (0:00:00.060) 0:01:29.630 ****** 
ok: [controller-0] => {\"attempts\": 1, \"changed\": false, \"cmd\": [\"timeout\", \"300s\", \"docker\", \"pull\", \"192.168.24.1:8787/rhceph:3-12\"], \"delta\": \"0:00:00.033331\", \"end\": \"2018-09-21 12:27:47.757630\", \"rc\": 0, \"start\": \"2018-09-21 12:27:47.724299\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Trying to pull repository 192.168.24.1:8787/rhceph ... \\n3-12: Pulling from 192.168.24.1:8787/rhceph\\nDigest: sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\\nStatus: Image is up to date for 192.168.24.1:8787/rhceph:3-12\", \"stdout_lines\": [\"Trying to pull repository 192.168.24.1:8787/rhceph ... 
\", \"3-12: Pulling from 192.168.24.1:8787/rhceph\", \"Digest: sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\", \"Status: Image is up to date for 192.168.24.1:8787/rhceph:3-12\"]}\n\nTASK [ceph-docker-common : inspecting 192.168.24.1:8787/rhceph:3-12 image after pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:189\nFriday 21 September 2018 08:27:47 -0400 (0:00:00.295) 0:01:29.925 ****** \nchanged: [controller-0] => {\"changed\": true, \"cmd\": [\"docker\", \"inspect\", \"192.168.24.1:8787/rhceph:3-12\"], \"delta\": \"0:00:00.026204\", \"end\": \"2018-09-21 12:27:48.027683\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-09-21 12:27:48.001479\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"[\\n {\\n \\\"Id\\\": \\\"sha256:fa3b551f095247f90f7ace123cc14519b26abd4c042eb0270ab2452d9636a41f\\\",\\n \\\"RepoTags\\\": [\\n \\\"192.168.24.1:8787/rhceph:3-12\\\"\\n ],\\n \\\"RepoDigests\\\": [\\n \\\"192.168.24.1:8787/rhceph@sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\\\"\\n ],\\n \\\"Parent\\\": \\\"\\\",\\n \\\"Comment\\\": \\\"\\\",\\n \\\"Created\\\": \\\"2018-08-06T22:30:33.81313Z\\\",\\n \\\"Container\\\": \\\"\\\",\\n \\\"ContainerConfig\\\": {\\n \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": [\\n \\\"/bin/sh\\\",\\n \\\"-c\\\",\\n \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z5-b1e8f.repo'\\\"\\n ],\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"sha256:379a20daa6033d04119c4ca45fffe3e50f0cfd517d8712a222b53bea11ee4493\\\",\\n \\\"Volumes\\\": null,\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"stable-3.0\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\\n \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": 
\\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.expose-services\\\": \\\"\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"12\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\\n \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. Use it as a base to build your own images.\\\",\\n \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"DockerVersion\\\": \\\"1.12.6\\\",\\n \\\"Author\\\": \\\"\\\",\\n \\\"Config\\\": {\\n \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": null,\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"9e41cab2948f6e02bef2d4df2d2d21f082f2e9f5b5aadcb04d70201596959834\\\",\\n \\\"Volumes\\\": null,\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"stable-3.0\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\\n \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.expose-services\\\": \\\"\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"name\\\": 
\\\"rhceph\\\",\\n \\\"release\\\": \\\"12\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\\n \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. Use it as a base to build your own images.\\\",\\n \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"Architecture\\\": \\\"amd64\\\",\\n \\\"Os\\\": \\\"linux\\\",\\n \\\"Size\\\": 592066185,\\n \\\"VirtualSize\\\": 592066185,\\n \\\"GraphDriver\\\": {\\n \\\"Name\\\": \\\"overlay2\\\",\\n \\\"Data\\\": {\\n \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/72c93a89fa782b05dc2585f4d22d029cb845c66e38924914071ba886d94bed8c/diff:/var/lib/docker/overlay2/1c6053643a9c6bc0506bbea8ee537d1f921ebdc802eafc5cf82c4566e0c5bbd4/diff\\\",\\n \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/647ac35ac16d9dfe175f07dd44786615796abec3ec2955371cd57b2bc31e071d/merged\\\",\\n \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/647ac35ac16d9dfe175f07dd44786615796abec3ec2955371cd57b2bc31e071d/diff\\\",\\n \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/647ac35ac16d9dfe175f07dd44786615796abec3ec2955371cd57b2bc31e071d/work\\\"\\n }\\n },\\n \\\"RootFS\\\": {\\n \\\"Type\\\": \\\"layers\\\",\\n \\\"Layers\\\": [\\n \\\"sha256:db195156f4cd9e83cf2a76f1319d5f839cf2552ea1d23c0317931786b1f594cf\\\",\\n \\\"sha256:6e8ca199394f13d2b08b652f8281d3c2f8ad22333737e8ad2ff554f881bcd8a1\\\",\\n \\\"sha256:984d7131485eaebe7b45bb3052fae34a956316f94faf95681b8480a904179cfa\\\"\\n ]\\n }\\n }\\n]\", \"stdout_lines\": [\"[\", \" {\", \" \\\"Id\\\": \\\"sha256:fa3b551f095247f90f7ace123cc14519b26abd4c042eb0270ab2452d9636a41f\\\",\", \" \\\"RepoTags\\\": [\", \" \\\"192.168.24.1:8787/rhceph:3-12\\\"\", \" ],\", \" \\\"RepoDigests\\\": [\", \" \\\"192.168.24.1:8787/rhceph@sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\\\"\", \" ],\", \" \\\"Parent\\\": \\\"\\\",\", \" \\\"Comment\\\": \\\"\\\",\", \" \\\"Created\\\": \\\"2018-08-06T22:30:33.81313Z\\\",\", \" \\\"Container\\\": \\\"\\\",\", \" \\\"ContainerConfig\\\": {\", \" \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": [\", \" \\\"/bin/sh\\\",\", \" \\\"-c\\\",\", \" \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z5-b1e8f.repo'\\\"\", \" ],\", \" \\\"ArgsEscaped\\\": 
true,\", \" \\\"Image\\\": \\\"sha256:379a20daa6033d04119c4ca45fffe3e50f0cfd517d8712a222b53bea11ee4493\\\",\", \" \\\"Volumes\\\": null,\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"stable-3.0\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\", \" \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.expose-services\\\": \\\"\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"12\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\", \" \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\", \" \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"DockerVersion\\\": \\\"1.12.6\\\",\", \" \\\"Author\\\": \\\"\\\",\", \" \\\"Config\\\": {\", \" \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": null,\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"9e41cab2948f6e02bef2d4df2d2d21f082f2e9f5b5aadcb04d70201596959834\\\",\", \" \\\"Volumes\\\": null,\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"stable-3.0\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\", \" \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.expose-services\\\": \\\"\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"12\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\", \" \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\", \" \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"Architecture\\\": \\\"amd64\\\",\", \" \\\"Os\\\": \\\"linux\\\",\", \" \\\"Size\\\": 592066185,\", \" \\\"VirtualSize\\\": 592066185,\", \" \\\"GraphDriver\\\": {\", \" \\\"Name\\\": \\\"overlay2\\\",\", \" \\\"Data\\\": {\", \" \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/72c93a89fa782b05dc2585f4d22d029cb845c66e38924914071ba886d94bed8c/diff:/var/lib/docker/overlay2/1c6053643a9c6bc0506bbea8ee537d1f921ebdc802eafc5cf82c4566e0c5bbd4/diff\\\",\", \" \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/647ac35ac16d9dfe175f07dd44786615796abec3ec2955371cd57b2bc31e071d/merged\\\",\", \" \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/647ac35ac16d9dfe175f07dd44786615796abec3ec2955371cd57b2bc31e071d/diff\\\",\", \" \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/647ac35ac16d9dfe175f07dd44786615796abec3ec2955371cd57b2bc31e071d/work\\\"\", \" }\", \" },\", \" \\\"RootFS\\\": {\", \" \\\"Type\\\": \\\"layers\\\",\", \" \\\"Layers\\\": [\", \" \\\"sha256:db195156f4cd9e83cf2a76f1319d5f839cf2552ea1d23c0317931786b1f594cf\\\",\", \" \\\"sha256:6e8ca199394f13d2b08b652f8281d3c2f8ad22333737e8ad2ff554f881bcd8a1\\\",\", \" \\\"sha256:984d7131485eaebe7b45bb3052fae34a956316f94faf95681b8480a904179cfa\\\"\", \" ]\", \" }\", \" }\", \"]\"]}\n\nTASK [ceph-docker-common : set_fact image_repodigest_after_pulling] ************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:194\nFriday 21 September 2018 08:27:48 -0400 (0:00:00.277) 0:01:30.203 ****** \nok: [controller-0] => {\"ansible_facts\": {\"image_repodigest_after_pulling\": \"sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact ceph_mon_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:200\nFriday 21 September 2018 08:27:48 -0400 (0:00:00.084) 0:01:30.288 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_osd_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:211\nFriday 21 September 2018 08:27:48 -0400 (0:00:00.061) 0:01:30.349 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mds_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:222\nFriday 21 September 2018 08:27:48 -0400 (0:00:00.050) 0:01:30.400 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rgw_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:233\nFriday 21 September 2018 08:27:48 -0400 (0:00:00.049) 0:01:30.449 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mgr_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:244\nFriday 21 September 2018 08:27:48 -0400 (0:00:00.053) 0:01:30.503 ****** 
\nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_updated] *************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:255\nFriday 21 September 2018 08:27:48 -0400 (0:00:00.057) 0:01:30.561 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_nfs_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:266\nFriday 21 September 2018 08:27:48 -0400 (0:00:00.052) 0:01:30.613 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : export local ceph dev image] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:277\nFriday 21 September 2018 08:27:48 -0400 (0:00:00.069) 0:01:30.683 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : copy ceph dev image file] ***************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:285\nFriday 21 September 2018 08:27:48 -0400 (0:00:00.053) 0:01:30.736 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : load ceph dev image] ********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:292\nFriday 21 September 2018 08:27:48 -0400 (0:00:00.050) 0:01:30.787 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : remove tmp ceph dev image file] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:297\nFriday 21 September 2018 08:27:48 -0400 (0:00:00.050) 0:01:30.837 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : get ceph version] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:84\nFriday 21 September 2018 08:27:48 -0400 (0:00:00.050) 0:01:30.888 ****** \nok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"run\", \"--rm\", \"--entrypoint\", \"/usr/bin/ceph\", \"192.168.24.1:8787/rhceph:3-12\", \"--version\"], \"delta\": \"0:00:00.458297\", \"end\": \"2018-09-21 12:27:49.529405\", \"rc\": 0, \"start\": \"2018-09-21 12:27:49.071108\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"ceph version 12.2.4-42.el7cp (f73642baacccbf2a3c254d1fb5f0317b933b28cf) luminous (stable)\", \"stdout_lines\": [\"ceph version 12.2.4-42.el7cp (f73642baacccbf2a3c254d1fb5f0317b933b28cf) luminous (stable)\"]}\n\nTASK [ceph-docker-common : set_fact ceph_version ceph_version.stdout.split] ****\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:90\nFriday 21 September 2018 08:27:49 -0400 (0:00:00.798) 0:01:31.686 ****** \nok: [controller-0] => {\"ansible_facts\": {\"ceph_version\": \"12.2.4-42.el7cp\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact ceph_release jewel] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:2\nFriday 21 September 2018 08:27:49 -0400 (0:00:00.191) 
0:01:31.878 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_release kraken] ***********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:8\nFriday 21 September 2018 08:27:49 -0400 (0:00:00.059) 0:01:31.938 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_release luminous] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:14\nFriday 21 September 2018 08:27:49 -0400 (0:00:00.052) 0:01:31.991 ****** \nok: [controller-0] => {\"ansible_facts\": {\"ceph_release\": \"luminous\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact ceph_release mimic] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:20\nFriday 21 September 2018 08:27:50 -0400 (0:00:00.257) 0:01:32.248 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_release nautilus] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:26\nFriday 21 September 2018 08:27:50 -0400 (0:00:00.050) 0:01:32.299 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : create bootstrap directories] ***********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml:2\nFriday 21 September 2018 08:27:50 -0400 (0:00:00.047) 0:01:32.346 ****** \nchanged: [controller-0] => (item=/etc/ceph) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 117, \"state\": \"directory\", \"uid\": 64045}\nchanged: [controller-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 64045}\nchanged: [controller-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 64045}\nchanged: [controller-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 64045}\nchanged: [controller-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 64045}\n\nTASK [ceph-config : create ceph conf directory] 
********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:4\nFriday 21 September 2018 08:27:51 -0400 (0:00:00.921) 0:01:33.268 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : generate ceph configuration file: ceph.conf] ***************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:12\nFriday 21 September 2018 08:27:51 -0400 (0:00:00.062) 0:01:33.331 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : create a local fetch directory if it does not exist] *******\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:38\nFriday 21 September 2018 08:27:51 -0400 (0:00:00.057) 0:01:33.388 ****** \nok: [controller-0 -> localhost] => {\"changed\": false, \"gid\": 42430, \"group\": \"mistral\", \"mode\": \"0755\", \"owner\": \"mistral\", \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir\", \"size\": 80, \"state\": \"directory\", \"uid\": 42430}\n\nTASK [ceph-config : generate cluster uuid] *************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:54\nFriday 21 September 2018 08:27:51 -0400 (0:00:00.213) 0:01:33.602 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : read cluster uuid if it already exists] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:64\nFriday 21 September 2018 08:27:51 -0400 (0:00:00.059) 0:01:33.662 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : ensure /etc/ceph exists] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:76\nFriday 21 September 2018 08:27:51 -0400 (0:00:00.056) 0:01:33.718 ****** \nchanged: [controller-0] => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 117, \"state\": \"directory\", \"uid\": 167}\n\nTASK [ceph-config : generate ceph.conf configuration file] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:84\nFriday 21 September 2018 08:27:51 -0400 (0:00:00.251) 0:01:33.970 ****** \nok: [controller-0] => {\"changed\": false, \"checksum\": \"57e5c5d755a630f2e4e9c6766a186478cc210a6a\", \"dest\": \"/etc/ceph/ceph.conf\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"3d1c4a58fc488cca7c5fd19c6454272e\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 1103, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1537532871.92-238172610611786/source\", \"state\": \"file\", \"uid\": 0}\n\nTASK [ceph-config : set fsid fact when generate_fsid = true] *******************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:102\nFriday 21 September 2018 08:27:52 -0400 (0:00:00.593) 0:01:34.564 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mgr : set_fact docker_exec_cmd] *************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/main.yml:2\nFriday 21 September 2018 08:27:52 -0400 (0:00:00.056) 0:01:34.620 ****** \nok: [controller-0] => {\"ansible_facts\": 
{\"docker_exec_cmd_mgr\": \"docker exec ceph-mon-controller-0\"}, \"changed\": false}\n\nTASK [ceph-mgr : create mgr directory] *****************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/common.yml:2\nFriday 21 September 2018 08:27:52 -0400 (0:00:00.129) 0:01:34.750 ****** \nok: [controller-0] => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mgr/ceph-controller-0\", \"secontext\": \"system_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\n\nTASK [ceph-mgr : copy ceph keyring(s) if needed] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/common.yml:10\nFriday 21 September 2018 08:27:52 -0400 (0:00:00.254) 0:01:35.004 ****** \nchanged: [controller-0] => (item={u'dest': u'/var/lib/ceph/mgr/ceph-controller-0/keyring', u'name': u'/etc/ceph/ceph.mgr.controller-0.keyring', u'copy_key': True}) => {\"changed\": true, \"checksum\": \"f02fcb991c5a53a3bf474c15b6a514c8356b9c69\", \"dest\": \"/var/lib/ceph/mgr/ceph-controller-0/keyring\", \"gid\": 167, \"group\": \"167\", \"item\": {\"copy_key\": true, \"dest\": \"/var/lib/ceph/mgr/ceph-controller-0/keyring\", \"name\": \"/etc/ceph/ceph.mgr.controller-0.keyring\"}, \"md5sum\": \"d7ba913d6ab2c770a0269d55efc01b88\", \"mode\": \"0600\", \"owner\": \"167\", \"secontext\": \"system_u:object_r:var_lib_t:s0\", \"size\": 67, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1537532872.95-74123356958813/source\", \"state\": \"file\", \"uid\": 167}\nskipping: [controller-0] => (item={u'dest': u'/etc/ceph/ceph.client.admin.keyring', u'name': u'/etc/ceph/ceph.client.admin.keyring', u'copy_key': False}) => {\"changed\": false, \"item\": {\"copy_key\": false, \"dest\": \"/etc/ceph/ceph.client.admin.keyring\", \"name\": \"/etc/ceph/ceph.client.admin.keyring\"}, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mgr : set mgr key permissions] **************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/common.yml:24\nFriday 21 September 2018 08:27:53 -0400 (0:00:00.560) 0:01:35.565 ****** \nok: [controller-0] => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"mode\": \"0600\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mgr/ceph-controller-0/keyring\", \"secontext\": \"system_u:object_r:var_lib_t:s0\", \"size\": 67, \"state\": \"file\", \"uid\": 167}\n\nTASK [ceph-mgr : install ceph-mgr package on RedHat or SUSE] *******************\ntask path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml:2\nFriday 21 September 2018 08:27:53 -0400 (0:00:00.256) 0:01:35.822 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mgr : install ceph mgr for debian] **********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml:9\nFriday 21 September 2018 08:27:53 -0400 (0:00:00.061) 0:01:35.884 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mgr : ensure systemd service override directory exists] *************\ntask path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml:17\nFriday 21 September 2018 08:27:53 -0400 (0:00:00.058) 0:01:35.942 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mgr : add ceph-mgr systemd service overrides] ***********************\ntask 
path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml:25\nFriday 21 September 2018 08:27:53 -0400 (0:00:00.055) 0:01:35.998 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mgr : start and add that the mgr service to the init sequence] ******\ntask path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml:35\nFriday 21 September 2018 08:27:53 -0400 (0:00:00.052) 0:01:36.051 ****** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mgr : generate systemd unit file] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/docker/start_docker_mgr.yml:2\nFriday 21 September 2018 08:27:53 -0400 (0:00:00.057) 0:01:36.108 ****** \nNOTIFIED HANDLER ceph-defaults : set _mgr_handler_called before restart for controller-0\nNOTIFIED HANDLER ceph-defaults : copy mgr restart script for controller-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - non container for controller-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - container for controller-0\nNOTIFIED HANDLER ceph-defaults : set _mgr_handler_called after restart for controller-0\nchanged: [controller-0] => {\"changed\": true, \"checksum\": \"168504b73edc17939666d0ef559eaab44f0382c8\", \"dest\": \"/etc/systemd/system/ceph-mgr@.service\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"35d5093713655bbf808450ce1bb2b512\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:systemd_unit_file_t:s0\", \"size\": 734, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1537532874.06-126295042630263/source\", \"state\": \"file\", \"uid\": 0}\n\nTASK [ceph-mgr : systemd start mgr container] **********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/docker/start_docker_mgr.yml:13\nFriday 21 September 2018 08:27:54 -0400 (0:00:00.873) 0:01:36.982 ****** \nchanged: [controller-0] => {\"changed\": true, \"enabled\": true, \"name\": \"ceph-mgr@controller-0\", \"state\": \"started\", \"status\": {\"ActiveEnterTimestampMonotonic\": \"0\", \"ActiveExitTimestampMonotonic\": \"0\", \"ActiveState\": \"inactive\", \"After\": \"docker.service systemd-journald.socket basic.target system-ceph\\\\x5cx2dmgr.slice\", \"AllowIsolate\": \"no\", \"AmbientCapabilities\": \"0\", \"AssertResult\": \"no\", \"AssertTimestampMonotonic\": \"0\", \"Before\": \"shutdown.target\", \"BlockIOAccounting\": \"no\", \"BlockIOWeight\": \"18446744073709551615\", \"CPUAccounting\": \"no\", \"CPUQuotaPerSecUSec\": \"infinity\", \"CPUSchedulingPolicy\": \"0\", \"CPUSchedulingPriority\": \"0\", \"CPUSchedulingResetOnFork\": \"no\", \"CPUShares\": \"18446744073709551615\", \"CanIsolate\": \"no\", \"CanReload\": \"no\", \"CanStart\": \"yes\", \"CanStop\": \"yes\", \"CapabilityBoundingSet\": \"18446744073709551615\", \"ConditionResult\": \"no\", \"ConditionTimestampMonotonic\": \"0\", \"Conflicts\": \"shutdown.target\", \"ControlPID\": \"0\", \"DefaultDependencies\": \"yes\", \"Delegate\": \"no\", \"Description\": \"Ceph Manager\", \"DevicePolicy\": \"auto\", \"EnvironmentFile\": \"/etc/environment (ignore_errors=yes)\", \"ExecMainCode\": \"0\", \"ExecMainExitTimestampMonotonic\": \"0\", \"ExecMainPID\": \"0\", \"ExecMainStartTimestampMonotonic\": \"0\", \"ExecMainStatus\": \"0\", \"ExecStart\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker run --rm --net=host --memory=1g --cpu-quota=100000 -v 
/var/lib/ceph:/var/lib/ceph:z -v /etc/ceph:/etc/ceph:z -v /var/run/ceph:/var/run/ceph:z -v /etc/localtime:/etc/localtime:ro -e CLUSTER=ceph -e CEPH_DAEMON=MGR -e MGR_DASHBOARD=0 --name=ceph-mgr-controller-0 192.168.24.1:8787/rhceph:3-12 ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStartPre\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker rm ceph-mgr-controller-0 ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStopPost\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker stop ceph-mgr-controller-0 ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"FailureAction\": \"none\", \"FileDescriptorStoreMax\": \"0\", \"FragmentPath\": \"/etc/systemd/system/ceph-mgr@.service\", \"GuessMainPID\": \"yes\", \"IOScheduling\": \"0\", \"Id\": \"ceph-mgr@controller-0.service\", \"IgnoreOnIsolate\": \"no\", \"IgnoreOnSnapshot\": \"no\", \"IgnoreSIGPIPE\": \"yes\", \"InactiveEnterTimestampMonotonic\": \"0\", \"InactiveExitTimestampMonotonic\": \"0\", \"JobTimeoutAction\": \"none\", \"JobTimeoutUSec\": \"0\", \"KillMode\": \"control-group\", \"KillSignal\": \"15\", \"LimitAS\": \"18446744073709551615\", \"LimitCORE\": \"18446744073709551615\", \"LimitCPU\": \"18446744073709551615\", \"LimitDATA\": \"18446744073709551615\", \"LimitFSIZE\": \"18446744073709551615\", \"LimitLOCKS\": \"18446744073709551615\", \"LimitMEMLOCK\": \"65536\", \"LimitMSGQUEUE\": \"819200\", \"LimitNICE\": \"0\", \"LimitNOFILE\": \"4096\", \"LimitNPROC\": \"127798\", \"LimitRSS\": \"18446744073709551615\", \"LimitRTPRIO\": \"0\", \"LimitRTTIME\": \"18446744073709551615\", \"LimitSIGPENDING\": \"127798\", \"LimitSTACK\": \"18446744073709551615\", \"LoadState\": \"loaded\", \"MainPID\": \"0\", \"MemoryAccounting\": \"no\", \"MemoryCurrent\": \"18446744073709551615\", \"MemoryLimit\": \"18446744073709551615\", \"MountFlags\": \"0\", \"Names\": \"ceph-mgr@controller-0.service\", \"NeedDaemonReload\": \"no\", \"Nice\": \"0\", \"NoNewPrivileges\": \"no\", \"NonBlocking\": \"no\", \"NotifyAccess\": \"none\", \"OOMScoreAdjust\": \"0\", \"OnFailureJobMode\": \"replace\", \"PermissionsStartOnly\": \"no\", \"PrivateDevices\": \"no\", \"PrivateNetwork\": \"no\", \"PrivateTmp\": \"no\", \"ProtectHome\": \"no\", \"ProtectSystem\": \"no\", \"RefuseManualStart\": \"no\", \"RefuseManualStop\": \"no\", \"RemainAfterExit\": \"no\", \"Requires\": \"basic.target\", \"Restart\": \"always\", \"RestartUSec\": \"10s\", \"Result\": \"success\", \"RootDirectoryStartOnly\": \"no\", \"RuntimeDirectoryMode\": \"0755\", \"SameProcessGroup\": \"no\", \"SecureBits\": \"0\", \"SendSIGHUP\": \"no\", \"SendSIGKILL\": \"yes\", \"Slice\": \"system-ceph\\\\x5cx2dmgr.slice\", \"StandardError\": \"inherit\", \"StandardInput\": \"null\", \"StandardOutput\": \"journal\", \"StartLimitAction\": \"none\", \"StartLimitBurst\": \"5\", \"StartLimitInterval\": \"10000000\", \"StartupBlockIOWeight\": \"18446744073709551615\", \"StartupCPUShares\": \"18446744073709551615\", \"StatusErrno\": \"0\", \"StopWhenUnneeded\": \"no\", \"SubState\": \"dead\", \"SyslogLevelPrefix\": \"yes\", \"SyslogPriority\": \"30\", \"SystemCallErrorNumber\": \"0\", \"TTYReset\": \"no\", \"TTYVHangup\": \"no\", \"TTYVTDisallocate\": \"no\", \"TasksAccounting\": \"no\", \"TasksCurrent\": \"18446744073709551615\", \"TasksMax\": \"18446744073709551615\", \"TimeoutStartUSec\": \"2min\", \"TimeoutStopUSec\": \"15s\", \"TimerSlackNSec\": \"50000\", 
\"Transient\": \"no\", \"Type\": \"simple\", \"UMask\": \"0022\", \"UnitFilePreset\": \"disabled\", \"UnitFileState\": \"disabled\", \"Wants\": \"system-ceph\\\\x5cx2dmgr.slice\", \"WatchdogTimestampMonotonic\": \"0\", \"WatchdogUSec\": \"0\"}}\n\nTASK [ceph-mgr : get enabled modules from ceph-mgr] ****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/main.yml:19\nFriday 21 September 2018 08:27:55 -0400 (0:00:00.539) 0:01:37.521 ****** \nchanged: [controller-0 -> 192.168.24.18] => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"--format\", \"json\", \"mgr\", \"module\", \"ls\"], \"delta\": \"0:00:00.401929\", \"end\": \"2018-09-21 12:27:56.023830\", \"rc\": 0, \"start\": \"2018-09-21 12:27:55.621901\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\\n{\\\"enabled_modules\\\":[\\\"balancer\\\",\\\"restful\\\",\\\"status\\\"],\\\"disabled_modules\\\":[\\\"dashboard\\\",\\\"influx\\\",\\\"localpool\\\",\\\"prometheus\\\",\\\"selftest\\\",\\\"zabbix\\\"]}\", \"stdout_lines\": [\"\", \"{\\\"enabled_modules\\\":[\\\"balancer\\\",\\\"restful\\\",\\\"status\\\"],\\\"disabled_modules\\\":[\\\"dashboard\\\",\\\"influx\\\",\\\"localpool\\\",\\\"prometheus\\\",\\\"selftest\\\",\\\"zabbix\\\"]}\"]}\n\nTASK [ceph-mgr : set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/main.yml:26\nFriday 21 September 2018 08:27:56 -0400 (0:00:00.666) 0:01:38.188 ****** \nok: [controller-0] => {\"ansible_facts\": {\"_ceph_mgr_modules\": {\"disabled_modules\": [\"dashboard\", \"influx\", \"localpool\", \"prometheus\", \"selftest\", \"zabbix\"], \"enabled_modules\": [\"balancer\", \"restful\", \"status\"]}}, \"changed\": false}\n\nTASK [ceph-mgr : set _disabled_ceph_mgr_modules fact] **************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/main.yml:32\nFriday 21 September 2018 08:27:56 -0400 (0:00:00.087) 0:01:38.275 ****** \nok: [controller-0] => {\"ansible_facts\": {\"_disabled_ceph_mgr_modules\": [\"dashboard\", \"influx\", \"localpool\", \"prometheus\", \"selftest\", \"zabbix\"]}, \"changed\": false}\n\nTASK [ceph-mgr : disable ceph mgr enabled modules] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/main.yml:38\nFriday 21 September 2018 08:27:56 -0400 (0:00:00.127) 0:01:38.402 ****** \nchanged: [controller-0 -> 192.168.24.18] => (item=balancer) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"mgr\", \"module\", \"disable\", \"balancer\"], \"delta\": \"0:00:01.287601\", \"end\": \"2018-09-21 12:27:57.788535\", \"item\": \"balancer\", \"rc\": 0, \"start\": \"2018-09-21 12:27:56.500934\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}\nchanged: [controller-0 -> 192.168.24.18] => (item=restful) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"mgr\", \"module\", \"disable\", \"restful\"], \"delta\": \"0:00:00.816448\", \"end\": \"2018-09-21 12:27:58.784407\", \"item\": \"restful\", \"rc\": 0, \"start\": \"2018-09-21 12:27:57.967959\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}\nskipping: [controller-0] => (item=status) => {\"changed\": false, \"item\": \"status\", \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mgr : add modules to ceph-mgr] 
**************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/main.yml:49\nFriday 21 September 2018 08:27:58 -0400 (0:00:02.587) 0:01:40.990 ****** \nskipping: [controller-0] => (item=status) => {\"changed\": false, \"item\": \"status\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _mgr_handler_called before restart] *******\nFriday 21 September 2018 08:27:58 -0400 (0:00:00.032) 0:01:41.022 ****** \nok: [controller-0] => {\"ansible_facts\": {\"_mgr_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy mgr restart script] **********************\nFriday 21 September 2018 08:27:59 -0400 (0:00:00.176) 0:01:41.199 ****** \nok: [controller-0] => {\"changed\": false, \"checksum\": \"3b92c07facdbaa789b36f850d92d7444e2bb6a27\", \"dest\": \"/tmp/restart_mgr_daemon.sh\", \"gid\": 0, \"group\": \"root\", \"mode\": \"0750\", \"owner\": \"root\", \"path\": \"/tmp/restart_mgr_daemon.sh\", \"secontext\": \"unconfined_u:object_r:user_tmp_t:s0\", \"size\": 843, \"state\": \"file\", \"uid\": 0}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - non container] ***\nFriday 21 September 2018 08:27:59 -0400 (0:00:00.593) 0:01:41.793 ****** \nskipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - container] *******\nFriday 21 September 2018 08:27:59 -0400 (0:00:00.095) 0:01:41.888 ****** \nskipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _mgr_handler_called after restart] ********\nFriday 21 September 2018 08:27:59 -0400 (0:00:00.139) 0:01:42.027 ****** \nok: [controller-0] => {\"ansible_facts\": {\"_mgr_handler_called\": false}, \"changed\": false}\nMETA: ran handlers\n\nTASK [set ceph manager install 'Complete'] *************************************\ntask path: /usr/share/ceph-ansible/site-docker.yml.sample:130\nFriday 21 September 2018 08:28:00 -0400 (0:00:00.295) 0:01:42.322 ****** \nok: [controller-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_mgr\": {\"end\": \"20180921082800Z\", \"status\": \"Complete\"}}, \"per_host\": false}, \"changed\": false}\nMETA: ran handlers\n\nPLAY [osds] ********************************************************************\n\nTASK [set ceph osd install 'In Progress'] **************************************\ntask path: /usr/share/ceph-ansible/site-docker.yml.sample:142\nFriday 21 September 2018 08:28:00 -0400 (0:00:00.160) 0:01:42.483 ****** \nok: [ceph-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_osd\": {\"start\": \"20180921082800Z\", \"status\": \"In Progress\"}}, \"per_host\": false}, \"changed\": false}\nMETA: ran handlers\n\nTASK [ceph-defaults : check for a mon container] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:2\nFriday 21 September 2018 08:28:00 -0400 (0:00:00.082) 0:01:42.565 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for an osd container] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:11\nFriday 21 September 2018 08:28:00 -0400 
(0:00:00.044) 0:01:42.610 ****** \nok: [ceph-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-osd-ceph-0\"], \"delta\": \"0:00:00.030174\", \"end\": \"2018-09-21 12:28:00.725215\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-09-21 12:28:00.695041\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}\n\nTASK [ceph-defaults : check for a mds container] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:20\nFriday 21 September 2018 08:28:00 -0400 (0:00:00.270) 0:01:42.881 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a rgw container] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:29\nFriday 21 September 2018 08:28:00 -0400 (0:00:00.046) 0:01:42.927 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a mgr container] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:38\nFriday 21 September 2018 08:28:00 -0400 (0:00:00.046) 0:01:42.973 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a rbd mirror container] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:47\nFriday 21 September 2018 08:28:00 -0400 (0:00:00.050) 0:01:43.024 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a nfs container] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:56\nFriday 21 September 2018 08:28:00 -0400 (0:00:00.053) 0:01:43.078 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph mon socket] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:2\nFriday 21 September 2018 08:28:01 -0400 (0:00:00.053) 0:01:43.131 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph mon socket is in-use] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:11\nFriday 21 September 2018 08:28:01 -0400 (0:00:00.050) 0:01:43.181 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph mon socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:21\nFriday 21 September 2018 08:28:01 -0400 (0:00:00.048) 0:01:43.229 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph osd socket] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:30\nFriday 21 September 2018 08:28:01 -0400 (0:00:00.047) 0:01:43.277 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : 
check if the ceph osd socket is in-use] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:40\nFriday 21 September 2018 08:28:01 -0400 (0:00:00.049) 0:01:43.327 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph osd socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:50\nFriday 21 September 2018 08:28:01 -0400 (0:00:00.049) 0:01:43.376 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph mds socket] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:59\nFriday 21 September 2018 08:28:01 -0400 (0:00:00.052) 0:01:43.428 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph mds socket is in-use] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:69\nFriday 21 September 2018 08:28:01 -0400 (0:00:00.046) 0:01:43.475 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph mds socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:79\nFriday 21 September 2018 08:28:01 -0400 (0:00:00.045) 0:01:43.521 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph rgw socket] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:88\nFriday 21 September 2018 08:28:01 -0400 (0:00:00.044) 0:01:43.565 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph rgw socket is in-use] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:98\nFriday 21 September 2018 08:28:01 -0400 (0:00:00.048) 0:01:43.614 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph rgw socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:108\nFriday 21 September 2018 08:28:01 -0400 (0:00:00.045) 0:01:43.660 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph mgr socket] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:117\nFriday 21 September 2018 08:28:01 -0400 (0:00:00.045) 0:01:43.705 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph mgr socket is in-use] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:127\nFriday 21 September 2018 08:28:01 -0400 (0:00:00.046) 0:01:43.752 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph mgr socket if exists and 
not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:137\nFriday 21 September 2018 08:28:01 -0400 (0:00:00.046) 0:01:43.798 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph rbd mirror socket] **********************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:146\nFriday 21 September 2018 08:28:01 -0400 (0:00:00.045) 0:01:43.844 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph rbd mirror socket is in-use] ***********\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:156\nFriday 21 September 2018 08:28:01 -0400 (0:00:00.044) 0:01:43.888 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph rbd mirror socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:166\nFriday 21 September 2018 08:28:01 -0400 (0:00:00.049) 0:01:43.938 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph nfs ganesha socket] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:175\nFriday 21 September 2018 08:28:01 -0400 (0:00:00.045) 0:01:43.983 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph nfs ganesha socket is in-use] **********\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:184\nFriday 21 September 2018 08:28:01 -0400 (0:00:00.043) 0:01:44.027 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph nfs ganesha socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:194\nFriday 21 September 2018 08:28:01 -0400 (0:00:00.044) 0:01:44.072 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if it is atomic host] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:2\nFriday 21 September 2018 08:28:02 -0400 (0:00:00.047) 0:01:44.119 ****** \nok: [ceph-0] => {\"changed\": false, \"stat\": {\"exists\": false}}\n\nTASK [ceph-defaults : set_fact is_atomic] **************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:7\nFriday 21 September 2018 08:28:02 -0400 (0:00:00.236) 0:01:44.355 ****** \nok: [ceph-0] => {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}\n\nTASK [ceph-defaults : set_fact monitor_name ansible_hostname] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:11\nFriday 21 September 2018 08:28:02 -0400 (0:00:00.079) 0:01:44.435 ****** \nok: [ceph-0] => {\"ansible_facts\": {\"monitor_name\": \"ceph-0\"}, \"changed\": false}\n\nTASK [ceph-defaults : set_fact monitor_name ansible_fqdn] **********************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:17\nFriday 21 September 2018 
08:28:02 -0400 (0:00:00.082) 0:01:44.518 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact docker_exec_cmd] ********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:23\nFriday 21 September 2018 08:28:02 -0400 (0:00:00.079) 0:01:44.598 ****** \nok: [ceph-0 -> 192.168.24.18] => {\"ansible_facts\": {\"docker_exec_cmd\": \"docker exec ceph-mon-controller-0\"}, \"changed\": false}\n\nTASK [ceph-defaults : is ceph running already?] ********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:34\nFriday 21 September 2018 08:28:02 -0400 (0:00:00.164) 0:01:44.762 ****** \nok: [ceph-0 -> 192.168.24.18] => {\"changed\": false, \"cmd\": [\"timeout\", \"5\", \"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"-s\", \"-f\", \"json\"], \"delta\": \"0:00:00.340932\", \"end\": \"2018-09-21 12:28:03.194717\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-09-21 12:28:02.853785\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\\n{\\\"fsid\\\":\\\"8fedf068-bd95-11e8-ba69-5254006eda59\\\",\\\"health\\\":{\\\"checks\\\":{},\\\"status\\\":\\\"HEALTH_OK\\\",\\\"summary\\\":[{\\\"severity\\\":\\\"HEALTH_WARN\\\",\\\"summary\\\":\\\"'ceph health' JSON format has changed in luminous. If you see this your monitoring system is scraping the wrong fields. Disable this with 'mon health preluminous compat warning = false'\\\"}],\\\"overall_status\\\":\\\"HEALTH_WARN\\\"},\\\"election_epoch\\\":3,\\\"quorum\\\":[0],\\\"quorum_names\\\":[\\\"controller-0\\\"],\\\"monmap\\\":{\\\"epoch\\\":1,\\\"fsid\\\":\\\"8fedf068-bd95-11e8-ba69-5254006eda59\\\",\\\"modified\\\":\\\"2018-09-21 12:27:11.445099\\\",\\\"created\\\":\\\"2018-09-21 12:27:11.445099\\\",\\\"features\\\":{\\\"persistent\\\":[\\\"kraken\\\",\\\"luminous\\\"],\\\"optional\\\":[]},\\\"mons\\\":[{\\\"rank\\\":0,\\\"name\\\":\\\"controller-0\\\",\\\"addr\\\":\\\"172.17.3.16:6789/0\\\",\\\"public_addr\\\":\\\"172.17.3.16:6789/0\\\"}]},\\\"osdmap\\\":{\\\"osdmap\\\":{\\\"epoch\\\":1,\\\"num_osds\\\":0,\\\"num_up_osds\\\":0,\\\"num_in_osds\\\":0,\\\"full\\\":false,\\\"nearfull\\\":false,\\\"num_remapped_pgs\\\":0}},\\\"pgmap\\\":{\\\"pgs_by_state\\\":[],\\\"num_pgs\\\":0,\\\"num_pools\\\":0,\\\"num_objects\\\":0,\\\"data_bytes\\\":0,\\\"bytes_used\\\":0,\\\"bytes_avail\\\":0,\\\"bytes_total\\\":0},\\\"fsmap\\\":{\\\"epoch\\\":1,\\\"by_rank\\\":[]},\\\"mgrmap\\\":{\\\"epoch\\\":7,\\\"active_gid\\\":4104,\\\"active_name\\\":\\\"controller-0\\\",\\\"active_addr\\\":\\\"172.17.3.16:6800/79\\\",\\\"available\\\":true,\\\"standbys\\\":[],\\\"modules\\\":[\\\"status\\\"],\\\"available_modules\\\":[\\\"balancer\\\",\\\"dashboard\\\",\\\"influx\\\",\\\"localpool\\\",\\\"prometheus\\\",\\\"restful\\\",\\\"selftest\\\",\\\"status\\\",\\\"zabbix\\\"],\\\"services\\\":{}},\\\"servicemap\\\":{\\\"epoch\\\":1,\\\"modified\\\":\\\"0.000000\\\",\\\"services\\\":{}}}\", \"stdout_lines\": [\"\", \"{\\\"fsid\\\":\\\"8fedf068-bd95-11e8-ba69-5254006eda59\\\",\\\"health\\\":{\\\"checks\\\":{},\\\"status\\\":\\\"HEALTH_OK\\\",\\\"summary\\\":[{\\\"severity\\\":\\\"HEALTH_WARN\\\",\\\"summary\\\":\\\"'ceph health' JSON format has changed in luminous. If you see this your monitoring system is scraping the wrong fields. 
Disable this with 'mon health preluminous compat warning = false'\\\"}],\\\"overall_status\\\":\\\"HEALTH_WARN\\\"},\\\"election_epoch\\\":3,\\\"quorum\\\":[0],\\\"quorum_names\\\":[\\\"controller-0\\\"],\\\"monmap\\\":{\\\"epoch\\\":1,\\\"fsid\\\":\\\"8fedf068-bd95-11e8-ba69-5254006eda59\\\",\\\"modified\\\":\\\"2018-09-21 12:27:11.445099\\\",\\\"created\\\":\\\"2018-09-21 12:27:11.445099\\\",\\\"features\\\":{\\\"persistent\\\":[\\\"kraken\\\",\\\"luminous\\\"],\\\"optional\\\":[]},\\\"mons\\\":[{\\\"rank\\\":0,\\\"name\\\":\\\"controller-0\\\",\\\"addr\\\":\\\"172.17.3.16:6789/0\\\",\\\"public_addr\\\":\\\"172.17.3.16:6789/0\\\"}]},\\\"osdmap\\\":{\\\"osdmap\\\":{\\\"epoch\\\":1,\\\"num_osds\\\":0,\\\"num_up_osds\\\":0,\\\"num_in_osds\\\":0,\\\"full\\\":false,\\\"nearfull\\\":false,\\\"num_remapped_pgs\\\":0}},\\\"pgmap\\\":{\\\"pgs_by_state\\\":[],\\\"num_pgs\\\":0,\\\"num_pools\\\":0,\\\"num_objects\\\":0,\\\"data_bytes\\\":0,\\\"bytes_used\\\":0,\\\"bytes_avail\\\":0,\\\"bytes_total\\\":0},\\\"fsmap\\\":{\\\"epoch\\\":1,\\\"by_rank\\\":[]},\\\"mgrmap\\\":{\\\"epoch\\\":7,\\\"active_gid\\\":4104,\\\"active_name\\\":\\\"controller-0\\\",\\\"active_addr\\\":\\\"172.17.3.16:6800/79\\\",\\\"available\\\":true,\\\"standbys\\\":[],\\\"modules\\\":[\\\"status\\\"],\\\"available_modules\\\":[\\\"balancer\\\",\\\"dashboard\\\",\\\"influx\\\",\\\"localpool\\\",\\\"prometheus\\\",\\\"restful\\\",\\\"selftest\\\",\\\"status\\\",\\\"zabbix\\\"],\\\"services\\\":{}},\\\"servicemap\\\":{\\\"epoch\\\":1,\\\"modified\\\":\\\"0.000000\\\",\\\"services\\\":{}}}\"]}\n\nTASK [ceph-defaults : check if /var/lib/mistral/overcloud/ceph-ansible/fetch_dir directory exists] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:47\nFriday 21 September 2018 08:28:03 -0400 (0:00:00.597) 0:01:45.360 ****** \nok: [ceph-0 -> localhost] => {\"changed\": false, \"stat\": {\"exists\": false}}\n\nTASK [ceph-defaults : set_fact ceph_current_status rc 1] ***********************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:57\nFriday 21 September 2018 08:28:03 -0400 (0:00:00.197) 0:01:45.557 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : create a local fetch directory if it does not exist] *****\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:64\nFriday 21 September 2018 08:28:03 -0400 (0:00:00.053) 0:01:45.611 ****** \nok: [ceph-0 -> localhost] => {\"changed\": false, \"gid\": 42430, \"group\": \"mistral\", \"mode\": \"0755\", \"owner\": \"mistral\", \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir\", \"size\": 80, \"state\": \"directory\", \"uid\": 42430}\n\nTASK [ceph-defaults : set_fact ceph_current_status (convert to json)] **********\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:74\nFriday 21 September 2018 08:28:03 -0400 (0:00:00.214) 0:01:45.825 ****** \nok: [ceph-0] => {\"ansible_facts\": {\"ceph_current_status\": {\"election_epoch\": 3, \"fsid\": \"8fedf068-bd95-11e8-ba69-5254006eda59\", \"fsmap\": {\"by_rank\": [], \"epoch\": 1}, \"health\": {\"checks\": {}, \"overall_status\": \"HEALTH_WARN\", \"status\": \"HEALTH_OK\", \"summary\": [{\"severity\": \"HEALTH_WARN\", \"summary\": \"'ceph health' JSON format has changed in luminous. If you see this your monitoring system is scraping the wrong fields. 
Disable this with 'mon health preluminous compat warning = false'\"}]}, \"mgrmap\": {\"active_addr\": \"172.17.3.16:6800/79\", \"active_gid\": 4104, \"active_name\": \"controller-0\", \"available\": true, \"available_modules\": [\"balancer\", \"dashboard\", \"influx\", \"localpool\", \"prometheus\", \"restful\", \"selftest\", \"status\", \"zabbix\"], \"epoch\": 7, \"modules\": [\"status\"], \"services\": {}, \"standbys\": []}, \"monmap\": {\"created\": \"2018-09-21 12:27:11.445099\", \"epoch\": 1, \"features\": {\"optional\": [], \"persistent\": [\"kraken\", \"luminous\"]}, \"fsid\": \"8fedf068-bd95-11e8-ba69-5254006eda59\", \"modified\": \"2018-09-21 12:27:11.445099\", \"mons\": [{\"addr\": \"172.17.3.16:6789/0\", \"name\": \"controller-0\", \"public_addr\": \"172.17.3.16:6789/0\", \"rank\": 0}]}, \"osdmap\": {\"osdmap\": {\"epoch\": 1, \"full\": false, \"nearfull\": false, \"num_in_osds\": 0, \"num_osds\": 0, \"num_remapped_pgs\": 0, \"num_up_osds\": 0}}, \"pgmap\": {\"bytes_avail\": 0, \"bytes_total\": 0, \"bytes_used\": 0, \"data_bytes\": 0, \"num_objects\": 0, \"num_pgs\": 0, \"num_pools\": 0, \"pgs_by_state\": []}, \"quorum\": [0], \"quorum_names\": [\"controller-0\"], \"servicemap\": {\"epoch\": 1, \"modified\": \"0.000000\", \"services\": {}}}}, \"changed\": false}\n\nTASK [ceph-defaults : set_fact fsid from ceph_current_status] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:81\nFriday 21 September 2018 08:28:03 -0400 (0:00:00.083) 0:01:45.909 ****** \nok: [ceph-0] => {\"ansible_facts\": {\"fsid\": \"8fedf068-bd95-11e8-ba69-5254006eda59\"}, \"changed\": false}\n\nTASK [ceph-defaults : set_fact ceph_release ceph_stable_release] ***************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:88\nFriday 21 September 2018 08:28:03 -0400 (0:00:00.073) 0:01:45.983 ****** \nok: [ceph-0] => {\"ansible_facts\": {\"ceph_release\": \"dummy\"}, \"changed\": false}\n\nTASK [ceph-defaults : generate cluster fsid] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:92\nFriday 21 September 2018 08:28:03 -0400 (0:00:00.077) 0:01:46.061 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : reuse cluster fsid when cluster is already running] ******\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:103\nFriday 21 September 2018 08:28:03 -0400 (0:00:00.046) 0:01:46.107 ****** \nok: [ceph-0 -> localhost] => {\"changed\": false, \"cmd\": \"echo 8fedf068-bd95-11e8-ba69-5254006eda59 | tee /var/lib/mistral/overcloud/ceph-ansible/fetch_dir/ceph_cluster_uuid.conf\", \"rc\": 0, \"stdout\": \"skipped, since /var/lib/mistral/overcloud/ceph-ansible/fetch_dir/ceph_cluster_uuid.conf exists\", \"stdout_lines\": [\"skipped, since /var/lib/mistral/overcloud/ceph-ansible/fetch_dir/ceph_cluster_uuid.conf exists\"]}\n\nTASK [ceph-defaults : read cluster fsid if it already exists] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:112\nFriday 21 September 2018 08:28:04 -0400 (0:00:00.198) 0:01:46.305 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact fsid] *******************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:124\nFriday 21 September 2018 08:28:04 -0400 (0:00:00.042) 0:01:46.347 ****** \nskipping: [ceph-0] => 
{\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact mds_name ansible_hostname] **********************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:130\nFriday 21 September 2018 08:28:04 -0400 (0:00:00.041) 0:01:46.389 ****** \nok: [ceph-0] => {\"ansible_facts\": {\"mds_name\": \"ceph-0\"}, \"changed\": false}\n\nTASK [ceph-defaults : set_fact mds_name ansible_fqdn] **************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:136\nFriday 21 September 2018 08:28:04 -0400 (0:00:00.073) 0:01:46.462 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact rbd_client_directory_owner ceph] ****************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:142\nFriday 21 September 2018 08:28:04 -0400 (0:00:00.041) 0:01:46.504 ****** \nok: [ceph-0] => {\"ansible_facts\": {\"rbd_client_directory_owner\": \"ceph\"}, \"changed\": false}\n\nTASK [ceph-defaults : set_fact rbd_client_directory_group rbd_client_directory_group] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:149\nFriday 21 September 2018 08:28:04 -0400 (0:00:00.073) 0:01:46.578 ****** \nok: [ceph-0] => {\"ansible_facts\": {\"rbd_client_directory_group\": \"ceph\"}, \"changed\": false}\n\nTASK [ceph-defaults : set_fact rbd_client_directory_mode 0770] *****************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:156\nFriday 21 September 2018 08:28:04 -0400 (0:00:00.198) 0:01:46.777 ****** \nok: [ceph-0] => {\"ansible_facts\": {\"rbd_client_directory_mode\": \"0770\"}, \"changed\": false}\n\nTASK [ceph-defaults : resolve device link(s)] **********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:163\nFriday 21 September 2018 08:28:04 -0400 (0:00:00.198) 0:01:46.975 ****** \nok: [ceph-0] => (item=/dev/vdb) => {\"changed\": false, \"cmd\": [\"readlink\", \"-f\", \"/dev/vdb\"], \"delta\": \"0:00:00.002909\", \"end\": \"2018-09-21 12:28:05.183223\", \"item\": \"/dev/vdb\", \"rc\": 0, \"start\": \"2018-09-21 12:28:05.180314\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"/dev/vdb\", \"stdout_lines\": [\"/dev/vdb\"]}\nok: [ceph-0] => (item=/dev/vdc) => {\"changed\": false, \"cmd\": [\"readlink\", \"-f\", \"/dev/vdc\"], \"delta\": \"0:00:00.003139\", \"end\": \"2018-09-21 12:28:05.337201\", \"item\": \"/dev/vdc\", \"rc\": 0, \"start\": \"2018-09-21 12:28:05.334062\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"/dev/vdc\", \"stdout_lines\": [\"/dev/vdc\"]}\nok: [ceph-0] => (item=/dev/vdd) => {\"changed\": false, \"cmd\": [\"readlink\", \"-f\", \"/dev/vdd\"], \"delta\": \"0:00:00.003170\", \"end\": \"2018-09-21 12:28:05.485220\", \"item\": \"/dev/vdd\", \"rc\": 0, \"start\": \"2018-09-21 12:28:05.482050\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"/dev/vdd\", \"stdout_lines\": [\"/dev/vdd\"]}\nok: [ceph-0] => (item=/dev/vde) => {\"changed\": false, \"cmd\": [\"readlink\", \"-f\", \"/dev/vde\"], \"delta\": \"0:00:00.003465\", \"end\": \"2018-09-21 12:28:05.658588\", \"item\": \"/dev/vde\", \"rc\": 0, \"start\": \"2018-09-21 12:28:05.655123\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"/dev/vde\", \"stdout_lines\": [\"/dev/vde\"]}\nok: [ceph-0] => (item=/dev/vdf) => {\"changed\": false, \"cmd\": [\"readlink\", \"-f\", \"/dev/vdf\"], \"delta\": \"0:00:00.003025\", \"end\": \"2018-09-21 
12:28:05.811910\", \"item\": \"/dev/vdf\", \"rc\": 0, \"start\": \"2018-09-21 12:28:05.808885\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"/dev/vdf\", \"stdout_lines\": [\"/dev/vdf\"]}\n\nTASK [ceph-defaults : set_fact build devices from resolved symlinks] ***********\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:173\nFriday 21 September 2018 08:28:05 -0400 (0:00:00.989) 0:01:47.965 ****** \nok: [ceph-0] => (item={'_ansible_parsed': True, 'stderr_lines': [], '_ansible_item_result': True, u'end': u'2018-09-21 12:28:05.183223', '_ansible_no_log': False, u'stdout': u'/dev/vdb', u'cmd': [u'readlink', u'-f', u'/dev/vdb'], u'rc': 0, 'item': u'/dev/vdb', u'delta': u'0:00:00.002909', '_ansible_item_label': u'/dev/vdb', u'stderr': u'', u'changed': False, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': False, u'_raw_params': u'readlink -f /dev/vdb', u'removes': None, u'creates': None, u'chdir': None, u'stdin': None}}, 'stdout_lines': [u'/dev/vdb'], u'start': u'2018-09-21 12:28:05.180314', '_ansible_ignore_errors': None, 'failed': False}) => {\"ansible_facts\": {\"devices\": [\"/dev/vdb\", \"/dev/vdc\", \"/dev/vdd\", \"/dev/vde\", \"/dev/vdf\", \"/dev/vdb\"]}, \"changed\": false, \"item\": {\"changed\": false, \"cmd\": [\"readlink\", \"-f\", \"/dev/vdb\"], \"delta\": \"0:00:00.002909\", \"end\": \"2018-09-21 12:28:05.183223\", \"failed\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"readlink -f /dev/vdb\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": \"/dev/vdb\", \"rc\": 0, \"start\": \"2018-09-21 12:28:05.180314\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"/dev/vdb\", \"stdout_lines\": [\"/dev/vdb\"]}}\nok: [ceph-0] => (item={'_ansible_parsed': True, 'stderr_lines': [], '_ansible_item_result': True, u'end': u'2018-09-21 12:28:05.337201', '_ansible_no_log': False, u'stdout': u'/dev/vdc', u'cmd': [u'readlink', u'-f', u'/dev/vdc'], u'rc': 0, 'item': u'/dev/vdc', u'delta': u'0:00:00.003139', '_ansible_item_label': u'/dev/vdc', u'stderr': u'', u'changed': False, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': False, u'_raw_params': u'readlink -f /dev/vdc', u'removes': None, u'creates': None, u'chdir': None, u'stdin': None}}, 'stdout_lines': [u'/dev/vdc'], u'start': u'2018-09-21 12:28:05.334062', '_ansible_ignore_errors': None, 'failed': False}) => {\"ansible_facts\": {\"devices\": [\"/dev/vdb\", \"/dev/vdc\", \"/dev/vdd\", \"/dev/vde\", \"/dev/vdf\", \"/dev/vdb\", \"/dev/vdc\"]}, \"changed\": false, \"item\": {\"changed\": false, \"cmd\": [\"readlink\", \"-f\", \"/dev/vdc\"], \"delta\": \"0:00:00.003139\", \"end\": \"2018-09-21 12:28:05.337201\", \"failed\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"readlink -f /dev/vdc\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": \"/dev/vdc\", \"rc\": 0, \"start\": \"2018-09-21 12:28:05.334062\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"/dev/vdc\", \"stdout_lines\": [\"/dev/vdc\"]}}\nok: [ceph-0] => (item={'_ansible_parsed': True, 'stderr_lines': [], '_ansible_item_result': True, u'end': u'2018-09-21 12:28:05.485220', '_ansible_no_log': False, u'stdout': u'/dev/vdd', u'cmd': [u'readlink', u'-f', u'/dev/vdd'], u'rc': 0, 'item': u'/dev/vdd', u'delta': u'0:00:00.003170', '_ansible_item_label': 
u'/dev/vdd', u'stderr': u'', u'changed': False, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': False, u'_raw_params': u'readlink -f /dev/vdd', u'removes': None, u'creates': None, u'chdir': None, u'stdin': None}}, 'stdout_lines': [u'/dev/vdd'], u'start': u'2018-09-21 12:28:05.482050', '_ansible_ignore_errors': None, 'failed': False}) => {\"ansible_facts\": {\"devices\": [\"/dev/vdb\", \"/dev/vdc\", \"/dev/vdd\", \"/dev/vde\", \"/dev/vdf\", \"/dev/vdb\", \"/dev/vdc\", \"/dev/vdd\"]}, \"changed\": false, \"item\": {\"changed\": false, \"cmd\": [\"readlink\", \"-f\", \"/dev/vdd\"], \"delta\": \"0:00:00.003170\", \"end\": \"2018-09-21 12:28:05.485220\", \"failed\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"readlink -f /dev/vdd\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": \"/dev/vdd\", \"rc\": 0, \"start\": \"2018-09-21 12:28:05.482050\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"/dev/vdd\", \"stdout_lines\": [\"/dev/vdd\"]}}\nok: [ceph-0] => (item={'_ansible_parsed': True, 'stderr_lines': [], '_ansible_item_result': True, u'end': u'2018-09-21 12:28:05.658588', '_ansible_no_log': False, u'stdout': u'/dev/vde', u'cmd': [u'readlink', u'-f', u'/dev/vde'], u'rc': 0, 'item': u'/dev/vde', u'delta': u'0:00:00.003465', '_ansible_item_label': u'/dev/vde', u'stderr': u'', u'changed': False, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': False, u'_raw_params': u'readlink -f /dev/vde', u'removes': None, u'creates': None, u'chdir': None, u'stdin': None}}, 'stdout_lines': [u'/dev/vde'], u'start': u'2018-09-21 12:28:05.655123', '_ansible_ignore_errors': None, 'failed': False}) => {\"ansible_facts\": {\"devices\": [\"/dev/vdb\", \"/dev/vdc\", \"/dev/vdd\", \"/dev/vde\", \"/dev/vdf\", \"/dev/vdb\", \"/dev/vdc\", \"/dev/vdd\", \"/dev/vde\"]}, \"changed\": false, \"item\": {\"changed\": false, \"cmd\": [\"readlink\", \"-f\", \"/dev/vde\"], \"delta\": \"0:00:00.003465\", \"end\": \"2018-09-21 12:28:05.658588\", \"failed\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"readlink -f /dev/vde\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": \"/dev/vde\", \"rc\": 0, \"start\": \"2018-09-21 12:28:05.655123\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"/dev/vde\", \"stdout_lines\": [\"/dev/vde\"]}}\nok: [ceph-0] => (item={'_ansible_parsed': True, 'stderr_lines': [], '_ansible_item_result': True, u'end': u'2018-09-21 12:28:05.811910', '_ansible_no_log': False, u'stdout': u'/dev/vdf', u'cmd': [u'readlink', u'-f', u'/dev/vdf'], u'rc': 0, 'item': u'/dev/vdf', u'delta': u'0:00:00.003025', '_ansible_item_label': u'/dev/vdf', u'stderr': u'', u'changed': False, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': False, u'_raw_params': u'readlink -f /dev/vdf', u'removes': None, u'creates': None, u'chdir': None, u'stdin': None}}, 'stdout_lines': [u'/dev/vdf'], u'start': u'2018-09-21 12:28:05.808885', '_ansible_ignore_errors': None, 'failed': False}) => {\"ansible_facts\": {\"devices\": [\"/dev/vdb\", \"/dev/vdc\", \"/dev/vdd\", \"/dev/vde\", \"/dev/vdf\", \"/dev/vdb\", \"/dev/vdc\", \"/dev/vdd\", \"/dev/vde\", \"/dev/vdf\"]}, \"changed\": false, \"item\": {\"changed\": false, \"cmd\": [\"readlink\", \"-f\", \"/dev/vdf\"], \"delta\": \"0:00:00.003025\", \"end\": 
\"2018-09-21 12:28:05.811910\", \"failed\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"readlink -f /dev/vdf\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": \"/dev/vdf\", \"rc\": 0, \"start\": \"2018-09-21 12:28:05.808885\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"/dev/vdf\", \"stdout_lines\": [\"/dev/vdf\"]}}\n\nTASK [ceph-defaults : set_fact build final devices list] ***********************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:182\nFriday 21 September 2018 08:28:06 -0400 (0:00:00.292) 0:01:48.257 ****** \nok: [ceph-0] => {\"ansible_facts\": {\"devices\": [\"/dev/vdb\", \"/dev/vdc\", \"/dev/vdd\", \"/dev/vde\", \"/dev/vdf\"]}, \"changed\": false}\n\nTASK [ceph-defaults : set_fact ceph_uid for debian based system - non container] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:190\nFriday 21 September 2018 08:28:06 -0400 (0:00:00.201) 0:01:48.459 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact ceph_uid for red hat based system - non container] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:197\nFriday 21 September 2018 08:28:06 -0400 (0:00:00.046) 0:01:48.505 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact ceph_uid for debian based system - container] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:204\nFriday 21 September 2018 08:28:06 -0400 (0:00:00.044) 0:01:48.550 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact ceph_uid for red hat based system - container] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:211\nFriday 21 September 2018 08:28:06 -0400 (0:00:00.051) 0:01:48.602 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact ceph_uid for red hat] ***************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:218\nFriday 21 September 2018 08:28:06 -0400 (0:00:00.054) 0:01:48.657 ****** \nok: [ceph-0] => {\"ansible_facts\": {\"ceph_uid\": 167}, \"changed\": false}\n\nTASK [ceph-defaults : set_fact rgw_hostname - fqdn] ****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:225\nFriday 21 September 2018 08:28:06 -0400 (0:00:00.217) 0:01:48.874 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact rgw_hostname - no fqdn] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:235\nFriday 21 September 2018 08:28:06 -0400 (0:00:00.047) 0:01:48.922 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact ceph_directories] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:2\nFriday 21 September 2018 08:28:06 -0400 (0:00:00.046) 0:01:48.969 ****** \nok: [ceph-0] => {\"ansible_facts\": {\"ceph_directories\": [\"/etc/ceph\", \"/var/lib/ceph/\", \"/var/lib/ceph/mon\", \"/var/lib/ceph/osd\", \"/var/lib/ceph/mds\", 
\"/var/lib/ceph/tmp\", \"/var/lib/ceph/radosgw\", \"/var/lib/ceph/bootstrap-rgw\", \"/var/lib/ceph/bootstrap-mds\", \"/var/lib/ceph/bootstrap-osd\", \"/var/lib/ceph/bootstrap-rbd\", \"/var/run/ceph\"]}, \"changed\": false}\n\nTASK [ceph-defaults : create ceph initial directories] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:18\nFriday 21 September 2018 08:28:06 -0400 (0:00:00.073) 0:01:49.042 ****** \nchanged: [ceph-0] => (item=/etc/ceph) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nchanged: [ceph-0] => (item=/var/lib/ceph/) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nchanged: [ceph-0] => (item=/var/lib/ceph/mon) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/mon\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mon\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nchanged: [ceph-0] => (item=/var/lib/ceph/osd) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/osd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nchanged: [ceph-0] => (item=/var/lib/ceph/mds) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/mds\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nchanged: [ceph-0] => (item=/var/lib/ceph/tmp) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/tmp\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/tmp\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nchanged: [ceph-0] => (item=/var/lib/ceph/radosgw) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/radosgw\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/radosgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nchanged: [ceph-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nchanged: [ceph-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nchanged: [ceph-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", 
\"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nchanged: [ceph-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nchanged: [ceph-0] => (item=/var/run/ceph) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/run/ceph\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/run/ceph\", \"secontext\": \"unconfined_u:object_r:var_run_t:s0\", \"size\": 40, \"state\": \"directory\", \"uid\": 167}\n\nTASK [ceph-docker-common : fail if systemd is not present] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml:2\nFriday 21 September 2018 08:28:09 -0400 (0:00:02.156) 0:01:51.199 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : make sure monitor_interface, monitor_address or monitor_address_block is defined] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:2\nFriday 21 September 2018 08:28:09 -0400 (0:00:00.049) 0:01:51.249 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : make sure radosgw_interface, radosgw_address or radosgw_address_block is defined] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:11\nFriday 21 September 2018 08:28:09 -0400 (0:00:00.047) 0:01:51.297 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : warning deprecation for fqdn configuration] *********\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:20\nFriday 21 September 2018 08:28:09 -0400 (0:00:00.047) 0:01:51.344 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : remove ceph udev rules] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml:2\nFriday 21 September 2018 08:28:09 -0400 (0:00:00.048) 0:01:51.393 ****** \nok: [ceph-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) => {\"changed\": false, \"item\": \"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"path\": \"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"state\": \"absent\"}\nok: [ceph-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) => {\"changed\": false, \"item\": \"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"path\": \"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"state\": \"absent\"}\n\nTASK [ceph-docker-common : set_fact monitor_name ansible_hostname] *************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:14\nFriday 21 September 2018 08:28:09 -0400 (0:00:00.431) 0:01:51.824 ****** \nok: [ceph-0] => {\"ansible_facts\": {\"monitor_name\": \"ceph-0\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact monitor_name ansible_fqdn] *****************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:20\nFriday 21 September 2018 08:28:09 
-0400 (0:00:00.078) 0:01:51.902 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : get docker version] *********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:26\nFriday 21 September 2018 08:28:09 -0400 (0:00:00.041) 0:01:51.943 ****** \nok: [ceph-0] => {\"changed\": false, \"cmd\": [\"docker\", \"--version\"], \"delta\": \"0:00:00.021711\", \"end\": \"2018-09-21 12:28:10.039613\", \"rc\": 0, \"start\": \"2018-09-21 12:28:10.017902\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Docker version 1.13.1, build 6e3bb8e/1.13.1\", \"stdout_lines\": [\"Docker version 1.13.1, build 6e3bb8e/1.13.1\"]}\n\nTASK [ceph-docker-common : set_fact ceph_docker_version ceph_docker_version.stdout.split] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:32\nFriday 21 September 2018 08:28:10 -0400 (0:00:00.249) 0:01:52.193 ****** \nok: [ceph-0] => {\"ansible_facts\": {\"ceph_docker_version\": \"1.13.1,\"}, \"changed\": false}\n\nTASK [ceph-docker-common : check if a cluster is already running] **************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:42\nFriday 21 September 2018 08:28:10 -0400 (0:00:00.074) 0:01:52.267 ****** \nok: [ceph-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mon-ceph-0\"], \"delta\": \"0:00:00.025814\", \"end\": \"2018-09-21 12:28:10.369419\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-09-21 12:28:10.343605\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}\n\nTASK [ceph-docker-common : set_fact ceph_config_keys] **************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:2\nFriday 21 September 2018 08:28:10 -0400 (0:00:00.256) 0:01:52.524 ****** \nok: [ceph-0] => {\"ansible_facts\": {\"ceph_config_keys\": [\"/etc/ceph/ceph.client.admin.keyring\", \"/etc/ceph/monmap-ceph\", \"/etc/ceph/ceph.mon.keyring\", \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\"]}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact tmp_ceph_mgr_keys add mgr keys to config and keys paths] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:13\nFriday 21 September 2018 08:28:10 -0400 (0:00:00.093) 0:01:52.617 ****** \nok: [ceph-0] => (item=controller-0) => {\"ansible_facts\": {\"tmp_ceph_mgr_keys\": \"/etc/ceph/ceph.mgr.controller-0.keyring\"}, \"changed\": false, \"item\": \"controller-0\"}\n\nTASK [ceph-docker-common : set_fact ceph_mgr_keys convert mgr keys to an array] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:20\nFriday 21 September 2018 08:28:10 -0400 (0:00:00.154) 0:01:52.772 ****** \nok: [ceph-0] => {\"ansible_facts\": {\"ceph_mgr_keys\": [\"/etc/ceph/ceph.mgr.controller-0.keyring\"]}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact ceph_config_keys merge mgr keys to config and keys paths] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:25\nFriday 21 September 2018 08:28:10 -0400 (0:00:00.100) 0:01:52.873 ****** \nok: [ceph-0] => {\"ansible_facts\": {\"ceph_config_keys\": [\"/etc/ceph/ceph.client.admin.keyring\", \"/etc/ceph/monmap-ceph\", \"/etc/ceph/ceph.mon.keyring\", 
\"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"/etc/ceph/ceph.mgr.controller-0.keyring\"]}, \"changed\": false}\n\nTASK [ceph-docker-common : stat for ceph config and keys] **********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:30\nFriday 21 September 2018 08:28:10 -0400 (0:00:00.097) 0:01:52.970 ****** \nok: [ceph-0 -> localhost] => (item=/etc/ceph/ceph.client.admin.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"stat\": {\"atime\": 1537532848.0440793, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"9e373fe5b7239c71b2c20b1e9dda563cef508b10\", \"ctime\": 1537532848.0440793, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 30664835, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1537532848.0440793, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//etc/ceph/ceph.client.admin.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 159, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}\nok: [ceph-0 -> localhost] => (item=/etc/ceph/monmap-ceph) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/monmap-ceph\", \"stat\": {\"exists\": false}}\nok: [ceph-0 -> localhost] => (item=/etc/ceph/ceph.mon.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"atime\": 1537532848.2170777, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"71985a44f030d17c775335c42962737bc688e6a0\", \"ctime\": 1537532848.2160778, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 30664837, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1537532848.2160778, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//etc/ceph/ceph.mon.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 688, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}\nok: [ceph-0 -> localhost] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"atime\": 1537532848.3970761, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"64333848b27ab8d9f98e1749b646f53ce8491e92\", \"ctime\": 1537532848.3970761, \"dev\": 64769, \"device_type\": 0, 
\"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 46865184, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1537532848.3970761, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-osd/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}\nok: [ceph-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"atime\": 1537532848.5800743, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"ad253570a945c870140d7f94eccef76f44861e59\", \"ctime\": 1537532848.5800743, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 51894543, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1537532848.5800743, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}\nok: [ceph-0 -> localhost] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"atime\": 1537532848.7600725, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"40b83591ce4be64f55769e0a0d8aca12db95c281\", \"ctime\": 1537532848.7600725, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 55762959, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1537532848.7600725, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-mds/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}\nok: [ceph-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"atime\": 1537532848.9380708, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, 
\"charset\": \"unknown\", \"checksum\": \"cf7920e30e8d8566b8b9f935a5f741908c23465e\", \"ctime\": 1537532848.9380708, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 60028473, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1537532848.9380708, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}\nok: [ceph-0 -> localhost] => (item=/etc/ceph/ceph.mgr.controller-0.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"stat\": {\"atime\": 1537532872.9868395, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"f02fcb991c5a53a3bf474c15b6a514c8356b9c69\", \"ctime\": 1537532850.6510544, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 30664838, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1537532850.6510544, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//etc/ceph/ceph.mgr.controller-0.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 67, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}\n\nTASK [ceph-docker-common : fail if we find existing cluster files] *************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml:5\nFriday 21 September 2018 08:28:12 -0400 (0:00:01.379) 0:01:54.349 ****** \nskipping: [ceph-0] => (item=[u'/etc/ceph/ceph.client.admin.keyring', {'_ansible_parsed': True, u'stat': {u'charset': u'unknown', u'uid': 42430, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1537532848.0440793, u'block_size': 4096, u'inode': 30664835, u'isgid': False, u'size': 159, u'executable': False, u'roth': True, u'isuid': False, u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 42430, u'ischr': False, u'wusr': True, u'writeable': True, u'isdir': False, u'blocks': 8, u'xoth': False, u'rusr': True, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//etc/ceph/ceph.client.admin.keyring', u'xusr': False, u'atime': 1537532848.0440793, u'mimetype': u'unknown', u'ctime': 1537532848.0440793, u'isblk': False, u'checksum': u'9e373fe5b7239c71b2c20b1e9dda563cef508b10', u'dev': 64769, u'wgrp': False, u'isfifo': False, u'mode': u'0644', u'xgrp': False, u'islnk': False, u'attributes': []}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': 
{'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/ceph.client.admin.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//etc/ceph/ceph.client.admin.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.client.admin.keyring'}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.client.admin.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/ceph.client.admin.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//etc/ceph/ceph.client.admin.keyring\"}}, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"stat\": {\"atime\": 1537532848.0440793, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"9e373fe5b7239c71b2c20b1e9dda563cef508b10\", \"ctime\": 1537532848.0440793, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 30664835, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1537532848.0440793, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//etc/ceph/ceph.client.admin.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 159, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => (item=[u'/etc/ceph/monmap-ceph', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/monmap-ceph', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//etc/ceph/monmap-ceph', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/monmap-ceph'}]) => {\"changed\": false, \"item\": [\"/etc/ceph/monmap-ceph\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/monmap-ceph\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, 
\"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//etc/ceph/monmap-ceph\"}}, \"item\": \"/etc/ceph/monmap-ceph\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => (item=[u'/etc/ceph/ceph.mon.keyring', {'_ansible_parsed': True, u'stat': {u'charset': u'unknown', u'uid': 42430, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1537532848.2160778, u'block_size': 4096, u'inode': 30664837, u'isgid': False, u'size': 688, u'executable': False, u'roth': True, u'isuid': False, u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 42430, u'ischr': False, u'wusr': True, u'writeable': True, u'isdir': False, u'blocks': 8, u'xoth': False, u'rusr': True, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//etc/ceph/ceph.mon.keyring', u'xusr': False, u'atime': 1537532848.2170777, u'mimetype': u'unknown', u'ctime': 1537532848.2160778, u'isblk': False, u'checksum': u'71985a44f030d17c775335c42962737bc688e6a0', u'dev': 64769, u'wgrp': False, u'isfifo': False, u'mode': u'0644', u'xgrp': False, u'islnk': False, u'attributes': []}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/ceph.mon.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//etc/ceph/ceph.mon.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.mon.keyring'}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.mon.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/ceph.mon.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//etc/ceph/ceph.mon.keyring\"}}, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"atime\": 1537532848.2170777, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"71985a44f030d17c775335c42962737bc688e6a0\", \"ctime\": 1537532848.2160778, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 30664837, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, 
\"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1537532848.2160778, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//etc/ceph/ceph.mon.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 688, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => (item=[u'/var/lib/ceph/bootstrap-osd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'charset': u'unknown', u'uid': 42430, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1537532848.3970761, u'block_size': 4096, u'inode': 46865184, u'isgid': False, u'size': 113, u'executable': False, u'roth': True, u'isuid': False, u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 42430, u'ischr': False, u'wusr': True, u'writeable': True, u'isdir': False, u'blocks': 8, u'xoth': False, u'rusr': True, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-osd/ceph.keyring', u'xusr': False, u'atime': 1537532848.3970761, u'mimetype': u'unknown', u'ctime': 1537532848.3970761, u'isblk': False, u'checksum': u'64333848b27ab8d9f98e1749b646f53ce8491e92', u'dev': 64769, u'wgrp': False, u'isfifo': False, u'mode': u'0644', u'xgrp': False, u'islnk': False, u'attributes': []}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-osd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-osd/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-osd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-osd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"atime\": 1537532848.3970761, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"64333848b27ab8d9f98e1749b646f53ce8491e92\", \"ctime\": 1537532848.3970761, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 46865184, \"isblk\": false, \"ischr\": 
false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1537532848.3970761, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-osd/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => (item=[u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'charset': u'unknown', u'uid': 42430, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1537532848.5800743, u'block_size': 4096, u'inode': 51894543, u'isgid': False, u'size': 113, u'executable': False, u'roth': True, u'isuid': False, u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 42430, u'ischr': False, u'wusr': True, u'writeable': True, u'isdir': False, u'blocks': 8, u'xoth': False, u'rusr': True, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-rgw/ceph.keyring', u'xusr': False, u'atime': 1537532848.5800743, u'mimetype': u'unknown', u'ctime': 1537532848.5800743, u'isblk': False, u'checksum': u'ad253570a945c870140d7f94eccef76f44861e59', u'dev': 64769, u'wgrp': False, u'isfifo': False, u'mode': u'0644', u'xgrp': False, u'islnk': False, u'attributes': []}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-rgw/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-rgw/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"atime\": 1537532848.5800743, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"ad253570a945c870140d7f94eccef76f44861e59\", \"ctime\": 1537532848.5800743, \"dev\": 64769, 
\"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 51894543, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1537532848.5800743, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => (item=[u'/var/lib/ceph/bootstrap-mds/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'charset': u'unknown', u'uid': 42430, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1537532848.7600725, u'block_size': 4096, u'inode': 55762959, u'isgid': False, u'size': 113, u'executable': False, u'roth': True, u'isuid': False, u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 42430, u'ischr': False, u'wusr': True, u'writeable': True, u'isdir': False, u'blocks': 8, u'xoth': False, u'rusr': True, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-mds/ceph.keyring', u'xusr': False, u'atime': 1537532848.7600725, u'mimetype': u'unknown', u'ctime': 1537532848.7600725, u'isblk': False, u'checksum': u'40b83591ce4be64f55769e0a0d8aca12db95c281', u'dev': 64769, u'wgrp': False, u'isfifo': False, u'mode': u'0644', u'xgrp': False, u'islnk': False, u'attributes': []}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-mds/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-mds/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-mds/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-mds/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-mds/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"atime\": 1537532848.7600725, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, 
\"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"40b83591ce4be64f55769e0a0d8aca12db95c281\", \"ctime\": 1537532848.7600725, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 55762959, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1537532848.7600725, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-mds/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => (item=[u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'charset': u'unknown', u'uid': 42430, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1537532848.9380708, u'block_size': 4096, u'inode': 60028473, u'isgid': False, u'size': 113, u'executable': False, u'roth': True, u'isuid': False, u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 42430, u'ischr': False, u'wusr': True, u'writeable': True, u'isdir': False, u'blocks': 8, u'xoth': False, u'rusr': True, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-rbd/ceph.keyring', u'xusr': False, u'atime': 1537532848.9380708, u'mimetype': u'unknown', u'ctime': 1537532848.9380708, u'isblk': False, u'checksum': u'cf7920e30e8d8566b8b9f935a5f741908c23465e', u'dev': 64769, u'wgrp': False, u'isfifo': False, u'mode': u'0644', u'xgrp': False, u'islnk': False, u'attributes': []}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-rbd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-rbd/ceph.keyring\"}}, \"item\": 
\"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"atime\": 1537532848.9380708, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"cf7920e30e8d8566b8b9f935a5f741908c23465e\", \"ctime\": 1537532848.9380708, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 60028473, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1537532848.9380708, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => (item=[u'/etc/ceph/ceph.mgr.controller-0.keyring', {'_ansible_parsed': True, u'stat': {u'charset': u'unknown', u'uid': 42430, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1537532850.6510544, u'block_size': 4096, u'inode': 30664838, u'isgid': False, u'size': 67, u'executable': False, u'roth': True, u'isuid': False, u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 42430, u'ischr': False, u'wusr': True, u'writeable': True, u'isdir': False, u'blocks': 8, u'xoth': False, u'rusr': True, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//etc/ceph/ceph.mgr.controller-0.keyring', u'xusr': False, u'atime': 1537532872.9868395, u'mimetype': u'unknown', u'ctime': 1537532850.6510544, u'isblk': False, u'checksum': u'f02fcb991c5a53a3bf474c15b6a514c8356b9c69', u'dev': 64769, u'wgrp': False, u'isfifo': False, u'mode': u'0644', u'xgrp': False, u'islnk': False, u'attributes': []}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/ceph.mgr.controller-0.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//etc/ceph/ceph.mgr.controller-0.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.mgr.controller-0.keyring'}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.mgr.controller-0.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": 
\"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//etc/ceph/ceph.mgr.controller-0.keyring\"}}, \"item\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"stat\": {\"atime\": 1537532872.9868395, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"f02fcb991c5a53a3bf474c15b6a514c8356b9c69\", \"ctime\": 1537532850.6510544, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 30664838, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1537532850.6510544, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//etc/ceph/ceph.mgr.controller-0.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 67, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}], \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : check ntp installation on atomic] *******************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml:2\nFriday 21 September 2018 08:28:12 -0400 (0:00:00.398) 0:01:54.748 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : start the ntp service] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml:6\nFriday 21 September 2018 08:28:12 -0400 (0:00:00.055) 0:01:54.803 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : check ntp installation on redhat or suse] ***********\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:2\nFriday 21 September 2018 08:28:12 -0400 (0:00:00.050) 0:01:54.854 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : install ntp on redhat or suse] **********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:13\nFriday 21 September 2018 08:28:12 -0400 (0:00:00.053) 0:01:54.907 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : start the ntp service] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml:7\nFriday 21 September 2018 08:28:12 -0400 (0:00:00.057) 0:01:54.965 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : check ntp installation on debian] *******************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:2\nFriday 21 September 2018 08:28:12 -0400 (0:00:00.053) 0:01:55.018 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : install ntp on debian] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:11\nFriday 
21 September 2018 08:28:12 -0400 (0:00:00.050) 0:01:55.068 ****** 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : start the ntp service] ******************************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml:7
Friday 21 September 2018 08:28:13 -0400 (0:00:00.052) 0:01:55.121 ****** 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : inspect ceph mon container] *************************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:3
Friday 21 September 2018 08:28:13 -0400 (0:00:00.051) 0:01:55.173 ****** 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : inspect ceph osd container] *************************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:12
Friday 21 September 2018 08:28:13 -0400 (0:00:00.050) 0:01:55.224 ****** 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : inspect ceph mds container] *************************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:21
Friday 21 September 2018 08:28:13 -0400 (0:00:00.065) 0:01:55.289 ****** 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : inspect ceph rgw container] *************************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:30
Friday 21 September 2018 08:28:13 -0400 (0:00:00.048) 0:01:55.337 ****** 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : inspect ceph mgr container] *************************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:39
Friday 21 September 2018 08:28:13 -0400 (0:00:00.052) 0:01:55.389 ****** 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : inspect ceph rbd mirror container] ******************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:48
Friday 21 September 2018 08:28:13 -0400 (0:00:00.054) 0:01:55.444 ****** 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : inspect ceph nfs container] *************************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:57
Friday 21 September 2018 08:28:13 -0400 (0:00:00.047) 0:01:55.491 ****** 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : inspecting ceph mon container image before pulling] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:67
Friday 21 September 2018 08:28:13 -0400 (0:00:00.051) 0:01:55.543 ****** 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : inspecting ceph osd container image before pulling] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:76
Friday 21 September 2018 08:28:13 -0400 (0:00:00.068) 0:01:55.612 ****** 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : inspecting ceph rgw container image before pulling] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:85
Friday 21 September 2018 08:28:13 -0400 (0:00:00.058) 0:01:55.670 ****** 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : inspecting ceph mds container image before pulling] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:94
Friday 21 September 2018 08:28:13 -0400 (0:00:00.053) 0:01:55.723 ****** 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : inspecting ceph mgr container image before pulling] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:103
Friday 21 September 2018 08:28:13 -0400 (0:00:00.049) 0:01:55.773 ****** 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : inspecting ceph rbd mirror container image before pulling] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:112
Friday 21 September 2018 08:28:13 -0400 (0:00:00.053) 0:01:55.827 ****** 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : inspecting ceph nfs container image before pulling] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:121
Friday 21 September 2018 08:28:13 -0400 (0:00:00.050) 0:01:55.877 ****** 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : set_fact ceph_mon_image_repodigest_before_pulling] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:130
Friday 21 September 2018 08:28:13 -0400 (0:00:00.049) 0:01:55.926 ****** 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : set_fact ceph_osd_image_repodigest_before_pulling] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:137
Friday 21 September 2018 08:28:13 -0400 (0:00:00.045) 0:01:55.972 ****** 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : set_fact ceph_mds_image_repodigest_before_pulling] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:144
Friday 21 September 2018 08:28:13 -0400 (0:00:00.047) 0:01:56.020 ****** 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : set_fact ceph_rgw_image_repodigest_before_pulling] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:151
Friday 21 September 2018 08:28:13 -0400 (0:00:00.044) 0:01:56.065 ****** 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : set_fact ceph_mgr_image_repodigest_before_pulling] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:158
Friday 21 September 2018 08:28:13 -0400 (0:00:00.045) 0:01:56.110 ****** 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_repodigest_before_pulling] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:165
Friday 21 September 2018 08:28:14 -0400 (0:00:00.044) 0:01:56.154 ****** 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : set_fact ceph_nfs_image_repodigest_before_pulling] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:172
Friday 21 September 2018 08:28:14 -0400 (0:00:00.046) 0:01:56.201 ****** 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : pulling 192.168.24.1:8787/rhceph:3-12 image] ********
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:179
Friday 21 September 2018 08:28:14 -0400 (0:00:00.050) 0:01:56.251 ****** 
ok: [ceph-0] => {"attempts": 1, "changed": false, "cmd": ["timeout", "300s", "docker", "pull", "192.168.24.1:8787/rhceph:3-12"], "delta": "0:00:13.428064", "end": "2018-09-21 12:28:27.762592", "rc": 0, "start": "2018-09-21 12:28:14.334528", "stderr": "", "stderr_lines": [], "stdout": "Trying to pull repository 192.168.24.1:8787/rhceph ... \n3-12: Pulling from 192.168.24.1:8787/rhceph\n428a9ca37f0e: Pulling fs layer\n8115a58d83bd: Pulling fs layer\n5e409f26eefe: Pulling fs layer\n8115a58d83bd: Verifying Checksum\n8115a58d83bd: Download complete\n428a9ca37f0e: Verifying Checksum\n428a9ca37f0e: Download complete\n5e409f26eefe: Verifying Checksum\n5e409f26eefe: Download complete\n428a9ca37f0e: Pull complete\n8115a58d83bd: Pull complete\n5e409f26eefe: Pull complete\nDigest: sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\nStatus: Downloaded newer image for 192.168.24.1:8787/rhceph:3-12", "stdout_lines": ["Trying to pull repository 192.168.24.1:8787/rhceph ... ", "3-12: Pulling from 192.168.24.1:8787/rhceph", "428a9ca37f0e: Pulling fs layer", "8115a58d83bd: Pulling fs layer", "5e409f26eefe: Pulling fs layer", "8115a58d83bd: Verifying Checksum", "8115a58d83bd: Download complete", "428a9ca37f0e: Verifying Checksum", "428a9ca37f0e: Download complete", "5e409f26eefe: Verifying Checksum", "5e409f26eefe: Download complete", "428a9ca37f0e: Pull complete", "8115a58d83bd: Pull complete", "5e409f26eefe: Pull complete", "Digest: sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c", "Status: Downloaded newer image for 192.168.24.1:8787/rhceph:3-12"]}
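
[note: fetch_image.yml decides whether each daemon's image changed by comparing the RepoDigest recorded before the pull with the one recorded after it; the before-pull set_fact tasks are all skipped above because no image was present yet. A rough out-of-band equivalent, using only standard docker CLI options (image name taken from this log; the variable names are illustrative):

  IMAGE=192.168.24.1:8787/rhceph:3-12
  before=$(docker inspect --format '{{index .RepoDigests 0}}' "$IMAGE" 2>/dev/null)  # empty if not yet pulled
  timeout 300s docker pull "$IMAGE"                                                  # same command the task runs
  after=$(docker inspect --format '{{index .RepoDigests 0}}' "$IMAGE")
  [ "$before" != "$after" ] && echo "image updated"]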
\", \"3-12: Pulling from 192.168.24.1:8787/rhceph\", \"428a9ca37f0e: Pulling fs layer\", \"8115a58d83bd: Pulling fs layer\", \"5e409f26eefe: Pulling fs layer\", \"8115a58d83bd: Verifying Checksum\", \"8115a58d83bd: Download complete\", \"428a9ca37f0e: Verifying Checksum\", \"428a9ca37f0e: Download complete\", \"5e409f26eefe: Verifying Checksum\", \"5e409f26eefe: Download complete\", \"428a9ca37f0e: Pull complete\", \"8115a58d83bd: Pull complete\", \"5e409f26eefe: Pull complete\", \"Digest: sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\", \"Status: Downloaded newer image for 192.168.24.1:8787/rhceph:3-12\"]}\n\nTASK [ceph-docker-common : inspecting 192.168.24.1:8787/rhceph:3-12 image after pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:189\nFriday 21 September 2018 08:28:27 -0400 (0:00:13.670) 0:02:09.921 ****** \nchanged: [ceph-0] => {\"changed\": true, \"cmd\": [\"docker\", \"inspect\", \"192.168.24.1:8787/rhceph:3-12\"], \"delta\": \"0:00:00.027124\", \"end\": \"2018-09-21 12:28:28.018959\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-09-21 12:28:27.991835\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"[\\n {\\n \\\"Id\\\": \\\"sha256:fa3b551f095247f90f7ace123cc14519b26abd4c042eb0270ab2452d9636a41f\\\",\\n \\\"RepoTags\\\": [\\n \\\"192.168.24.1:8787/rhceph:3-12\\\"\\n ],\\n \\\"RepoDigests\\\": [\\n \\\"192.168.24.1:8787/rhceph@sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\\\"\\n ],\\n \\\"Parent\\\": \\\"\\\",\\n \\\"Comment\\\": \\\"\\\",\\n \\\"Created\\\": \\\"2018-08-06T22:30:33.81313Z\\\",\\n \\\"Container\\\": \\\"\\\",\\n \\\"ContainerConfig\\\": {\\n \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": [\\n \\\"/bin/sh\\\",\\n \\\"-c\\\",\\n \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z5-b1e8f.repo'\\\"\\n ],\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"sha256:379a20daa6033d04119c4ca45fffe3e50f0cfd517d8712a222b53bea11ee4493\\\",\\n \\\"Volumes\\\": null,\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"stable-3.0\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\\n \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n 
\\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.expose-services\\\": \\\"\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"12\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\\n \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. Use it as a base to build your own images.\\\",\\n \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"DockerVersion\\\": \\\"1.12.6\\\",\\n \\\"Author\\\": \\\"\\\",\\n \\\"Config\\\": {\\n \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": null,\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"9e41cab2948f6e02bef2d4df2d2d21f082f2e9f5b5aadcb04d70201596959834\\\",\\n \\\"Volumes\\\": null,\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"stable-3.0\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\\n \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e 
MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.expose-services\\\": \\\"\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"12\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\\n \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. Use it as a base to build your own images.\\\",\\n \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"Architecture\\\": \\\"amd64\\\",\\n \\\"Os\\\": \\\"linux\\\",\\n \\\"Size\\\": 592066185,\\n \\\"VirtualSize\\\": 592066185,\\n \\\"GraphDriver\\\": {\\n \\\"Name\\\": \\\"overlay2\\\",\\n \\\"Data\\\": {\\n \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/c1c30ce1dcf2b7db29c713c8a41824356ead2dbe1c9dfd97aa3ee642074fcf4b/diff:/var/lib/docker/overlay2/45f63713c0446d74ff6d3c6aa0b1aa2ab1c61cb75d4ebd421a02603488f56496/diff\\\",\\n \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/45fea4bde2d2f33c81b8fe348902856f4ce88b498bac5ed1649ee15ef4a1574d/merged\\\",\\n \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/45fea4bde2d2f33c81b8fe348902856f4ce88b498bac5ed1649ee15ef4a1574d/diff\\\",\\n \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/45fea4bde2d2f33c81b8fe348902856f4ce88b498bac5ed1649ee15ef4a1574d/work\\\"\\n }\\n },\\n \\\"RootFS\\\": {\\n \\\"Type\\\": \\\"layers\\\",\\n \\\"Layers\\\": [\\n \\\"sha256:db195156f4cd9e83cf2a76f1319d5f839cf2552ea1d23c0317931786b1f594cf\\\",\\n \\\"sha256:6e8ca199394f13d2b08b652f8281d3c2f8ad22333737e8ad2ff554f881bcd8a1\\\",\\n \\\"sha256:984d7131485eaebe7b45bb3052fae34a956316f94faf95681b8480a904179cfa\\\"\\n ]\\n }\\n }\\n]\", \"stdout_lines\": [\"[\", \" {\", \" \\\"Id\\\": \\\"sha256:fa3b551f095247f90f7ace123cc14519b26abd4c042eb0270ab2452d9636a41f\\\",\", \" \\\"RepoTags\\\": [\", \" \\\"192.168.24.1:8787/rhceph:3-12\\\"\", \" ],\", \" \\\"RepoDigests\\\": [\", \" \\\"192.168.24.1:8787/rhceph@sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\\\"\", \" ],\", \" \\\"Parent\\\": \\\"\\\",\", \" \\\"Comment\\\": \\\"\\\",\", \" \\\"Created\\\": \\\"2018-08-06T22:30:33.81313Z\\\",\", \" \\\"Container\\\": \\\"\\\",\", \" \\\"ContainerConfig\\\": {\", \" \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" 
\\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": [\", \" \\\"/bin/sh\\\",\", \" \\\"-c\\\",\", \" \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z5-b1e8f.repo'\\\"\", \" ],\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"sha256:379a20daa6033d04119c4ca45fffe3e50f0cfd517d8712a222b53bea11ee4493\\\",\", \" \\\"Volumes\\\": null,\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"stable-3.0\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\", \" \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.expose-services\\\": \\\"\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"12\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\", \" \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\", \" \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"DockerVersion\\\": \\\"1.12.6\\\",\", \" \\\"Author\\\": \\\"\\\",\", \" \\\"Config\\\": {\", \" \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": null,\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"9e41cab2948f6e02bef2d4df2d2d21f082f2e9f5b5aadcb04d70201596959834\\\",\", \" \\\"Volumes\\\": null,\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"stable-3.0\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\", \" \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.expose-services\\\": \\\"\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"12\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\", \" \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\", \" \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"Architecture\\\": \\\"amd64\\\",\", \" \\\"Os\\\": \\\"linux\\\",\", \" \\\"Size\\\": 592066185,\", \" \\\"VirtualSize\\\": 592066185,\", \" \\\"GraphDriver\\\": {\", \" \\\"Name\\\": \\\"overlay2\\\",\", \" \\\"Data\\\": {\", \" \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/c1c30ce1dcf2b7db29c713c8a41824356ead2dbe1c9dfd97aa3ee642074fcf4b/diff:/var/lib/docker/overlay2/45f63713c0446d74ff6d3c6aa0b1aa2ab1c61cb75d4ebd421a02603488f56496/diff\\\",\", \" \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/45fea4bde2d2f33c81b8fe348902856f4ce88b498bac5ed1649ee15ef4a1574d/merged\\\",\", \" \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/45fea4bde2d2f33c81b8fe348902856f4ce88b498bac5ed1649ee15ef4a1574d/diff\\\",\", \" \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/45fea4bde2d2f33c81b8fe348902856f4ce88b498bac5ed1649ee15ef4a1574d/work\\\"\", \" }\", \" },\", \" \\\"RootFS\\\": {\", \" \\\"Type\\\": \\\"layers\\\",\", \" \\\"Layers\\\": [\", \" \\\"sha256:db195156f4cd9e83cf2a76f1319d5f839cf2552ea1d23c0317931786b1f594cf\\\",\", \" \\\"sha256:6e8ca199394f13d2b08b652f8281d3c2f8ad22333737e8ad2ff554f881bcd8a1\\\",\", \" \\\"sha256:984d7131485eaebe7b45bb3052fae34a956316f94faf95681b8480a904179cfa\\\"\", \" ]\", \" }\", \" }\", \"]\"]}\n\nTASK [ceph-docker-common : set_fact image_repodigest_after_pulling] ************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:194\nFriday 21 September 2018 08:28:28 -0400 (0:00:00.373) 0:02:10.295 ****** \nok: [ceph-0] => {\"ansible_facts\": {\"image_repodigest_after_pulling\": \"sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact ceph_mon_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:200\nFriday 21 September 2018 08:28:28 -0400 (0:00:00.186) 0:02:10.481 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_osd_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:211\nFriday 21 September 2018 08:28:28 -0400 (0:00:00.048) 0:02:10.529 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mds_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:222\nFriday 21 September 2018 08:28:28 -0400 (0:00:00.054) 0:02:10.584 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rgw_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:233\nFriday 21 September 2018 08:28:28 -0400 (0:00:00.048) 0:02:10.633 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mgr_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:244\nFriday 21 September 2018 08:28:28 -0400 (0:00:00.050) 0:02:10.683 ****** \nskipping: [ceph-0] => 
{\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_updated] *************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:255\nFriday 21 September 2018 08:28:28 -0400 (0:00:00.053) 0:02:10.736 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_nfs_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:266\nFriday 21 September 2018 08:28:28 -0400 (0:00:00.050) 0:02:10.787 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : export local ceph dev image] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:277\nFriday 21 September 2018 08:28:28 -0400 (0:00:00.051) 0:02:10.838 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : copy ceph dev image file] ***************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:285\nFriday 21 September 2018 08:28:28 -0400 (0:00:00.046) 0:02:10.885 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : load ceph dev image] ********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:292\nFriday 21 September 2018 08:28:28 -0400 (0:00:00.045) 0:02:10.931 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : remove tmp ceph dev image file] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:297\nFriday 21 September 2018 08:28:28 -0400 (0:00:00.046) 0:02:10.977 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : get ceph version] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:84\nFriday 21 September 2018 08:28:28 -0400 (0:00:00.054) 0:02:11.032 ****** \nok: [ceph-0] => {\"changed\": false, \"cmd\": [\"docker\", \"run\", \"--rm\", \"--entrypoint\", \"/usr/bin/ceph\", \"192.168.24.1:8787/rhceph:3-12\", \"--version\"], \"delta\": \"0:00:00.427005\", \"end\": \"2018-09-21 12:28:29.637584\", \"rc\": 0, \"start\": \"2018-09-21 12:28:29.210579\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"ceph version 12.2.4-42.el7cp (f73642baacccbf2a3c254d1fb5f0317b933b28cf) luminous (stable)\", \"stdout_lines\": [\"ceph version 12.2.4-42.el7cp (f73642baacccbf2a3c254d1fb5f0317b933b28cf) luminous (stable)\"]}\n\nTASK [ceph-docker-common : set_fact ceph_version ceph_version.stdout.split] ****\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:90\nFriday 21 September 2018 08:28:29 -0400 (0:00:00.761) 0:02:11.793 ****** \nok: [ceph-0] => {\"ansible_facts\": {\"ceph_version\": \"12.2.4-42.el7cp\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact ceph_release jewel] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:2\nFriday 21 September 2018 08:28:29 -0400 (0:00:00.080) 0:02:11.873 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": 
\"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_release kraken] ***********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:8\nFriday 21 September 2018 08:28:29 -0400 (0:00:00.048) 0:02:11.922 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_release luminous] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:14\nFriday 21 September 2018 08:28:29 -0400 (0:00:00.053) 0:02:11.976 ****** \nok: [ceph-0] => {\"ansible_facts\": {\"ceph_release\": \"luminous\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact ceph_release mimic] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:20\nFriday 21 September 2018 08:28:30 -0400 (0:00:00.192) 0:02:12.168 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_release nautilus] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:26\nFriday 21 September 2018 08:28:30 -0400 (0:00:00.047) 0:02:12.216 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : create bootstrap directories] ***********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml:2\nFriday 21 September 2018 08:28:30 -0400 (0:00:00.047) 0:02:12.264 ****** \nchanged: [ceph-0] => (item=/etc/ceph) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}\nchanged: [ceph-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}\nchanged: [ceph-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}\nchanged: [ceph-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}\nchanged: [ceph-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}\n\nTASK [ceph-config : c >reate ceph conf directory] ********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:4\nFriday 21 September 2018 08:28:31 -0400 
Friday 21 September 2018 08:28:31 -0400 (0:00:00.945) 0:02:13.209 ****** 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-config : generate ceph configuration file: ceph.conf] ***************
task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:12
Friday 21 September 2018 08:28:31 -0400 (0:00:00.051) 0:02:13.260 ****** 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-config : create a local fetch directory if it does not exist] *******
task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:38
Friday 21 September 2018 08:28:31 -0400 (0:00:00.052) 0:02:13.312 ****** 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-config : generate cluster uuid] *************************************
task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:54
Friday 21 September 2018 08:28:31 -0400 (0:00:00.059) 0:02:13.372 ****** 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-config : read cluster uuid if it already exists] ********************
task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:64
Friday 21 September 2018 08:28:31 -0400 (0:00:00.053) 0:02:13.425 ****** 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-config : ensure /etc/ceph exists] ***********************************
task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:76
Friday 21 September 2018 08:28:31 -0400 (0:00:00.054) 0:02:13.480 ****** 
changed: [ceph-0] => {"changed": true, "gid": 167, "group": "167", "mode": "0755", "owner": "167", "path": "/etc/ceph", "secontext": "unconfined_u:object_r:etc_t:s0", "size": 6, "state": "directory", "uid": 167}

TASK [ceph-config : generate ceph.conf configuration file] *********************
task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:84
Friday 21 September 2018 08:28:31 -0400 (0:00:00.327) 0:02:13.808 ****** 
NOTIFIED HANDLER ceph-defaults : set _mon_handler_called before restart for ceph-0
NOTIFIED HANDLER ceph-defaults : copy mon restart script for ceph-0
NOTIFIED HANDLER ceph-defaults : restart ceph mon daemon(s) - non container for ceph-0
NOTIFIED HANDLER ceph-defaults : restart ceph mon daemon(s) - container for ceph-0
NOTIFIED HANDLER ceph-defaults : set _mon_handler_called after restart for ceph-0
NOTIFIED HANDLER ceph-defaults : set _osd_handler_called before restart for ceph-0
NOTIFIED HANDLER ceph-defaults : copy osd restart script for ceph-0
NOTIFIED HANDLER ceph-defaults : restart ceph osds daemon(s) - non container for ceph-0
NOTIFIED HANDLER ceph-defaults : restart ceph osds daemon(s) - container for ceph-0
NOTIFIED HANDLER ceph-defaults : set _osd_handler_called after restart for ceph-0
NOTIFIED HANDLER ceph-defaults : set _mds_handler_called before restart for ceph-0
NOTIFIED HANDLER ceph-defaults : copy mds restart script for ceph-0
NOTIFIED HANDLER ceph-defaults : restart ceph mds daemon(s) - non container for ceph-0
NOTIFIED HANDLER ceph-defaults : restart ceph mds daemon(s) - container for ceph-0
NOTIFIED HANDLER ceph-defaults : set _mds_handler_called after restart for ceph-0
NOTIFIED HANDLER ceph-defaults : set _rgw_handler_called before restart for ceph-0
NOTIFIED HANDLER ceph-defaults : copy rgw restart script for ceph-0
NOTIFIED HANDLER ceph-defaults : restart ceph rgw daemon(s) - non container for ceph-0
NOTIFIED HANDLER ceph-defaults : restart ceph rgw daemon(s) - container for ceph-0
NOTIFIED HANDLER ceph-defaults : set _rgw_handler_called after restart for ceph-0
NOTIFIED HANDLER ceph-defaults : set _mgr_handler_called before restart for ceph-0
NOTIFIED HANDLER ceph-defaults : copy mgr restart script for ceph-0
NOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - non container for ceph-0
NOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - container for ceph-0
NOTIFIED HANDLER ceph-defaults : set _mgr_handler_called after restart for ceph-0
NOTIFIED HANDLER ceph-defaults : set _rbdmirror_handler_called before restart for ceph-0
NOTIFIED HANDLER ceph-defaults : copy rbd mirror restart script for ceph-0
NOTIFIED HANDLER ceph-defaults : restart ceph rbd mirror daemon(s) - non container for ceph-0
NOTIFIED HANDLER ceph-defaults : restart ceph rbd mirror daemon(s) - container for ceph-0
NOTIFIED HANDLER ceph-defaults : set _rbdmirror_handler_called after restart for ceph-0
changed: [ceph-0] => {"changed": true, "checksum": "405e62fe566533b00313a76f366c912348a265e6", "dest": "/etc/ceph/ceph.conf", "gid": 0, "group": "root", "md5sum": "f7a4e6d34b91a8adf314d24533355d85", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:etc_t:s0", "size": 1213, "src": "/tmp/ceph_ansible_tmp/ansible-tmp-1537532911.75-113649126647026/source", "state": "file", "uid": 0}

TASK [ceph-config : set fsid fact when generate_fsid = true] *******************
task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:102
Friday 21 September 2018 08:28:33 -0400 (0:00:02.165) 0:02:15.974 ****** 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : make sure public_network configured] **************************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:2
Friday 21 September 2018 08:28:33 -0400 (0:00:00.050) 0:02:16.024 ****** 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : make sure cluster_network configured] *************************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:8
Friday 21 September 2018 08:28:33 -0400 (0:00:00.048) 0:02:16.073 ****** 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : make sure journal_size configured] ****************************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:15
Friday 21 September 2018 08:28:34 -0400 (0:00:00.049) 0:02:16.123 ****** 
ok: [ceph-0] => {
 "msg": "WARNING: journal_size is configured to 512, which is less than 5GB. This is not recommended and can lead to severe issues."
}
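
[note: the 1213-byte /etc/ceph/ceph.conf written above is the rendered configuration under discussion in this report. A quick hypothetical check on the deployed node, not part of the playbook, to see which options the template actually emitted:

  grep -nE 'admin[ _]socket|journal[ _]size' /etc/ceph/ceph.conf
The journal_size warning is a separate matter: raising the ceph-ansible journal_size variable to at least 5120 (MB) would satisfy the 5GB recommendation.]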

TASK [ceph-osd : make sure an osd scenario was chosen] *************************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:23
Friday 21 September 2018 08:28:34 -0400 (0:00:00.095) 0:02:16.218 ****** 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : make sure a valid osd scenario was chosen] ********************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:31
Friday 21 September 2018 08:28:34 -0400 (0:00:00.052) 0:02:16.271 ****** 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : verify devices have been provided] ****************************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:39
Friday 21 September 2018 08:28:34 -0400 (0:00:00.054) 0:02:16.326 ****** 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : check if osd_scenario lvm is supported by the selected ceph version] ***
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:49
Friday 21 September 2018 08:28:34 -0400 (0:00:00.058) 0:02:16.385 ****** 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : verify lvm_volumes have been provided] ************************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:59
Friday 21 September 2018 08:28:34 -0400 (0:00:00.053) 0:02:16.439 ****** 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : make sure the lvm_volumes variable is a list] *****************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:69
Friday 21 September 2018 08:28:34 -0400 (0:00:00.051) 0:02:16.490 ****** 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : make sure the devices variable is a list] *********************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:79
Friday 21 September 2018 08:28:34 -0400 (0:00:00.068) 0:02:16.559 ****** 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : verify dedicated devices have been provided] ******************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:88
Friday 21 September 2018 08:28:34 -0400 (0:00:00.058) 0:02:16.617 ****** 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : make sure the dedicated_devices variable is a list] ***********
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:98
Friday 21 September 2018 08:28:34 -0400 (0:00:00.056) 0:02:16.673 ****** 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : check if bluestore is supported by the selected ceph version] ***
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:109
Friday 21 September 2018 08:28:34 -0400 (0:00:00.054) 0:02:16.728 ****** 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : include system_tuning.yml] ************************************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:5
Friday 21 September 2018 08:28:34 -0400 (0:00:00.059) 0:02:16.787 ****** 
included: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml for ceph-0

TASK [ceph-osd : disable osd directory parsing by updatedb] ********************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:2
Friday 21 September 2018 08:28:34 -0400 (0:00:00.093) 0:02:16.880 ****** 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : disable osd directory path in updatedb.conf] ******************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:11
Friday 21 September 2018 08:28:34 -0400 (0:00:00.052) 0:02:16.933 ****** 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : create tmpfiles.d directory] **********************************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:22
Friday 21 September 2018 08:28:34 -0400 (0:00:00.052) 0:02:16.985 ****** 
ok: [ceph-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/etc/tmpfiles.d", "secontext": "system_u:object_r:etc_t:s0", "size": 6, "state": "directory", "uid": 0}

TASK [ceph-osd : disable transparent hugepage] *********************************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:33
Friday 21 September 2018 08:28:35 -0400 (0:00:00.233) 0:02:17.218 ****** 
changed: [ceph-0] => {"changed": true, "checksum": "e000059a4cfd8ce350b13f14305a46eaf99849ba", "dest": "/etc/tmpfiles.d/ceph_transparent_hugepage.conf", "gid": 0, "group": "root", "md5sum": "21ac872f3aa1fb44b01d4f7ab00a35fc", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:etc_t:s0", "size": 158, "src": "/tmp/ceph_ansible_tmp/ansible-tmp-1537532915.28-40963815083215/source", "state": "file", "uid": 0}

TASK [ceph-osd : get default vm.min_free_kbytes] *******************************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:45
Friday 21 September 2018 08:28:35 -0400 (0:00:00.651) 0:02:17.870 ****** 
ok: [ceph-0] => {"changed": false, "cmd": ["sysctl", "-b", "vm.min_free_kbytes"], "delta": "0:00:00.004872", "end": "2018-09-21 12:28:36.065392", "failed_when_result": false, "rc": 0, "start": "2018-09-21 12:28:36.060520", "stderr": "", "stderr_lines": [], "stdout": "67584", "stdout_lines": ["67584"]}

TASK [ceph-osd : set_fact vm_min_free_kbytes] **********************************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:52
Friday 21 September 2018 08:28:36 -0400 (0:00:00.351) 0:02:18.221 ****** 
ok: [ceph-0] => {"ansible_facts": {"vm_min_free_kbytes": "67584"}, "changed": false}

TASK [ceph-osd : apply operating system tuning] ********************************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:56
Friday 21 September 2018 08:28:36 -0400 (0:00:00.198) 0:02:18.420 ****** 
changed: [ceph-0] => (item={u'enable': u"(osd_objectstore == 'bluestore')", u'name': u'fs.aio-max-nr', u'value': u'1048576'}) => {"changed": true, "item": {"enable": "(osd_objectstore == 'bluestore')", "name": "fs.aio-max-nr", "value": "1048576"}}
changed: [ceph-0] => (item={u'name': u'fs.file-max', u'value': 26234859}) => {"changed": true, "item": {"name": "fs.file-max", "value": 26234859}}
changed: [ceph-0] => (item={u'name': u'vm.zone_reclaim_mode', u'value': 0}) => {"changed": true, "item": {"name": "vm.zone_reclaim_mode", "value": 0}}
changed: [ceph-0] => (item={u'name': u'vm.swappiness', u'value': 10}) => {"changed": true, "item": {"name": "vm.swappiness", "value": 10}}
changed: [ceph-0] => (item={u'name': u'vm.min_free_kbytes', u'value': u'67584'}) => {"changed": true, "item": {"name": "vm.min_free_kbytes", "value": "67584"}}
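
[note: the os_tuning items above are ordinary kernel sysctls; the equivalent one-off commands, with the values from this run, would be the following (sysctl -w only changes the running kernel, whereas the ansible sysctl module also persists the settings):

  sysctl -w fs.aio-max-nr=1048576
  sysctl -w fs.file-max=26234859
  sysctl -w vm.zone_reclaim_mode=0
  sysctl -w vm.swappiness=10
  sysctl -w vm.min_free_kbytes=67584]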

TASK [ceph-osd : install dependencies] *****************************************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:10
Friday 21 September 2018 08:28:37 -0400 (0:00:01.168) 0:02:19.588 ****** 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : include common.yml] *******************************************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:18
Friday 21 September 2018 08:28:37 -0400 (0:00:00.050) 0:02:19.639 ****** 
included: /usr/share/ceph-ansible/roles/ceph-osd/tasks/common.yml for ceph-0

TASK [ceph-osd : create bootstrap-osd and osd directories] *********************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/common.yml:2
Friday 21 September 2018 08:28:37 -0400 (0:00:00.171) 0:02:19.810 ****** 
changed: [ceph-0] => (item=/var/lib/ceph/bootstrap-osd/) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/bootstrap-osd/", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/bootstrap-osd/", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
ok: [ceph-0] => (item=/var/lib/ceph/osd/) => {"changed": false, "gid": 167, "group": "167", "item": "/var/lib/ceph/osd/", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/osd/", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}

TASK [ceph-osd : copy ceph key(s) if needed] ***********************************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/common.yml:15
Friday 21 September 2018 08:28:38 -0400 (0:00:00.408) 0:02:20.219 ****** 
changed: [ceph-0] => (item={u'name': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', u'copy_key': True}) => {"changed": true, "checksum": "64333848b27ab8d9f98e1749b646f53ce8491e92", "dest": "/var/lib/ceph/bootstrap-osd/ceph.keyring", "gid": 167, "group": "167", "item": {"copy_key": true, "name": "/var/lib/ceph/bootstrap-osd/ceph.keyring"}, "md5sum": "d0dcfd5572ae39eb0ce251488182ec1b", "mode": "0600", "owner": "167", "secontext": "system_u:object_r:var_lib_t:s0", "size": 113, "src": "/tmp/ceph_ansible_tmp/ansible-tmp-1537532918.16-87817898207575/source", "state": "file", "uid": 167}
skipping: [ceph-0] => (item={u'name': u'/etc/ceph/ceph.client.admin.keyring', u'copy_key': False}) => {"changed": false, "item": {"copy_key": false, "name": "/etc/ceph/ceph.client.admin.keyring"}, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : set_fact ceph_disk_cli_options '--cluster ceph --bluestore'] ***
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:2
Friday 21 September 2018 08:28:38 -0400 (0:00:00.534) 0:02:20.753 ****** 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : set_fact ceph_disk_cli_options 'ceph_disk_cli_options'] *******
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:11
Friday 21 September 2018 08:28:38 -0400 (0:00:00.048) 0:02:20.802 ****** 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : set_fact ceph_disk_cli_options '--cluster ceph'] **************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:20
Friday 21 September 2018 08:28:38 -0400 (0:00:00.069) 0:02:20.871 ****** 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : set_fact ceph_disk_cli_options '--cluster ceph --bluestore --dmcrypt'] ***
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:29
Friday 21 September 2018 08:28:38 -0400 (0:00:00.059) 0:02:20.930 ****** 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : set_fact ceph_disk_cli_options '--cluster ceph --filestore --dmcrypt'] ***
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:38
Friday 21 September 2018 08:28:38 -0400 (0:00:00.051) 0:02:20.981 ****** 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : set_fact ceph_disk_cli_options '--cluster ceph --dmcrypt'] ****
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:47
Friday 21 September 2018 08:28:38 -0400 (0:00:00.052) 0:02:21.034 ****** 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : set_fact docker_env_args '-e KV_TYPE=etcd -e KV_IP=127.0.0.1 -e KV_PORT=2379'] ***
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:56
Friday 21 September 2018 08:28:38 -0400 (0:00:00.054) 0:02:21.089 ****** 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : set_fact docker_env_args '-e OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=0'] ***
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:62
Friday 21 September 2018 08:28:39 -0400 (0:00:00.048) 0:02:21.137 ****** 
ok: [ceph-0] => {"ansible_facts": {"docker_env_args": "-e OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=0"}, "changed": false}

TASK [ceph-osd : set_fact docker_env_args '-e OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=1'] ***
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:70
Friday 21 September 2018 08:28:39 -0400 (0:00:00.085) 0:02:21.222 ****** 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : set_fact docker_env_args '-e OSD_BLUESTORE=1 -e OSD_FILESTORE=0 -e OSD_DMCRYPT=0'] ***
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:78
Friday 21 September 2018 08:28:39 -0400 (0:00:00.053) 0:02:21.275 ****** 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : set_fact docker_env_args '-e OSD_BLUESTORE=1 -e OSD_FILESTORE=0 -e OSD_DMCRYPT=1'] ***
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:86
Friday 21 September 2018 08:28:39 -0400 (0:00:00.046) 0:02:21.322 ****** 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : set_fact devices generate device list when osd_auto_discovery] ***
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/build_devices.yml:2
Friday 21 September 2018 08:28:39 -0400 (0:00:00.048) 0:02:21.370 ****** 
skipping: [ceph-0] => (item={'value': {u'scheduler_mode': u'mq-deadline', u'rotational': u'1', u'vendor': u'0x1af4', u'sectors': u'20971520', u'links': {u'masters': [], u'labels': [], u'ids': [], u'uuids': []}, u'sas_device_handle': None, u'sas_address': None, u'virtual': 1, u'host': u'SCSI storage controller: Red Hat, Inc. Virtio block device', u'sectorsize': u'512', u'removable': u'0', u'support_discard': u'0', u'model': None, u'partitions': {u'vda1': {u'sectorsize': 512, u'uuid': u'2018-09-21-08-09-59-00', u'links': {u'masters': [], u'labels': [u'config-2'], u'ids': [], u'uuids': [u'2018-09-21-08-09-59-00']}, u'sectors': u'2048', u'start': u'2048', u'holders': [], u'size': u'1.00 MB'}, u'vda2': {u'sectorsize': 512, u'uuid': u'db072aa5-689e-4872-9a7a-742ec4624465', u'links': {u'masters': [], u'labels': [u'img-rootfs'], u'ids': [], u'uuids': [u'db072aa5-689e-4872-9a7a-742ec4624465']}, u'sectors': u'20967391', u'start': u'4096', u'holders': [], u'size': u'10.00 GB'}}, u'holders': [], u'size': u'10.00 GB'}, 'key': u'vda'}) => {"changed": false, "item": {"key": "vda", "value": {"holders": [], "host": "SCSI storage controller: Red Hat, Inc. Virtio block device", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {"vda1": {"holders": [], "links": {"ids": [], "labels": ["config-2"], "masters": [], "uuids": ["2018-09-21-08-09-59-00"]}, "sectors": "2048", "sectorsize": 512, "size": "1.00 MB", "start": "2048", "uuid": "2018-09-21-08-09-59-00"}, "vda2": {"holders": [], "links": {"ids": [], "labels": ["img-rootfs"], "masters": [], "uuids": ["db072aa5-689e-4872-9a7a-742ec4624465"]}, "sectors": "20967391", "sectorsize": 512, "size": "10.00 GB", "start": "4096", "uuid": "db072aa5-689e-4872-9a7a-742ec4624465"}}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "20971520", "sectorsize": "512", "size": "10.00 GB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}}, "skip_reason": "Conditional result was False"}
skipping: [ceph-0] => (item={'value': {u'scheduler_mode': u'mq-deadline', u'rotational': u'1', u'vendor': u'0x1af4', u'sectors': u'23068672', u'links': {u'masters': [], u'labels': [], u'ids': [], u'uuids': []}, u'sas_device_handle': None, u'sas_address': None, u'virtual': 1, u'host': u'SCSI storage controller: Red Hat, Inc. Virtio block device', u'sectorsize': u'512', u'removable': u'0', u'support_discard': u'0', u'model': None, u'partitions': {}, u'holders': [], u'size': u'11.00 GB'}, 'key': u'vdc'}) => {"changed": false, "item": {"key": "vdc", "value": {"holders": [], "host": "SCSI storage controller: Red Hat, Inc. Virtio block device", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "23068672", "sectorsize": "512", "size": "11.00 GB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}}, "skip_reason": "Conditional result was False"}
skipping: [ceph-0] => (item={'value': {u'scheduler_mode': u'mq-deadline', u'rotational': u'1', u'vendor': u'0x1af4', u'sectors': u'23068672', u'links': {u'masters': [], u'labels': [], u'ids': [], u'uuids': []}, u'sas_device_handle': None, u'sas_address': None, u'virtual': 1, u'host': u'SCSI storage controller: Red Hat, Inc. Virtio block device', u'sectorsize': u'512', u'removable': u'0', u'support_discard': u'0', u'model': None, u'partitions': {}, u'holders': [], u'size': u'11.00 GB'}, 'key': u'vdb'}) => {"changed": false, "item": {"key": "vdb", "value": {"holders": [], "host": "SCSI storage controller: Red Hat, Inc. Virtio block device", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "23068672", "sectorsize": "512", "size": "11.00 GB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}}, "skip_reason": "Conditional result was False"}
skipping: [ceph-0] => (item={'value': {u'scheduler_mode': u'mq-deadline', u'rotational': u'1', u'vendor': u'0x1af4', u'sectors': u'23068672', u'links': {u'masters': [], u'labels': [], u'ids': [], u'uuids': []}, u'sas_device_handle': None, u'sas_address': None, u'virtual': 1, u'host': u'SCSI storage controller: Red Hat, Inc. Virtio block device', u'sectorsize': u'512', u'removable': u'0', u'support_discard': u'0', u'model': None, u'partitions': {}, u'holders': [], u'size': u'11.00 GB'}, 'key': u'vde'}) => {"changed": false, "item": {"key": "vde", "value": {"holders": [], "host": "SCSI storage controller: Red Hat, Inc. Virtio block device", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "23068672", "sectorsize": "512", "size": "11.00 GB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}}, "skip_reason": "Conditional result was False"}
skipping: [ceph-0] => (item={'value': {u'scheduler_mode': u'mq-deadline', u'rotational': u'1', u'vendor': u'0x1af4', u'sectors': u'23068672', u'links': {u'masters': [], u'labels': [], u'ids': [], u'uuids': []}, u'sas_device_handle': None, u'sas_address': None, u'virtual': 1, u'host': u'SCSI storage controller: Red Hat, Inc. Virtio block device', u'sectorsize': u'512', u'removable': u'0', u'support_discard': u'0', u'model': None, u'partitions': {}, u'holders': [], u'size': u'11.00 GB'}, 'key': u'vdd'}) => {"changed": false, "item": {"key": "vdd", "value": {"holders": [], "host": "SCSI storage controller: Red Hat, Inc. Virtio block device", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "23068672", "sectorsize": "512", "size": "11.00 GB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}}, "skip_reason": "Conditional result was False"}
skipping: [ceph-0] => (item={'value': {u'scheduler_mode': u'mq-deadline', u'rotational': u'1', u'vendor': u'0x1af4', u'sectors': u'23068672', u'links': {u'masters': [], u'labels': [], u'ids': [], u'uuids': []}, u'sas_device_handle': None, u'sas_address': None, u'virtual': 1, u'host': u'SCSI storage controller: Red Hat, Inc. Virtio block device', u'sectorsize': u'512', u'removable': u'0', u'support_discard': u'0', u'model': None, u'partitions': {}, u'holders': [], u'size': u'11.00 GB'}, 'key': u'vdf'}) => {"changed": false, "item": {"key": "vdf", "value": {"holders": [], "host": "SCSI storage controller: Red Hat, Inc. Virtio block device", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "23068672", "sectorsize": "512", "size": "11.00 GB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}}, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : resolve dedicated device link(s)] *****************************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/build_devices.yml:15
Friday 21 September 2018 08:28:39 -0400 (0:00:00.103) 0:02:21.473 ****** 

TASK [ceph-osd : set_fact build dedicated_devices from resolved symlinks] ******
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/build_devices.yml:24
Friday 21 September 2018 08:28:39 -0400 (0:00:00.046) 0:02:21.520 ****** 

TASK [ceph-osd : set_fact build final dedicated_devices list] ******************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/build_devices.yml:32
Friday 21 September 2018 08:28:39 -0400 (0:00:00.051) 0:02:21.571 ****** 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : read information about the devices] ***************************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:29
Friday 21 September 2018 08:28:39 -0400 (0:00:00.048) 0:02:21.619 ****** 
ok: [ceph-0] => (item=/dev/vdb) => {"changed": false, "disk": {"dev": "/dev/vdb", "logical_block": 512, "model": "Virtio Block Device", "physical_block": 512, "size": 11264.0, "table": "unknown", "unit": "mib"}, "item": "/dev/vdb", "partitions": [], "script": "unit 'MiB' print"}
ok: [ceph-0] => (item=/dev/vdc) => {"changed": false, "disk": {"dev": "/dev/vdc", "logical_block": 512, "model": "Virtio Block Device", "physical_block": 512, "size": 11264.0, "table": "unknown", "unit": "mib"}, "item": "/dev/vdc", "partitions": [], "script": "unit 'MiB' print"}
ok: [ceph-0] => (item=/dev/vdd) => {"changed": false, "disk": {"dev": "/dev/vdd", "logical_block": 512, "model": "Virtio Block Device", "physical_block": 512, "size": 11264.0, "table": "unknown", "unit": "mib"}, "item": "/dev/vdd", "partitions": [], "script": "unit 'MiB' print"}
ok: [ceph-0] => (item=/dev/vde) => {"changed": false, "disk": {"dev": "/dev/vde", "logical_block": 512, "model": "Virtio Block Device", "physical_block": 512, "size": 11264.0, "table": "unknown", "unit": "mib"}, "item": "/dev/vde", "partitions": [], "script": "unit 'MiB' print"}
ok: [ceph-0] => (item=/dev/vdf) => {"changed": false, "disk": {"dev": "/dev/vdf", "logical_block": 512, "model": "Virtio Block Device", "physical_block": 512, "size": 11264.0, "table": "unknown", "unit": "mib"}, "item": "/dev/vdf", "partitions": [], "script": "unit 'MiB' print"}

TASK [ceph-osd : check the partition status of the osd disks] ******************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_gpt.yml:2
Friday 21 September 2018 08:28:40 -0400 (0:00:01.149) 0:02:22.769 ****** 
ok: [ceph-0] => (item=/dev/vdb) => {"changed": false, "cmd": ["blkid", "-t", "PTTYPE=gpt", "/dev/vdb"], "delta": "0:00:00.007086", "end": "2018-09-21 12:28:40.858573", "failed_when_result": false, "item": "/dev/vdb", "msg": "non-zero return code", "rc": 2, "start": "2018-09-21 12:28:40.851487", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
ok: [ceph-0] => (item=/dev/vdc) => {"changed": false, "cmd": ["blkid", "-t", "PTTYPE=gpt", "/dev/vdc"], "delta": "0:00:00.007443", "end": "2018-09-21 12:28:41.031081", "failed_when_result": false, "item": "/dev/vdc", "msg": "non-zero return code", "rc": 2, "start": "2018-09-21 12:28:41.023638", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
ok: [ceph-0] => (item=/dev/vdd) => {"changed": false, "cmd": ["blkid", "-t", "PTTYPE=gpt", "/dev/vdd"], "delta": "0:00:00.006857", "end": "2018-09-21 12:28:41.197648", "failed_when_result": false, "item": "/dev/vdd", "msg": "non-zero return code", "rc": 2, "start": "2018-09-21 12:28:41.190791", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
ok: [ceph-0] => (item=/dev/vde) => {"changed": false, "cmd": ["blkid", "-t", "PTTYPE=gpt", "/dev/vde"], "delta": "0:00:00.006872", "end": "2018-09-21 12:28:41.357694", "failed_when_result": false, "item": "/dev/vde", "msg": "non-zero return code", "rc": 2, "start": "2018-09-21 12:28:41.350822", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
ok: [ceph-0] => (item=/dev/vdf) => {"changed": false, "cmd": ["blkid", "-t", "PTTYPE=gpt", "/dev/vdf"], "delta": "0:00:00.007263", "end": "2018-09-21 12:28:41.521074", "failed_when_result": false, "item": "/dev/vdf", "msg": "non-zero return code", "rc": 2, "start": "2018-09-21 12:28:41.513811", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
print\"}\nok: [ceph-0] => (item=/dev/vde) => {\"changed\": false, \"disk\": {\"dev\": \"/dev/vde\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"item\": \"/dev/vde\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}\nok: [ceph-0] => (item=/dev/vdf) => {\"changed\": false, \"disk\": {\"dev\": \"/dev/vdf\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"item\": \"/dev/vdf\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}\n\nTASK [ceph-osd : check the partition status of the osd disks] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_gpt.yml:2\nFriday 21 September 2018 08:28:40 -0400 (0:00:01.149) 0:02:22.769 ****** \nok: [ceph-0] => (item=/dev/vdb) => {\"changed\": false, \"cmd\": [\"blkid\", \"-t\", \"PTTYPE=gpt\", \"/dev/vdb\"], \"delta\": \"0:00:00.007086\", \"end\": \"2018-09-21 12:28:40.858573\", \"failed_when_result\": false, \"item\": \"/dev/vdb\", \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-09-21 12:28:40.851487\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}\nok: [ceph-0] => (item=/dev/vdc) => {\"changed\": false, \"cmd\": [\"blkid\", \"-t\", \"PTTYPE=gpt\", \"/dev/vdc\"], \"delta\": \"0:00:00.007443\", \"end\": \"2018-09-21 12:28:41.031081\", \"failed_when_result\": false, \"item\": \"/dev/vdc\", \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-09-21 12:28:41.023638\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}\nok: [ceph-0] => (item=/dev/vdd) => {\"changed\": false, \"cmd\": [\"blkid\", \"-t\", \"PTTYPE=gpt\", \"/dev/vdd\"], \"delta\": \"0:00:00.006857\", \"end\": \"2018-09-21 12:28:41.197648\", \"failed_when_result\": false, \"item\": \"/dev/vdd\", \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-09-21 12:28:41.190791\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}\nok: [ceph-0] => (item=/dev/vde) => {\"changed\": false, \"cmd\": [\"blkid\", \"-t\", \"PTTYPE=gpt\", \"/dev/vde\"], \"delta\": \"0:00:00.006872\", \"end\": \"2018-09-21 12:28:41.357694\", \"failed_when_result\": false, \"item\": \"/dev/vde\", \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-09-21 12:28:41.350822\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}\nok: [ceph-0] => (item=/dev/vdf) => {\"changed\": false, \"cmd\": [\"blkid\", \"-t\", \"PTTYPE=gpt\", \"/dev/vdf\"], \"delta\": \"0:00:00.007263\", \"end\": \"2018-09-21 12:28:41.521074\", \"failed_when_result\": false, \"item\": \"/dev/vdf\", \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-09-21 12:28:41.513811\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}\n\nTASK [ceph-osd : create gpt disk label] ****************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_gpt.yml:11\nFriday 21 September 2018 08:28:41 -0400 (0:00:00.902) 0:02:23.672 ****** \nok: [ceph-0] => (item=[{'_ansible_parsed': True, 'stderr_lines': [], u'cmd': [u'blkid', u'-t', u'PTTYPE=gpt', u'/dev/vdb'], u'end': u'2018-09-21 12:28:40.858573', '_ansible_no_log': False, u'stdout': u'', '_ansible_item_result': True, u'changed': False, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'blkid -t 
PTTYPE=\"gpt\" /dev/vdb', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, 'item': u'/dev/vdb', u'delta': u'0:00:00.007086', '_ansible_item_label': u'/dev/vdb', u'stderr': u'', u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'start': u'2018-09-21 12:28:40.851487', '_ansible_ignore_errors': None, u'failed': False}, u'/dev/vdb']) => {\"changed\": false, \"cmd\": [\"parted\", \"-s\", \"/dev/vdb\", \"mklabel\", \"gpt\"], \"delta\": \"0:00:00.008661\", \"end\": \"2018-09-21 12:28:41.771110\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdb\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"cmd\": [\"blkid\", \"-t\", \"PTTYPE=gpt\", \"/dev/vdb\"], \"delta\": \"0:00:00.007086\", \"end\": \"2018-09-21 12:28:40.858573\", \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"blkid -t PTTYPE=\\\"gpt\\\" /dev/vdb\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": \"/dev/vdb\", \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-09-21 12:28:40.851487\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}, \"/dev/vdb\"], \"rc\": 0, \"start\": \"2018-09-21 12:28:41.762449\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}\nok: [ceph-0] => (item=[{'_ansible_parsed': True, 'stderr_lines': [], u'cmd': [u'blkid', u'-t', u'PTTYPE=gpt', u'/dev/vdc'], u'end': u'2018-09-21 12:28:41.031081', '_ansible_no_log': False, u'stdout': u'', '_ansible_item_result': True, u'changed': False, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'blkid -t PTTYPE=\"gpt\" /dev/vdc', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, 'item': u'/dev/vdc', u'delta': u'0:00:00.007443', '_ansible_item_label': u'/dev/vdc', u'stderr': u'', u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'start': u'2018-09-21 12:28:41.023638', '_ansible_ignore_errors': None, u'failed': False}, u'/dev/vdc']) => {\"changed\": false, \"cmd\": [\"parted\", \"-s\", \"/dev/vdc\", \"mklabel\", \"gpt\"], \"delta\": \"0:00:00.008585\", \"end\": \"2018-09-21 12:28:41.951415\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdc\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"cmd\": [\"blkid\", \"-t\", \"PTTYPE=gpt\", \"/dev/vdc\"], \"delta\": \"0:00:00.007443\", \"end\": \"2018-09-21 12:28:41.031081\", \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"blkid -t PTTYPE=\\\"gpt\\\" /dev/vdc\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": \"/dev/vdc\", \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-09-21 12:28:41.023638\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}, \"/dev/vdc\"], \"rc\": 0, \"start\": \"2018-09-21 12:28:41.942830\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}\nok: [ceph-0] => (item=[{'_ansible_parsed': True, 'stderr_lines': [], u'cmd': [u'blkid', u'-t', u'PTTYPE=gpt', u'/dev/vdd'], u'end': u'2018-09-21 12:28:41.197648', 
'_ansible_no_log': False, u'stdout': u'', '_ansible_item_result': True, u'changed': False, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'blkid -t PTTYPE=\"gpt\" /dev/vdd', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, 'item': u'/dev/vdd', u'delta': u'0:00:00.006857', '_ansible_item_label': u'/dev/vdd', u'stderr': u'', u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'start': u'2018-09-21 12:28:41.190791', '_ansible_ignore_errors': None, u'failed': False}, u'/dev/vdd']) => {\"changed\": false, \"cmd\": [\"parted\", \"-s\", \"/dev/vdd\", \"mklabel\", \"gpt\"], \"delta\": \"0:00:00.008970\", \"end\": \"2018-09-21 12:28:42.130162\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdd\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"cmd\": [\"blkid\", \"-t\", \"PTTYPE=gpt\", \"/dev/vdd\"], \"delta\": \"0:00:00.006857\", \"end\": \"2018-09-21 12:28:41.197648\", \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"blkid -t PTTYPE=\\\"gpt\\\" /dev/vdd\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": \"/dev/vdd\", \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-09-21 12:28:41.190791\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}, \"/dev/vdd\"], \"rc\": 0, \"start\": \"2018-09-21 12:28:42.121192\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}\nok: [ceph-0] => (item=[{'_ansible_parsed': True, 'stderr_lines': [], u'cmd': [u'blkid', u'-t', u'PTTYPE=gpt', u'/dev/vde'], u'end': u'2018-09-21 12:28:41.357694', '_ansible_no_log': False, u'stdout': u'', '_ansible_item_result': True, u'changed': False, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'blkid -t PTTYPE=\"gpt\" /dev/vde', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, 'item': u'/dev/vde', u'delta': u'0:00:00.006872', '_ansible_item_label': u'/dev/vde', u'stderr': u'', u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'start': u'2018-09-21 12:28:41.350822', '_ansible_ignore_errors': None, u'failed': False}, u'/dev/vde']) => {\"changed\": false, \"cmd\": [\"parted\", \"-s\", \"/dev/vde\", \"mklabel\", \"gpt\"], \"delta\": \"0:00:00.008350\", \"end\": \"2018-09-21 12:28:42.313674\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vde\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"cmd\": [\"blkid\", \"-t\", \"PTTYPE=gpt\", \"/dev/vde\"], \"delta\": \"0:00:00.006872\", \"end\": \"2018-09-21 12:28:41.357694\", \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"blkid -t PTTYPE=\\\"gpt\\\" /dev/vde\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": \"/dev/vde\", \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-09-21 12:28:41.350822\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}, \"/dev/vde\"], \"rc\": 0, \"start\": \"2018-09-21 12:28:42.305324\", \"stderr\": \"\", \"stderr_lines\": 
[], \"stdout\": \"\", \"stdout_lines\": []}\nok: [ceph-0] => (item=[{'_ansible_parsed': True, 'stderr_lines': [], u'cmd': [u'blkid', u'-t', u'PTTYPE=gpt', u'/dev/vdf'], u'end': u'2018-09-21 12:28:41.521074', '_ansible_no_log': False, u'stdout': u'', '_ansible_item_result': True, u'changed': False, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'blkid -t PTTYPE=\"gpt\" /dev/vdf', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, 'item': u'/dev/vdf', u'delta': u'0:00:00.007263', '_ansible_item_label': u'/dev/vdf', u'stderr': u'', u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'start': u'2018-09-21 12:28:41.513811', '_ansible_ignore_errors': None, u'failed': False}, u'/dev/vdf']) => {\"changed\": false, \"cmd\": [\"parted\", \"-s\", \"/dev/vdf\", \"mklabel\", \"gpt\"], \"delta\": \"0:00:00.008257\", \"end\": \"2018-09-21 12:28:42.491823\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdf\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"cmd\": [\"blkid\", \"-t\", \"PTTYPE=gpt\", \"/dev/vdf\"], \"delta\": \"0:00:00.007263\", \"end\": \"2018-09-21 12:28:41.521074\", \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"blkid -t PTTYPE=\\\"gpt\\\" /dev/vdf\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": \"/dev/vdf\", \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-09-21 12:28:41.513811\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}, \"/dev/vdf\"], \"rc\": 0, \"start\": \"2018-09-21 12:28:42.483566\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}\n\nTASK [ceph-osd : include scenarios/collocated.yml] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:41\nFriday 21 September 2018 08:28:42 -0400 (0:00:00.985) 0:02:24.657 ****** \nincluded: /usr/share/ceph-ansible/roles/ceph-osd/tasks/scenarios/collocated.yml for ceph-0\n\nTASK [ceph-osd : prepare ceph containerized osd disk collocated] ***************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/scenarios/collocated.yml:5\nFriday 21 September 2018 08:28:42 -0400 (0:00:00.082) 0:02:24.740 ****** \nchanged: [ceph-0] => (item=[{'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_item_label': u'/dev/vdb', u'script': u\"unit 'MiB' print\", '_ansible_no_log': False, u'changed': False, 'failed': False, 'item': u'/dev/vdb', u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vdb', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vdb', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 11264.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vdb']) => {\"changed\": true, \"cmd\": \"docker run --net=host --pid=host --privileged=true --name=ceph-osd-prepare-ceph-0-vdb -v /etc/ceph:/etc/ceph:z -v /var/lib/ceph/:/var/lib/ceph/:z -v /dev:/dev -v /etc/localtime:/etc/localtime:ro -e DEBUG=verbose -e CLUSTER=ceph -e CEPH_DAEMON=OSD_CEPH_DISK_PREPARE -e OSD_DEVICE=/dev/vdb -e 
OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=0 -e OSD_JOURNAL_SIZE=512 192.168.24.1:8787/rhceph:3-12\", \"delta\": \"0:00:06.957095\", \"end\": \"2018-09-21 12:28:49.786537\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdb\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vdb\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vdb\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", \"part_type\": \"primary\", \"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vdb\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vdb\"], \"rc\": 0, \"start\": \"2018-09-21 12:28:42.829442\", \"stderr\": \"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\\n+/entrypoint.sh:26: source /config.static.sh\\n++/config.static.sh:2: set -e\\n++/entrypoint.sh:36: to_lowercase OSD_CEPH_DISK_PREPARE\\n++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\\n+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\\n+/entrypoint.sh:38: create_mandatory_directories\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rbd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rbd\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\\n+common_functions.sh:58: create_mandatory_directories(): for 
directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\\n+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\\n+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\\n+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\\n+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\\n+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\\n+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\\n+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. '{}' ';'\\n+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\\n+/entrypoint.sh:78: source start_osd.sh\\n++start_osd.sh:2: set -e\\n++start_osd.sh:4: is_redhat\\n++common_functions.sh:211: is_redhat(): get_package_manager\\n++common_functions.sh:196: get_package_manager(): is_available rpm\\n++common_functions.sh:47: is_available(): command -v rpm\\n++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\\n++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\\n++start_osd.sh:5: source /etc/sysconfig/ceph\\n+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\\n+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\\n+/entrypoint.sh:79: OSD_TYPE=prepare\\n+/entrypoint.sh:80: start_osd\\n+start_osd.sh:11: start_osd(): get_config\\n+/config.static.sh:114: get_config(): log 'static: does not generate config'\\n+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\\n+common_functions.sh:11: log(): local timestamp\\n++common_functions.sh:12: log(): date '+%F %T'\\n+common_functions.sh:12: log(): timestamp='2018-09-21 12:28:43'\\n+common_functions.sh:13: log(): echo '2018-09-21 12:28:43 /entrypoint.sh: static: does not generate config'\\n+common_functions.sh:14: log(): return 0\\n+start_osd.sh:12: start_osd(): check_config\\n+common_functions.sh:19: check_config(): [[ ! -e /etc/ceph/ceph.conf ]]\\n+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\\n+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\\n+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\\n++osd_disk_prepare.sh:2: source(): set -e\\n+start_osd.sh:34: start_osd(): osd_disk_prepare\\n+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vdb ]]\\n+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vdb ]]\\n+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' 
-e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\\n+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\\n+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vdb print\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\\n+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\\n+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\\n+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\\n+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid 0b62174e-f684-4a6d-bc2d-fff315b60dee /dev/vdb\\ncommand: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\\ncommand: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nset_type: Will colocate journal with data on /dev/vdb\\ncommand: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. 
--lookup osd_mount_options_xfs\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nptype_tobe_for_name: name = journal\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\ncreate_partition: Creating journal partition num 2 size 512 on /dev/vdb\\ncommand_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:0b62174e-f684-4a6d-bc2d-fff315b60dee --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vdb\\nupdate_partition: Calling partprobe on created device /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb2 uuid path is /sys/dev/block/252:18/dm/uuid\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/0b62174e-f684-4a6d-bc2d-fff315b60dee\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vdb\\nupdate_partition: Calling partprobe on prepared device /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/0b62174e-f684-4a6d-bc2d-fff315b60dee\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nset_data_partition: Creating osd partition on /dev/vdb\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nptype_tobe_for_name: name = data\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\ncreate_partition: Creating data partition num 1 size 0 on /dev/vdb\\ncommand_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:ec758399-cbe4-4b08-8b07-b0e37f81e386 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdb\\nupdate_partition: Calling partprobe on created device /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb1 uuid path is /sys/dev/block/252:17/dm/uuid\\npopulate_data_path_device: Creating xfs fs on /dev/vdb1\\ncommand_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vdb1\\nmount: Mounting /dev/vdb1 on /var/lib/ceph/tmp/mnt.FdntmT with options noatime,largeio,inode64,swalloc\\ncommand_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vdb1 /var/lib/ceph/tmp/mnt.FdntmT\\ncommand: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.FdntmT\\npopulate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.FdntmT\\ncommand: Running command: /usr/sbin/restorecon -R 
/var/lib/ceph/tmp/mnt.FdntmT/ceph_fsid.18899.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.FdntmT/ceph_fsid.18899.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.FdntmT/fsid.18899.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.FdntmT/fsid.18899.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.FdntmT/magic.18899.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.FdntmT/magic.18899.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.FdntmT/journal_uuid.18899.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.FdntmT/journal_uuid.18899.tmp\\nadjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.FdntmT/journal -> /dev/disk/by-partuuid/0b62174e-f684-4a6d-bc2d-fff315b60dee\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.FdntmT/type.18899.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.FdntmT/type.18899.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.FdntmT\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.FdntmT\\nunmount: Unmounting /var/lib/ceph/tmp/mnt.FdntmT\\ncommand_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.FdntmT\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdb\\nupdate_partition: Calling partprobe on prepared device /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vdb1\\n+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\\n+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\\n+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\\n+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\\n++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\\n++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdb2\\n+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdb2\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! 
-e /dev/vdb2 ]; do echo '\\\\''Waiting for /dev/vdb2 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\\n++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdb2\\n+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. /dev/vdb2\\n++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\\n++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdb1\\n+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdb1\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdb1 ]; do echo '\\\\''Waiting for /dev/vdb1 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\\n++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdb1\\n+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. /dev/vdb1\\n+/entrypoint.sh:189: exit 0\", \"stderr_lines\": [\"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\", \"+/entrypoint.sh:26: source /config.static.sh\", \"++/config.static.sh:2: set -e\", \"++/entrypoint.sh:36: to_lowercase OSD_CEPH_DISK_PREPARE\", \"++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\", \"+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\", \"+/entrypoint.sh:38: create_mandatory_directories\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname 
/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rbd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\", \"+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\", \"+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\", \"+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\", \"+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\", \"+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\", \"+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\", \"+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. '{}' ';'\", \"+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\", \"+/entrypoint.sh:78: source start_osd.sh\", \"++start_osd.sh:2: set -e\", \"++start_osd.sh:4: is_redhat\", \"++common_functions.sh:211: is_redhat(): get_package_manager\", \"++common_functions.sh:196: get_package_manager(): is_available rpm\", \"++common_functions.sh:47: is_available(): command -v rpm\", \"++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\", \"++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\", \"++start_osd.sh:5: source /etc/sysconfig/ceph\", \"+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\", \"+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\", \"+/entrypoint.sh:79: OSD_TYPE=prepare\", \"+/entrypoint.sh:80: start_osd\", \"+start_osd.sh:11: start_osd(): get_config\", \"+/config.static.sh:114: get_config(): log 'static: does not generate config'\", \"+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\", \"+common_functions.sh:11: log(): local timestamp\", \"++common_functions.sh:12: log(): date '+%F %T'\", \"+common_functions.sh:12: log(): timestamp='2018-09-21 12:28:43'\", \"+common_functions.sh:13: log(): echo '2018-09-21 12:28:43 /entrypoint.sh: static: does not generate config'\", \"+common_functions.sh:14: log(): return 0\", \"+start_osd.sh:12: start_osd(): check_config\", \"+common_functions.sh:19: check_config(): [[ ! 
-e /etc/ceph/ceph.conf ]]\", \"+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\", \"+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\", \"+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\", \"++osd_disk_prepare.sh:2: source(): set -e\", \"+start_osd.sh:34: start_osd(): osd_disk_prepare\", \"+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vdb ]]\", \"+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vdb ]]\", \"+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' -e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\", \"+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\", \"+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vdb print\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\", \"+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\", \"+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\", \"+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\", \"+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid 0b62174e-f684-4a6d-bc2d-fff315b60dee /dev/vdb\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\", \"command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"set_type: Will colocate journal with data on /dev/vdb\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. 
--lookup osd_mount_options_xfs\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"ptype_tobe_for_name: name = journal\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"create_partition: Creating journal partition num 2 size 512 on /dev/vdb\", \"command_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:0b62174e-f684-4a6d-bc2d-fff315b60dee --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vdb\", \"update_partition: Calling partprobe on created device /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb2 uuid path is /sys/dev/block/252:18/dm/uuid\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/0b62174e-f684-4a6d-bc2d-fff315b60dee\", \"command_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vdb\", \"update_partition: Calling partprobe on prepared device /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/0b62174e-f684-4a6d-bc2d-fff315b60dee\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"set_data_partition: Creating osd partition on /dev/vdb\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"ptype_tobe_for_name: name = data\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"create_partition: Creating data partition num 1 size 0 on /dev/vdb\", \"command_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:ec758399-cbe4-4b08-8b07-b0e37f81e386 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdb\", \"update_partition: Calling partprobe on created device /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb1 uuid path is /sys/dev/block/252:17/dm/uuid\", \"populate_data_path_device: Creating xfs fs on /dev/vdb1\", \"command_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vdb1\", \"mount: Mounting /dev/vdb1 on /var/lib/ceph/tmp/mnt.FdntmT with options noatime,largeio,inode64,swalloc\", \"command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vdb1 /var/lib/ceph/tmp/mnt.FdntmT\", \"command: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.FdntmT\", 
\"populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.FdntmT\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.FdntmT/ceph_fsid.18899.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.FdntmT/ceph_fsid.18899.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.FdntmT/fsid.18899.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.FdntmT/fsid.18899.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.FdntmT/magic.18899.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.FdntmT/magic.18899.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.FdntmT/journal_uuid.18899.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.FdntmT/journal_uuid.18899.tmp\", \"adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.FdntmT/journal -> /dev/disk/by-partuuid/0b62174e-f684-4a6d-bc2d-fff315b60dee\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.FdntmT/type.18899.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.FdntmT/type.18899.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.FdntmT\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.FdntmT\", \"unmount: Unmounting /var/lib/ceph/tmp/mnt.FdntmT\", \"command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.FdntmT\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"command_check_call: Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdb\", \"update_partition: Calling partprobe on prepared device /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vdb1\", \"+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\", \"+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\", \"+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\", \"+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\", \"++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\", \"++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdb2\", \"+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdb2\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! 
-e /dev/vdb2 ]; do echo '\\\\''Waiting for /dev/vdb2 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\", \"++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdb2\", \"+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. /dev/vdb2\", \"++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\", \"++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdb1\", \"+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdb1\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdb1 ]; do echo '\\\\''Waiting for /dev/vdb1 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\", \"++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdb1\", \"+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. /dev/vdb1\", \"+/entrypoint.sh:189: exit 0\"], \"stdout\": \"2018-09-21 12:28:43 /entrypoint.sh: VERBOSE: activating bash debugging mode.\\n2018-09-21 12:28:43 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\\n2018-09-21 12:28:43 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\\n2018-09-21 12:28:43 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\\nOSD_DEVICE=/dev/vdb\\nLC_ALL=C\\nOSD_BLUESTORE=0\\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\nOSD_JOURNAL_SIZE=512\\nPWD=/\\nCEPH_VERSION=luminous\\nSHLVL=1\\nHOME=/root\\nCEPH_POINT_RELEASE=\\nCLUSTER=ceph\\nOSD_DMCRYPT=0\\nCEPH_DAEMON=OSD_CEPH_DISK_PREPARE\\ncontainer=oci\\nDEBUG=verbose\\nOSD_FILESTORE=1\\n_=/usr/bin/env\\nownership of '/var/run/ceph/' retained as ceph:ceph\\nownership of '/var/lib/ceph/mon' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/mon/ceph-ceph-0' from root:root to ceph:ceph\\nownership of '/var/lib/ceph/osd' retained as ceph:ceph\\nownership of '/var/lib/ceph/mds' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/mds/ceph-ceph-0' from root:root to ceph:ceph\\nownership of '/var/lib/ceph/tmp' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/tmp/tmp.pqI4DtXAXN' from root:root to ceph:ceph\\nownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' from root:root to ceph:ceph\\nchanged ownership of '/var/lib/ceph/bootstrap-rgw' from 64045:64045 to ceph:ceph\\nchanged ownership of '/var/lib/ceph/bootstrap-mds' from 64045:64045 to ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph\\nchanged ownership of 
'/var/lib/ceph/bootstrap-rbd' from 64045:64045 to ceph:ceph\\nchanged ownership of '/var/lib/ceph/mgr' from root:root to ceph:ceph\\nchanged ownership of '/var/lib/ceph/mgr/ceph-ceph-0' from root:root to ceph:ceph\\n2018-09-21 12:28:43 /entrypoint.sh: static: does not generate config\\nHEALTH_OK\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nmeta-data=/dev/vdb1 isize=2048 agcount=4, agsize=688063 blks\\n = sectsz=512 attr=2, projid32bit=1\\n = crc=1 finobt=0, sparse=0\\ndata = bsize=4096 blocks=2752251, imaxpct=25\\n = sunit=0 swidth=0 blks\\nnaming =version 2 bsize=4096 ascii-ci=0 ftype=1\\nlog =internal log bsize=4096 blocks=2560, version=2\\n = sectsz=512 sunit=0 blks, lazy-count=1\\nrealtime =none extsz=4096 blocks=0, rtextents=0\\nThe operation has completed successfully.\\nchanged ownership of '/dev/vdb2' from root:disk to ceph:ceph\\nchanged ownership of '/dev/vdb1' from root:disk to ceph:ceph\", \"stdout_lines\": [\"2018-09-21 12:28:43 /entrypoint.sh: VERBOSE: activating bash debugging mode.\", \"2018-09-21 12:28:43 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\", \"2018-09-21 12:28:43 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\", \"2018-09-21 12:28:43 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\", \"OSD_DEVICE=/dev/vdb\", \"LC_ALL=C\", \"OSD_BLUESTORE=0\", \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\", \"OSD_JOURNAL_SIZE=512\", \"PWD=/\", \"CEPH_VERSION=luminous\", \"SHLVL=1\", \"HOME=/root\", \"CEPH_POINT_RELEASE=\", \"CLUSTER=ceph\", \"OSD_DMCRYPT=0\", \"CEPH_DAEMON=OSD_CEPH_DISK_PREPARE\", \"container=oci\", \"DEBUG=verbose\", \"OSD_FILESTORE=1\", \"_=/usr/bin/env\", \"ownership of '/var/run/ceph/' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mon' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/mon/ceph-ceph-0' from root:root to ceph:ceph\", \"ownership of '/var/lib/ceph/osd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mds' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/mds/ceph-ceph-0' from root:root to ceph:ceph\", \"ownership of '/var/lib/ceph/tmp' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/tmp/tmp.pqI4DtXAXN' from root:root to ceph:ceph\", \"ownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' from root:root to ceph:ceph\", \"changed ownership of '/var/lib/ceph/bootstrap-rgw' from 64045:64045 to ceph:ceph\", \"changed ownership of '/var/lib/ceph/bootstrap-mds' from 64045:64045 to ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/bootstrap-rbd' from 64045:64045 to ceph:ceph\", \"changed ownership of '/var/lib/ceph/mgr' from root:root to ceph:ceph\", \"changed ownership of '/var/lib/ceph/mgr/ceph-ceph-0' from root:root to ceph:ceph\", \"2018-09-21 12:28:43 /entrypoint.sh: static: does not generate config\", \"HEALTH_OK\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"meta-data=/dev/vdb1 isize=2048 agcount=4, agsize=688063 blks\", \" = sectsz=512 attr=2, projid32bit=1\", \" = crc=1 finobt=0, sparse=0\", \"data = bsize=4096 blocks=2752251, imaxpct=25\", \" = sunit=0 swidth=0 blks\", 
\"naming =version 2 bsize=4096 ascii-ci=0 ftype=1\", \"log =internal log bsize=4096 blocks=2560, version=2\", \" = sectsz=512 sunit=0 blks, lazy-count=1\", \"realtime =none extsz=4096 blocks=0, rtextents=0\", \"The operation has completed successfully.\", \"changed ownership of '/dev/vdb2' from root:disk to ceph:ceph\", \"changed ownership of '/dev/vdb1' from root:disk to ceph:ceph\"]}\nchanged: [ceph-0] => (item=[{'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_item_label': u'/dev/vdc', u'script': u\"unit 'MiB' print\", '_ansible_no_log': False, u'changed': False, 'failed': False, 'item': u'/dev/vdc', u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vdc', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vdc', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 11264.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vdc']) => {\"changed\": true, \"cmd\": \"docker run --net=host --pid=host --privileged=true --name=ceph-osd-prepare-ceph-0-vdc -v /etc/ceph:/etc/ceph:z -v /var/lib/ceph/:/var/lib/ceph/:z -v /dev:/dev -v /etc/localtime:/etc/localtime:ro -e DEBUG=verbose -e CLUSTER=ceph -e CEPH_DAEMON=OSD_CEPH_DISK_PREPARE -e OSD_DEVICE=/dev/vdc -e OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=0 -e OSD_JOURNAL_SIZE=512 192.168.24.1:8787/rhceph:3-12\", \"delta\": \"0:00:06.708144\", \"end\": \"2018-09-21 12:28:56.666436\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdc\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vdc\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vdc\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", \"part_type\": \"primary\", \"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vdc\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vdc\"], \"rc\": 0, \"start\": \"2018-09-21 12:28:49.958292\", \"stderr\": \"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\\n+/entrypoint.sh:26: source /config.static.sh\\n++/config.static.sh:2: set -e\\n++/entrypoint.sh:36: to_lowercase OSD_CEPH_DISK_PREPARE\\n++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\\n+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\\n+/entrypoint.sh:38: create_mandatory_directories\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): 
mkdir -p /var/lib/ceph/bootstrap-mds\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rbd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rbd\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\\n+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\\n+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\\n+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\\n+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\\n+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\\n+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\\n+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. 
'{}' ';'\\n+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\\n+/entrypoint.sh:78: source start_osd.sh\\n++start_osd.sh:2: set -e\\n++start_osd.sh:4: is_redhat\\n++common_functions.sh:211: is_redhat(): get_package_manager\\n++common_functions.sh:196: get_package_manager(): is_available rpm\\n++common_functions.sh:47: is_available(): command -v rpm\\n++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\\n++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\\n++start_osd.sh:5: source /etc/sysconfig/ceph\\n+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\\n+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\\n+/entrypoint.sh:79: OSD_TYPE=prepare\\n+/entrypoint.sh:80: start_osd\\n+start_osd.sh:11: start_osd(): get_config\\n+/config.static.sh:114: get_config(): log 'static: does not generate config'\\n+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\\n+common_functions.sh:11: log(): local timestamp\\n++common_functions.sh:12: log(): date '+%F %T'\\n+common_functions.sh:12: log(): timestamp='2018-09-21 12:28:50'\\n+common_functions.sh:13: log(): echo '2018-09-21 12:28:50 /entrypoint.sh: static: does not generate config'\\n+common_functions.sh:14: log(): return 0\\n+start_osd.sh:12: start_osd(): check_config\\n+common_functions.sh:19: check_config(): [[ ! -e /etc/ceph/ceph.conf ]]\\n+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\\n+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\\n+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\\n++osd_disk_prepare.sh:2: source(): set -e\\n+start_osd.sh:34: start_osd(): osd_disk_prepare\\n+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vdc ]]\\n+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vdc ]]\\n+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' 
-e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\\n+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\\n+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vdc print\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\\n+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\\n+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\\n+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\\n+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid bf1ed448-2528-4280-b531-ac91f3488886 /dev/vdc\\ncommand: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\\ncommand: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nset_type: Will colocate journal with data on /dev/vdc\\ncommand: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. 
--lookup osd_mount_options_xfs\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nptype_tobe_for_name: name = journal\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\ncreate_partition: Creating journal partition num 2 size 512 on /dev/vdc\\ncommand_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:bf1ed448-2528-4280-b531-ac91f3488886 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vdc\\nupdate_partition: Calling partprobe on created device /dev/vdc\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdc /usr/sbin/partprobe /dev/vdc\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdc2 uuid path is /sys/dev/block/252:34/dm/uuid\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/bf1ed448-2528-4280-b531-ac91f3488886\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vdc\\nupdate_partition: Calling partprobe on prepared device /dev/vdc\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdc /usr/sbin/partprobe /dev/vdc\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/bf1ed448-2528-4280-b531-ac91f3488886\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nset_data_partition: Creating osd partition on /dev/vdc\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nptype_tobe_for_name: name = data\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\ncreate_partition: Creating data partition num 1 size 0 on /dev/vdc\\ncommand_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:295d4e75-e479-45c2-9091-c3be07a5e1a8 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdc\\nupdate_partition: Calling partprobe on created device /dev/vdc\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdc /usr/sbin/partprobe /dev/vdc\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdc1 uuid path is /sys/dev/block/252:33/dm/uuid\\npopulate_data_path_device: Creating xfs fs on /dev/vdc1\\ncommand_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vdc1\\nmount: Mounting /dev/vdc1 on /var/lib/ceph/tmp/mnt.NS0aNq with options noatime,largeio,inode64,swalloc\\ncommand_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vdc1 /var/lib/ceph/tmp/mnt.NS0aNq\\ncommand: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.NS0aNq\\npopulate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.NS0aNq\\ncommand: Running command: /usr/sbin/restorecon -R 
/var/lib/ceph/tmp/mnt.NS0aNq/ceph_fsid.19157.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.NS0aNq/ceph_fsid.19157.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.NS0aNq/fsid.19157.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.NS0aNq/fsid.19157.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.NS0aNq/magic.19157.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.NS0aNq/magic.19157.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.NS0aNq/journal_uuid.19157.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.NS0aNq/journal_uuid.19157.tmp\\nadjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.NS0aNq/journal -> /dev/disk/by-partuuid/bf1ed448-2528-4280-b531-ac91f3488886\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.NS0aNq/type.19157.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.NS0aNq/type.19157.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.NS0aNq\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.NS0aNq\\nunmount: Unmounting /var/lib/ceph/tmp/mnt.NS0aNq\\ncommand_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.NS0aNq\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdc\\nupdate_partition: Calling partprobe on prepared device /dev/vdc\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdc /usr/sbin/partprobe /dev/vdc\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vdc1\\n+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\\n+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\\n+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\\n+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\\n++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vdc 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdc\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdc ]]\\n++common_functions.sh:124: dev_part(): [[ c == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdc2\\n+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdc2\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! 
-e /dev/vdc2 ]; do echo '\\\\''Waiting for /dev/vdc2 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vdc 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdc\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdc ]]\\n++common_functions.sh:124: dev_part(): [[ c == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdc2\\n+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. /dev/vdc2\\n++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vdc 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdc\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdc ]]\\n++common_functions.sh:124: dev_part(): [[ c == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdc1\\n+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdc1\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdc1 ]; do echo '\\\\''Waiting for /dev/vdc1 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vdc 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdc\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdc ]]\\n++common_functions.sh:124: dev_part(): [[ c == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdc1\\n+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. /dev/vdc1\\n+/entrypoint.sh:189: exit 0\", \"stderr_lines\": [\"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\", \"+/entrypoint.sh:26: source /config.static.sh\", \"++/config.static.sh:2: set -e\", \"++/entrypoint.sh:36: to_lowercase OSD_CEPH_DISK_PREPARE\", \"++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\", \"+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\", \"+/entrypoint.sh:38: create_mandatory_directories\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname 
/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rbd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\", \"+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\", \"+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\", \"+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\", \"+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\", \"+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\", \"+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\", \"+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. '{}' ';'\", \"+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\", \"+/entrypoint.sh:78: source start_osd.sh\", \"++start_osd.sh:2: set -e\", \"++start_osd.sh:4: is_redhat\", \"++common_functions.sh:211: is_redhat(): get_package_manager\", \"++common_functions.sh:196: get_package_manager(): is_available rpm\", \"++common_functions.sh:47: is_available(): command -v rpm\", \"++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\", \"++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\", \"++start_osd.sh:5: source /etc/sysconfig/ceph\", \"+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\", \"+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\", \"+/entrypoint.sh:79: OSD_TYPE=prepare\", \"+/entrypoint.sh:80: start_osd\", \"+start_osd.sh:11: start_osd(): get_config\", \"+/config.static.sh:114: get_config(): log 'static: does not generate config'\", \"+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\", \"+common_functions.sh:11: log(): local timestamp\", \"++common_functions.sh:12: log(): date '+%F %T'\", \"+common_functions.sh:12: log(): timestamp='2018-09-21 12:28:50'\", \"+common_functions.sh:13: log(): echo '2018-09-21 12:28:50 /entrypoint.sh: static: does not generate config'\", \"+common_functions.sh:14: log(): return 0\", \"+start_osd.sh:12: start_osd(): check_config\", \"+common_functions.sh:19: check_config(): [[ ! 
-e /etc/ceph/ceph.conf ]]\", \"+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\", \"+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\", \"+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\", \"++osd_disk_prepare.sh:2: source(): set -e\", \"+start_osd.sh:34: start_osd(): osd_disk_prepare\", \"+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vdc ]]\", \"+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vdc ]]\", \"+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' -e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\", \"+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\", \"+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vdc print\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\", \"+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\", \"+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\", \"+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\", \"+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid bf1ed448-2528-4280-b531-ac91f3488886 /dev/vdc\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\", \"command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"set_type: Will colocate journal with data on /dev/vdc\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. 
--lookup osd_mount_options_xfs\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"ptype_tobe_for_name: name = journal\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"create_partition: Creating journal partition num 2 size 512 on /dev/vdc\", \"command_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:bf1ed448-2528-4280-b531-ac91f3488886 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vdc\", \"update_partition: Calling partprobe on created device /dev/vdc\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdc /usr/sbin/partprobe /dev/vdc\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdc2 uuid path is /sys/dev/block/252:34/dm/uuid\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/bf1ed448-2528-4280-b531-ac91f3488886\", \"command_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vdc\", \"update_partition: Calling partprobe on prepared device /dev/vdc\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdc /usr/sbin/partprobe /dev/vdc\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/bf1ed448-2528-4280-b531-ac91f3488886\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"set_data_partition: Creating osd partition on /dev/vdc\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"ptype_tobe_for_name: name = data\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"create_partition: Creating data partition num 1 size 0 on /dev/vdc\", \"command_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:295d4e75-e479-45c2-9091-c3be07a5e1a8 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdc\", \"update_partition: Calling partprobe on created device /dev/vdc\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdc /usr/sbin/partprobe /dev/vdc\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdc1 uuid path is /sys/dev/block/252:33/dm/uuid\", \"populate_data_path_device: Creating xfs fs on /dev/vdc1\", \"command_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vdc1\", \"mount: Mounting /dev/vdc1 on /var/lib/ceph/tmp/mnt.NS0aNq with options noatime,largeio,inode64,swalloc\", \"command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vdc1 /var/lib/ceph/tmp/mnt.NS0aNq\", \"command: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.NS0aNq\", 
\"populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.NS0aNq\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.NS0aNq/ceph_fsid.19157.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.NS0aNq/ceph_fsid.19157.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.NS0aNq/fsid.19157.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.NS0aNq/fsid.19157.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.NS0aNq/magic.19157.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.NS0aNq/magic.19157.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.NS0aNq/journal_uuid.19157.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.NS0aNq/journal_uuid.19157.tmp\", \"adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.NS0aNq/journal -> /dev/disk/by-partuuid/bf1ed448-2528-4280-b531-ac91f3488886\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.NS0aNq/type.19157.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.NS0aNq/type.19157.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.NS0aNq\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.NS0aNq\", \"unmount: Unmounting /var/lib/ceph/tmp/mnt.NS0aNq\", \"command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.NS0aNq\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"command_check_call: Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdc\", \"update_partition: Calling partprobe on prepared device /dev/vdc\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdc /usr/sbin/partprobe /dev/vdc\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vdc1\", \"+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\", \"+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\", \"+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\", \"+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\", \"++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vdc 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdc\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdc ]]\", \"++common_functions.sh:124: dev_part(): [[ c == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdc2\", \"+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdc2\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! 
-e /dev/vdc2 ]; do echo '\\\\''Waiting for /dev/vdc2 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vdc 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdc\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdc ]]\", \"++common_functions.sh:124: dev_part(): [[ c == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdc2\", \"+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. /dev/vdc2\", \"++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vdc 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdc\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdc ]]\", \"++common_functions.sh:124: dev_part(): [[ c == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdc1\", \"+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdc1\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdc1 ]; do echo '\\\\''Waiting for /dev/vdc1 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vdc 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdc\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdc ]]\", \"++common_functions.sh:124: dev_part(): [[ c == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdc1\", \"+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. /dev/vdc1\", \"+/entrypoint.sh:189: exit 0\"], \"stdout\": \"2018-09-21 12:28:50 /entrypoint.sh: VERBOSE: activating bash debugging mode.\\n2018-09-21 12:28:50 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\\n2018-09-21 12:28:50 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\\n2018-09-21 12:28:50 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\\nOSD_DEVICE=/dev/vdc\\nLC_ALL=C\\nOSD_BLUESTORE=0\\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\nOSD_JOURNAL_SIZE=512\\nPWD=/\\nCEPH_VERSION=luminous\\nSHLVL=1\\nHOME=/root\\nCEPH_POINT_RELEASE=\\nCLUSTER=ceph\\nOSD_DMCRYPT=0\\nCEPH_DAEMON=OSD_CEPH_DISK_PREPARE\\ncontainer=oci\\nDEBUG=verbose\\nOSD_FILESTORE=1\\n_=/usr/bin/env\\nownership of '/var/run/ceph/' retained as ceph:ceph\\nownership of '/var/lib/ceph/mon' retained as ceph:ceph\\nownership of '/var/lib/ceph/mon/ceph-ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/osd' retained as ceph:ceph\\nownership of '/var/lib/ceph/mds' retained as ceph:ceph\\nownership of '/var/lib/ceph/mds/ceph-ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/tmp.pqI4DtXAXN' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/tmp/ceph-disk.prepare.lock' from root:root to ceph:ceph\\nchanged ownership of '/var/lib/ceph/tmp/tmp.EDdRQpFCLF' from root:root to ceph:ceph\\nownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\\nownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-rgw' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-mds' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph\\nownership of 
'/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-rbd' retained as ceph:ceph\\nownership of '/var/lib/ceph/mgr' retained as ceph:ceph\\nownership of '/var/lib/ceph/mgr/ceph-ceph-0' retained as ceph:ceph\\n2018-09-21 12:28:50 /entrypoint.sh: static: does not generate config\\nHEALTH_OK\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nmeta-data=/dev/vdc1 isize=2048 agcount=4, agsize=688063 blks\\n = sectsz=512 attr=2, projid32bit=1\\n = crc=1 finobt=0, sparse=0\\ndata = bsize=4096 blocks=2752251, imaxpct=25\\n = sunit=0 swidth=0 blks\\nnaming =version 2 bsize=4096 ascii-ci=0 ftype=1\\nlog =internal log bsize=4096 blocks=2560, version=2\\n = sectsz=512 sunit=0 blks, lazy-count=1\\nrealtime =none extsz=4096 blocks=0, rtextents=0\\nThe operation has completed successfully.\\nchanged ownership of '/dev/vdc2' from root:disk to ceph:ceph\\nchanged ownership of '/dev/vdc1' from root:disk to ceph:ceph\", \"stdout_lines\": [\"2018-09-21 12:28:50 /entrypoint.sh: VERBOSE: activating bash debugging mode.\", \"2018-09-21 12:28:50 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\", \"2018-09-21 12:28:50 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\", \"2018-09-21 12:28:50 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\", \"OSD_DEVICE=/dev/vdc\", \"LC_ALL=C\", \"OSD_BLUESTORE=0\", \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\", \"OSD_JOURNAL_SIZE=512\", \"PWD=/\", \"CEPH_VERSION=luminous\", \"SHLVL=1\", \"HOME=/root\", \"CEPH_POINT_RELEASE=\", \"CLUSTER=ceph\", \"OSD_DMCRYPT=0\", \"CEPH_DAEMON=OSD_CEPH_DISK_PREPARE\", \"container=oci\", \"DEBUG=verbose\", \"OSD_FILESTORE=1\", \"_=/usr/bin/env\", \"ownership of '/var/run/ceph/' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mon' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mon/ceph-ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/osd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mds' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mds/ceph-ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/tmp.pqI4DtXAXN' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/tmp/ceph-disk.prepare.lock' from root:root to ceph:ceph\", \"changed ownership of '/var/lib/ceph/tmp/tmp.EDdRQpFCLF' from root:root to ceph:ceph\", \"ownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-rgw' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-mds' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-rbd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mgr' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mgr/ceph-ceph-0' retained as ceph:ceph\", \"2018-09-21 12:28:50 /entrypoint.sh: static: does not generate config\", \"HEALTH_OK\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"meta-data=/dev/vdc1 isize=2048 agcount=4, agsize=688063 blks\", \" = sectsz=512 attr=2, projid32bit=1\", \" = crc=1 finobt=0, 
sparse=0\", \"data = bsize=4096 blocks=2752251, imaxpct=25\", \" = sunit=0 swidth=0 blks\", \"naming =version 2 bsize=4096 ascii-ci=0 ftype=1\", \"log =internal log bsize=4096 blocks=2560, version=2\", \" = sectsz=512 sunit=0 blks, lazy-count=1\", \"realtime =none extsz=4096 blocks=0, rtextents=0\", \"The operation has completed successfully.\", \"changed ownership of '/dev/vdc2' from root:disk to ceph:ceph\", \"changed ownership of '/dev/vdc1' from root:disk to ceph:ceph\"]}\nchanged: [ceph-0] => (item=[{'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_item_label': u'/dev/vdd', u'script': u\"unit 'MiB' print\", '_ansible_no_log': False, u'changed': False, 'failed': False, 'item': u'/dev/vdd', u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vdd', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vdd', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 11264.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vdd']) => {\"changed\": true, \"cmd\": \"docker run --net=host --pid=host --privileged=true --name=ceph-osd-prepare-ceph-0-vdd -v /etc/ceph:/etc/ceph:z -v /var/lib/ceph/:/var/lib/ceph/:z -v /dev:/dev -v /etc/localtime:/etc/localtime:ro -e DEBUG=verbose -e CLUSTER=ceph -e CEPH_DAEMON=OSD_CEPH_DISK_PREPARE -e OSD_DEVICE=/dev/vdd -e OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=0 -e OSD_JOURNAL_SIZE=512 192.168.24.1:8787/rhceph:3-12\", \"delta\": \"0:00:07.116273\", \"end\": \"2018-09-21 12:29:03.971219\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdd\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vdd\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vdd\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", \"part_type\": \"primary\", \"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vdd\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vdd\"], \"rc\": 0, \"start\": \"2018-09-21 12:28:56.854946\", \"stderr\": \"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\\n+/entrypoint.sh:26: source /config.static.sh\\n++/config.static.sh:2: set -e\\n++/entrypoint.sh:36: to_lowercase OSD_CEPH_DISK_PREPARE\\n++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\\n+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\\n+/entrypoint.sh:38: create_mandatory_directories\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname 
/var/lib/ceph/bootstrap-mds/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rbd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rbd\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\\n+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\\n+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\\n+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\\n+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\\n+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\\n+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\\n+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. 
'{}' ';'\\n+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\\n+/entrypoint.sh:78: source start_osd.sh\\n++start_osd.sh:2: set -e\\n++start_osd.sh:4: is_redhat\\n++common_functions.sh:211: is_redhat(): get_package_manager\\n++common_functions.sh:196: get_package_manager(): is_available rpm\\n++common_functions.sh:47: is_available(): command -v rpm\\n++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\\n++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\\n++start_osd.sh:5: source /etc/sysconfig/ceph\\n+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\\n+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\\n+/entrypoint.sh:79: OSD_TYPE=prepare\\n+/entrypoint.sh:80: start_osd\\n+start_osd.sh:11: start_osd(): get_config\\n+/config.static.sh:114: get_config(): log 'static: does not generate config'\\n+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\\n+common_functions.sh:11: log(): local timestamp\\n++common_functions.sh:12: log(): date '+%F %T'\\n+common_functions.sh:12: log(): timestamp='2018-09-21 12:28:57'\\n+common_functions.sh:13: log(): echo '2018-09-21 12:28:57 /entrypoint.sh: static: does not generate config'\\n+common_functions.sh:14: log(): return 0\\n+start_osd.sh:12: start_osd(): check_config\\n+common_functions.sh:19: check_config(): [[ ! -e /etc/ceph/ceph.conf ]]\\n+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\\n+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\\n+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\\n++osd_disk_prepare.sh:2: source(): set -e\\n+start_osd.sh:34: start_osd(): osd_disk_prepare\\n+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vdd ]]\\n+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vdd ]]\\n+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' 
-e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\\n+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\\n+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vdd print\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\\n+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\\n+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\\n+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\\n+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid 80ccd76c-7139-4f6b-8ec3-da3162342170 /dev/vdd\\ncommand: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\\ncommand: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nset_type: Will colocate journal with data on /dev/vdd\\ncommand: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. 
--lookup osd_mount_options_xfs\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nptype_tobe_for_name: name = journal\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\ncreate_partition: Creating journal partition num 2 size 512 on /dev/vdd\\ncommand_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:80ccd76c-7139-4f6b-8ec3-da3162342170 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vdd\\nupdate_partition: Calling partprobe on created device /dev/vdd\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdd /usr/sbin/partprobe /dev/vdd\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdd2 uuid path is /sys/dev/block/252:50/dm/uuid\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/80ccd76c-7139-4f6b-8ec3-da3162342170\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vdd\\nupdate_partition: Calling partprobe on prepared device /dev/vdd\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdd /usr/sbin/partprobe /dev/vdd\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/80ccd76c-7139-4f6b-8ec3-da3162342170\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nset_data_partition: Creating osd partition on /dev/vdd\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nptype_tobe_for_name: name = data\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\ncreate_partition: Creating data partition num 1 size 0 on /dev/vdd\\ncommand_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:da7e3f87-d7d7-4944-a730-14ca919cd237 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdd\\nupdate_partition: Calling partprobe on created device /dev/vdd\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdd /usr/sbin/partprobe /dev/vdd\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdd1 uuid path is /sys/dev/block/252:49/dm/uuid\\npopulate_data_path_device: Creating xfs fs on /dev/vdd1\\ncommand_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vdd1\\nmount: Mounting /dev/vdd1 on /var/lib/ceph/tmp/mnt.KykOSf with options noatime,largeio,inode64,swalloc\\ncommand_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vdd1 /var/lib/ceph/tmp/mnt.KykOSf\\ncommand: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.KykOSf\\npopulate_data_path: Preparing osd data dir 
/var/lib/ceph/tmp/mnt.KykOSf\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.KykOSf/ceph_fsid.19414.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.KykOSf/ceph_fsid.19414.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.KykOSf/fsid.19414.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.KykOSf/fsid.19414.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.KykOSf/magic.19414.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.KykOSf/magic.19414.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.KykOSf/journal_uuid.19414.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.KykOSf/journal_uuid.19414.tmp\\nadjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.KykOSf/journal -> /dev/disk/by-partuuid/80ccd76c-7139-4f6b-8ec3-da3162342170\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.KykOSf/type.19414.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.KykOSf/type.19414.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.KykOSf\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.KykOSf\\nunmount: Unmounting /var/lib/ceph/tmp/mnt.KykOSf\\ncommand_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.KykOSf\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdd\\nupdate_partition: Calling partprobe on prepared device /dev/vdd\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdd /usr/sbin/partprobe /dev/vdd\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vdd1\\n+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\\n+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\\n+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\\n+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\\n++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vdd 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdd\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdd ]]\\n++common_functions.sh:124: dev_part(): [[ d == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdd2\\n+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdd2\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! 
-e /dev/vdd2 ]; do echo '\\\\''Waiting for /dev/vdd2 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vdd 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdd\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdd ]]\\n++common_functions.sh:124: dev_part(): [[ d == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdd2\\n+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. /dev/vdd2\\n++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vdd 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdd\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdd ]]\\n++common_functions.sh:124: dev_part(): [[ d == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdd1\\n+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdd1\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdd1 ]; do echo '\\\\''Waiting for /dev/vdd1 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vdd 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdd\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdd ]]\\n++common_functions.sh:124: dev_part(): [[ d == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdd1\\n+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. /dev/vdd1\\n+/entrypoint.sh:189: exit 0\", \"stderr_lines\": [\"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\", \"+/entrypoint.sh:26: source /config.static.sh\", \"++/config.static.sh:2: set -e\", \"++/entrypoint.sh:36: to_lowercase OSD_CEPH_DISK_PREPARE\", \"++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\", \"+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\", \"+/entrypoint.sh:38: create_mandatory_directories\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname 
/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rbd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\", \"+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\", \"+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\", \"+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\", \"+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\", \"+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\", \"+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\", \"+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. '{}' ';'\", \"+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\", \"+/entrypoint.sh:78: source start_osd.sh\", \"++start_osd.sh:2: set -e\", \"++start_osd.sh:4: is_redhat\", \"++common_functions.sh:211: is_redhat(): get_package_manager\", \"++common_functions.sh:196: get_package_manager(): is_available rpm\", \"++common_functions.sh:47: is_available(): command -v rpm\", \"++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\", \"++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\", \"++start_osd.sh:5: source /etc/sysconfig/ceph\", \"+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\", \"+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\", \"+/entrypoint.sh:79: OSD_TYPE=prepare\", \"+/entrypoint.sh:80: start_osd\", \"+start_osd.sh:11: start_osd(): get_config\", \"+/config.static.sh:114: get_config(): log 'static: does not generate config'\", \"+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\", \"+common_functions.sh:11: log(): local timestamp\", \"++common_functions.sh:12: log(): date '+%F %T'\", \"+common_functions.sh:12: log(): timestamp='2018-09-21 12:28:57'\", \"+common_functions.sh:13: log(): echo '2018-09-21 12:28:57 /entrypoint.sh: static: does not generate config'\", \"+common_functions.sh:14: log(): return 0\", \"+start_osd.sh:12: start_osd(): check_config\", \"+common_functions.sh:19: check_config(): [[ ! 
-e /etc/ceph/ceph.conf ]]\", \"+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\", \"+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\", \"+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\", \"++osd_disk_prepare.sh:2: source(): set -e\", \"+start_osd.sh:34: start_osd(): osd_disk_prepare\", \"+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vdd ]]\", \"+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vdd ]]\", \"+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' -e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\", \"+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\", \"+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vdd print\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\", \"+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\", \"+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\", \"+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\", \"+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid 80ccd76c-7139-4f6b-8ec3-da3162342170 /dev/vdd\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\", \"command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"set_type: Will colocate journal with data on /dev/vdd\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. 
--lookup osd_mount_options_xfs\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"ptype_tobe_for_name: name = journal\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"create_partition: Creating journal partition num 2 size 512 on /dev/vdd\", \"command_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:80ccd76c-7139-4f6b-8ec3-da3162342170 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vdd\", \"update_partition: Calling partprobe on created device /dev/vdd\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdd /usr/sbin/partprobe /dev/vdd\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdd2 uuid path is /sys/dev/block/252:50/dm/uuid\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/80ccd76c-7139-4f6b-8ec3-da3162342170\", \"command_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vdd\", \"update_partition: Calling partprobe on prepared device /dev/vdd\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdd /usr/sbin/partprobe /dev/vdd\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/80ccd76c-7139-4f6b-8ec3-da3162342170\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"set_data_partition: Creating osd partition on /dev/vdd\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"ptype_tobe_for_name: name = data\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"create_partition: Creating data partition num 1 size 0 on /dev/vdd\", \"command_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:da7e3f87-d7d7-4944-a730-14ca919cd237 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdd\", \"update_partition: Calling partprobe on created device /dev/vdd\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdd /usr/sbin/partprobe /dev/vdd\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdd1 uuid path is /sys/dev/block/252:49/dm/uuid\", \"populate_data_path_device: Creating xfs fs on /dev/vdd1\", \"command_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vdd1\", \"mount: Mounting /dev/vdd1 on /var/lib/ceph/tmp/mnt.KykOSf with options noatime,largeio,inode64,swalloc\", \"command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vdd1 /var/lib/ceph/tmp/mnt.KykOSf\", \"command: Running 
command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.KykOSf\", \"populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.KykOSf\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.KykOSf/ceph_fsid.19414.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.KykOSf/ceph_fsid.19414.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.KykOSf/fsid.19414.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.KykOSf/fsid.19414.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.KykOSf/magic.19414.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.KykOSf/magic.19414.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.KykOSf/journal_uuid.19414.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.KykOSf/journal_uuid.19414.tmp\", \"adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.KykOSf/journal -> /dev/disk/by-partuuid/80ccd76c-7139-4f6b-8ec3-da3162342170\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.KykOSf/type.19414.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.KykOSf/type.19414.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.KykOSf\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.KykOSf\", \"unmount: Unmounting /var/lib/ceph/tmp/mnt.KykOSf\", \"command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.KykOSf\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"command_check_call: Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdd\", \"update_partition: Calling partprobe on prepared device /dev/vdd\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdd /usr/sbin/partprobe /dev/vdd\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vdd1\", \"+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\", \"+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\", \"+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\", \"+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\", \"++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vdd 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdd\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdd ]]\", \"++common_functions.sh:124: dev_part(): [[ d == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdd2\", \"+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdd2\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! 
-e /dev/vdd2 ]; do echo '\\\\''Waiting for /dev/vdd2 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vdd 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdd\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdd ]]\", \"++common_functions.sh:124: dev_part(): [[ d == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdd2\", \"+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. /dev/vdd2\", \"++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vdd 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdd\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdd ]]\", \"++common_functions.sh:124: dev_part(): [[ d == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdd1\", \"+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdd1\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdd1 ]; do echo '\\\\''Waiting for /dev/vdd1 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vdd 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdd\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdd ]]\", \"++common_functions.sh:124: dev_part(): [[ d == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdd1\", \"+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. /dev/vdd1\", \"+/entrypoint.sh:189: exit 0\"], \"stdout\": \"2018-09-21 12:28:57 /entrypoint.sh: VERBOSE: activating bash debugging mode.\\n2018-09-21 12:28:57 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\\n2018-09-21 12:28:57 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\\n2018-09-21 12:28:57 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\\nOSD_DEVICE=/dev/vdd\\nLC_ALL=C\\nOSD_BLUESTORE=0\\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\nOSD_JOURNAL_SIZE=512\\nPWD=/\\nCEPH_VERSION=luminous\\nSHLVL=1\\nHOME=/root\\nCEPH_POINT_RELEASE=\\nCLUSTER=ceph\\nOSD_DMCRYPT=0\\nCEPH_DAEMON=OSD_CEPH_DISK_PREPARE\\ncontainer=oci\\nDEBUG=verbose\\nOSD_FILESTORE=1\\n_=/usr/bin/env\\nownership of '/var/run/ceph/' retained as ceph:ceph\\nownership of '/var/lib/ceph/mon' retained as ceph:ceph\\nownership of '/var/lib/ceph/mon/ceph-ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/osd' retained as ceph:ceph\\nownership of '/var/lib/ceph/mds' retained as ceph:ceph\\nownership of '/var/lib/ceph/mds/ceph-ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/tmp.pqI4DtXAXN' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/ceph-disk.prepare.lock' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/tmp.EDdRQpFCLF' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/tmp/tmp.YiDfSdGwqv' from root:root to ceph:ceph\\nownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\\nownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-rgw' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-mds' retained as ceph:ceph\\nownership of 
'/var/lib/ceph/bootstrap-osd' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-rbd' retained as ceph:ceph\\nownership of '/var/lib/ceph/mgr' retained as ceph:ceph\\nownership of '/var/lib/ceph/mgr/ceph-ceph-0' retained as ceph:ceph\\n2018-09-21 12:28:57 /entrypoint.sh: static: does not generate config\\nHEALTH_OK\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nmeta-data=/dev/vdd1 isize=2048 agcount=4, agsize=688063 blks\\n = sectsz=512 attr=2, projid32bit=1\\n = crc=1 finobt=0, sparse=0\\ndata = bsize=4096 blocks=2752251, imaxpct=25\\n = sunit=0 swidth=0 blks\\nnaming =version 2 bsize=4096 ascii-ci=0 ftype=1\\nlog =internal log bsize=4096 blocks=2560, version=2\\n = sectsz=512 sunit=0 blks, lazy-count=1\\nrealtime =none extsz=4096 blocks=0, rtextents=0\\nThe operation has completed successfully.\\nchanged ownership of '/dev/vdd2' from root:disk to ceph:ceph\\nchanged ownership of '/dev/vdd1' from root:disk to ceph:ceph\", \"stdout_lines\": [\"2018-09-21 12:28:57 /entrypoint.sh: VERBOSE: activating bash debugging mode.\", \"2018-09-21 12:28:57 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\", \"2018-09-21 12:28:57 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\", \"2018-09-21 12:28:57 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\", \"OSD_DEVICE=/dev/vdd\", \"LC_ALL=C\", \"OSD_BLUESTORE=0\", \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\", \"OSD_JOURNAL_SIZE=512\", \"PWD=/\", \"CEPH_VERSION=luminous\", \"SHLVL=1\", \"HOME=/root\", \"CEPH_POINT_RELEASE=\", \"CLUSTER=ceph\", \"OSD_DMCRYPT=0\", \"CEPH_DAEMON=OSD_CEPH_DISK_PREPARE\", \"container=oci\", \"DEBUG=verbose\", \"OSD_FILESTORE=1\", \"_=/usr/bin/env\", \"ownership of '/var/run/ceph/' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mon' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mon/ceph-ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/osd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mds' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mds/ceph-ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/tmp.pqI4DtXAXN' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/ceph-disk.prepare.lock' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/tmp.EDdRQpFCLF' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/tmp/tmp.YiDfSdGwqv' from root:root to ceph:ceph\", \"ownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-rgw' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-mds' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-rbd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mgr' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mgr/ceph-ceph-0' retained as ceph:ceph\", \"2018-09-21 12:28:57 /entrypoint.sh: static: does not generate config\", \"HEALTH_OK\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", 
\"meta-data=/dev/vdd1 isize=2048 agcount=4, agsize=688063 blks\", \" = sectsz=512 attr=2, projid32bit=1\", \" = crc=1 finobt=0, sparse=0\", \"data = bsize=4096 blocks=2752251, imaxpct=25\", \" = sunit=0 swidth=0 blks\", \"naming =version 2 bsize=4096 ascii-ci=0 ftype=1\", \"log =internal log bsize=4096 blocks=2560, version=2\", \" = sectsz=512 sunit=0 blks, lazy-count=1\", \"realtime =none extsz=4096 blocks=0, rtextents=0\", \"The operation has completed successfully.\", \"changed ownership of '/dev/vdd2' from root:disk to ceph:ceph\", \"changed ownership of '/dev/vdd1' from root:disk to ceph:ceph\"]}\nchanged: [ceph-0] => (item=[{'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_item_label': u'/dev/vde', u'script': u\"unit 'MiB' print\", '_ansible_no_log': False, u'changed': False, 'failed': False, 'item': u'/dev/vde', u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vde', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vde', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 11264.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vde']) => {\"changed\": true, \"cmd\": \"docker run --net=host --pid=host --privileged=true --name=ceph-osd-prepare-ceph-0-vde -v /etc/ceph:/etc/ceph:z -v /var/lib/ceph/:/var/lib/ceph/:z -v /dev:/dev -v /etc/localtime:/etc/localtime:ro -e DEBUG=verbose -e CLUSTER=ceph -e CEPH_DAEMON=OSD_CEPH_DISK_PREPARE -e OSD_DEVICE=/dev/vde -e OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=0 -e OSD_JOURNAL_SIZE=512 192.168.24.1:8787/rhceph:3-12\", \"delta\": \"0:00:06.684474\", \"end\": \"2018-09-21 12:29:10.825789\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vde\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vde\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vde\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", \"part_type\": \"primary\", \"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vde\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vde\"], \"rc\": 0, \"start\": \"2018-09-21 12:29:04.141315\", \"stderr\": \"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\\n+/entrypoint.sh:26: source /config.static.sh\\n++/config.static.sh:2: set -e\\n++/entrypoint.sh:36: to_lowercase OSD_CEPH_DISK_PREPARE\\n++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\\n+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\\n+/entrypoint.sh:38: create_mandatory_directories\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' 
'$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rbd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rbd\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\\n+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\\n+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\\n+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\\n+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\\n+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\\n+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\\n+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. 
'{}' ';'\\n+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\\n+/entrypoint.sh:78: source start_osd.sh\\n++start_osd.sh:2: set -e\\n++start_osd.sh:4: is_redhat\\n++common_functions.sh:211: is_redhat(): get_package_manager\\n++common_functions.sh:196: get_package_manager(): is_available rpm\\n++common_functions.sh:47: is_available(): command -v rpm\\n++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\\n++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\\n++start_osd.sh:5: source /etc/sysconfig/ceph\\n+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\\n+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\\n+/entrypoint.sh:79: OSD_TYPE=prepare\\n+/entrypoint.sh:80: start_osd\\n+start_osd.sh:11: start_osd(): get_config\\n+/config.static.sh:114: get_config(): log 'static: does not generate config'\\n+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\\n+common_functions.sh:11: log(): local timestamp\\n++common_functions.sh:12: log(): date '+%F %T'\\n+common_functions.sh:12: log(): timestamp='2018-09-21 12:29:04'\\n+common_functions.sh:13: log(): echo '2018-09-21 12:29:04 /entrypoint.sh: static: does not generate config'\\n+common_functions.sh:14: log(): return 0\\n+start_osd.sh:12: start_osd(): check_config\\n+common_functions.sh:19: check_config(): [[ ! -e /etc/ceph/ceph.conf ]]\\n+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\\n+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\\n+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\\n++osd_disk_prepare.sh:2: source(): set -e\\n+start_osd.sh:34: start_osd(): osd_disk_prepare\\n+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vde ]]\\n+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vde ]]\\n+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' 
-e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\\n+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\\n+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vde print\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\\n+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\\n+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\\n+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\\n+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid 8f435e6d-db2f-4a09-b5f4-98704489f743 /dev/vde\\ncommand: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\\ncommand: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nset_type: Will colocate journal with data on /dev/vde\\ncommand: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. 
--lookup osd_mount_options_xfs\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nptype_tobe_for_name: name = journal\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\ncreate_partition: Creating journal partition num 2 size 512 on /dev/vde\\ncommand_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:8f435e6d-db2f-4a09-b5f4-98704489f743 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vde\\nupdate_partition: Calling partprobe on created device /dev/vde\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vde /usr/sbin/partprobe /dev/vde\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vde2 uuid path is /sys/dev/block/252:66/dm/uuid\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/8f435e6d-db2f-4a09-b5f4-98704489f743\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vde\\nupdate_partition: Calling partprobe on prepared device /dev/vde\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vde /usr/sbin/partprobe /dev/vde\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/8f435e6d-db2f-4a09-b5f4-98704489f743\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nset_data_partition: Creating osd partition on /dev/vde\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nptype_tobe_for_name: name = data\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\ncreate_partition: Creating data partition num 1 size 0 on /dev/vde\\ncommand_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:d07f48e0-7ddf-4490-8619-c5702168e946 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vde\\nupdate_partition: Calling partprobe on created device /dev/vde\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vde /usr/sbin/partprobe /dev/vde\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vde1 uuid path is /sys/dev/block/252:65/dm/uuid\\npopulate_data_path_device: Creating xfs fs on /dev/vde1\\ncommand_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vde1\\nmount: Mounting /dev/vde1 on /var/lib/ceph/tmp/mnt.Q1AShn with options noatime,largeio,inode64,swalloc\\ncommand_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vde1 /var/lib/ceph/tmp/mnt.Q1AShn\\ncommand: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.Q1AShn\\npopulate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.Q1AShn\\ncommand: Running command: /usr/sbin/restorecon -R 
/var/lib/ceph/tmp/mnt.Q1AShn/ceph_fsid.19672.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Q1AShn/ceph_fsid.19672.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Q1AShn/fsid.19672.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Q1AShn/fsid.19672.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Q1AShn/magic.19672.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Q1AShn/magic.19672.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Q1AShn/journal_uuid.19672.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Q1AShn/journal_uuid.19672.tmp\\nadjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.Q1AShn/journal -> /dev/disk/by-partuuid/8f435e6d-db2f-4a09-b5f4-98704489f743\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Q1AShn/type.19672.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Q1AShn/type.19672.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Q1AShn\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Q1AShn\\nunmount: Unmounting /var/lib/ceph/tmp/mnt.Q1AShn\\ncommand_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.Q1AShn\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vde\\nupdate_partition: Calling partprobe on prepared device /dev/vde\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vde /usr/sbin/partprobe /dev/vde\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vde1\\n+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\\n+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\\n+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\\n+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\\n++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vde 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vde\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vde ]]\\n++common_functions.sh:124: dev_part(): [[ e == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vde2\\n+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vde2\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! 
-e /dev/vde2 ]; do echo '\\\\''Waiting for /dev/vde2 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vde 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vde\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vde ]]\\n++common_functions.sh:124: dev_part(): [[ e == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vde2\\n+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. /dev/vde2\\n++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vde 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vde\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vde ]]\\n++common_functions.sh:124: dev_part(): [[ e == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vde1\\n+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vde1\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vde1 ]; do echo '\\\\''Waiting for /dev/vde1 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vde 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vde\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vde ]]\\n++common_functions.sh:124: dev_part(): [[ e == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vde1\\n+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. /dev/vde1\\n+/entrypoint.sh:189: exit 0\", \"stderr_lines\": [\"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\", \"+/entrypoint.sh:26: source /config.static.sh\", \"++/config.static.sh:2: set -e\", \"++/entrypoint.sh:36: to_lowercase OSD_CEPH_DISK_PREPARE\", \"++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\", \"+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\", \"+/entrypoint.sh:38: create_mandatory_directories\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname 
/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rbd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\", \"+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\", \"+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\", \"+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\", \"+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\", \"+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\", \"+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\", \"+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. '{}' ';'\", \"+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\", \"+/entrypoint.sh:78: source start_osd.sh\", \"++start_osd.sh:2: set -e\", \"++start_osd.sh:4: is_redhat\", \"++common_functions.sh:211: is_redhat(): get_package_manager\", \"++common_functions.sh:196: get_package_manager(): is_available rpm\", \"++common_functions.sh:47: is_available(): command -v rpm\", \"++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\", \"++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\", \"++start_osd.sh:5: source /etc/sysconfig/ceph\", \"+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\", \"+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\", \"+/entrypoint.sh:79: OSD_TYPE=prepare\", \"+/entrypoint.sh:80: start_osd\", \"+start_osd.sh:11: start_osd(): get_config\", \"+/config.static.sh:114: get_config(): log 'static: does not generate config'\", \"+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\", \"+common_functions.sh:11: log(): local timestamp\", \"++common_functions.sh:12: log(): date '+%F %T'\", \"+common_functions.sh:12: log(): timestamp='2018-09-21 12:29:04'\", \"+common_functions.sh:13: log(): echo '2018-09-21 12:29:04 /entrypoint.sh: static: does not generate config'\", \"+common_functions.sh:14: log(): return 0\", \"+start_osd.sh:12: start_osd(): check_config\", \"+common_functions.sh:19: check_config(): [[ ! 
-e /etc/ceph/ceph.conf ]]\", \"+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\", \"+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\", \"+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\", \"++osd_disk_prepare.sh:2: source(): set -e\", \"+start_osd.sh:34: start_osd(): osd_disk_prepare\", \"+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vde ]]\", \"+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vde ]]\", \"+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' -e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\", \"+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\", \"+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vde print\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\", \"+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\", \"+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\", \"+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\", \"+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid 8f435e6d-db2f-4a09-b5f4-98704489f743 /dev/vde\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\", \"command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"set_type: Will colocate journal with data on /dev/vde\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. 
--lookup osd_mount_options_xfs\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"ptype_tobe_for_name: name = journal\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"create_partition: Creating journal partition num 2 size 512 on /dev/vde\", \"command_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:8f435e6d-db2f-4a09-b5f4-98704489f743 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vde\", \"update_partition: Calling partprobe on created device /dev/vde\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vde /usr/sbin/partprobe /dev/vde\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vde2 uuid path is /sys/dev/block/252:66/dm/uuid\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/8f435e6d-db2f-4a09-b5f4-98704489f743\", \"command_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vde\", \"update_partition: Calling partprobe on prepared device /dev/vde\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vde /usr/sbin/partprobe /dev/vde\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/8f435e6d-db2f-4a09-b5f4-98704489f743\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"set_data_partition: Creating osd partition on /dev/vde\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"ptype_tobe_for_name: name = data\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"create_partition: Creating data partition num 1 size 0 on /dev/vde\", \"command_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:d07f48e0-7ddf-4490-8619-c5702168e946 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vde\", \"update_partition: Calling partprobe on created device /dev/vde\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vde /usr/sbin/partprobe /dev/vde\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vde1 uuid path is /sys/dev/block/252:65/dm/uuid\", \"populate_data_path_device: Creating xfs fs on /dev/vde1\", \"command_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vde1\", \"mount: Mounting /dev/vde1 on /var/lib/ceph/tmp/mnt.Q1AShn with options noatime,largeio,inode64,swalloc\", \"command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vde1 /var/lib/ceph/tmp/mnt.Q1AShn\", \"command: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.Q1AShn\", 
\"populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.Q1AShn\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Q1AShn/ceph_fsid.19672.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Q1AShn/ceph_fsid.19672.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Q1AShn/fsid.19672.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Q1AShn/fsid.19672.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Q1AShn/magic.19672.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Q1AShn/magic.19672.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Q1AShn/journal_uuid.19672.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Q1AShn/journal_uuid.19672.tmp\", \"adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.Q1AShn/journal -> /dev/disk/by-partuuid/8f435e6d-db2f-4a09-b5f4-98704489f743\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Q1AShn/type.19672.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Q1AShn/type.19672.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Q1AShn\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Q1AShn\", \"unmount: Unmounting /var/lib/ceph/tmp/mnt.Q1AShn\", \"command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.Q1AShn\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"command_check_call: Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vde\", \"update_partition: Calling partprobe on prepared device /dev/vde\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vde /usr/sbin/partprobe /dev/vde\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vde1\", \"+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\", \"+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\", \"+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\", \"+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\", \"++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vde 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vde\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vde ]]\", \"++common_functions.sh:124: dev_part(): [[ e == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vde2\", \"+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vde2\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! 
-e /dev/vde2 ]; do echo '\\\\''Waiting for /dev/vde2 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vde 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vde\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vde ]]\", \"++common_functions.sh:124: dev_part(): [[ e == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vde2\", \"+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. /dev/vde2\", \"++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vde 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vde\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vde ]]\", \"++common_functions.sh:124: dev_part(): [[ e == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vde1\", \"+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vde1\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vde1 ]; do echo '\\\\''Waiting for /dev/vde1 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vde 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vde\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vde ]]\", \"++common_functions.sh:124: dev_part(): [[ e == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vde1\", \"+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. /dev/vde1\", \"+/entrypoint.sh:189: exit 0\"], \"stdout\": \"2018-09-21 12:29:04 /entrypoint.sh: VERBOSE: activating bash debugging mode.\\n2018-09-21 12:29:04 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\\n2018-09-21 12:29:04 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\\n2018-09-21 12:29:04 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\\nOSD_DEVICE=/dev/vde\\nLC_ALL=C\\nOSD_BLUESTORE=0\\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\nOSD_JOURNAL_SIZE=512\\nPWD=/\\nCEPH_VERSION=luminous\\nSHLVL=1\\nHOME=/root\\nCEPH_POINT_RELEASE=\\nCLUSTER=ceph\\nOSD_DMCRYPT=0\\nCEPH_DAEMON=OSD_CEPH_DISK_PREPARE\\ncontainer=oci\\nDEBUG=verbose\\nOSD_FILESTORE=1\\n_=/usr/bin/env\\nownership of '/var/run/ceph/' retained as ceph:ceph\\nownership of '/var/lib/ceph/mon' retained as ceph:ceph\\nownership of '/var/lib/ceph/mon/ceph-ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/osd' retained as ceph:ceph\\nownership of '/var/lib/ceph/mds' retained as ceph:ceph\\nownership of '/var/lib/ceph/mds/ceph-ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/tmp.pqI4DtXAXN' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/ceph-disk.prepare.lock' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/tmp.EDdRQpFCLF' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/tmp.YiDfSdGwqv' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/tmp/tmp.q2MI4FHwGk' from root:root to ceph:ceph\\nownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\\nownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-rgw' retained as ceph:ceph\\nownership of 
'/var/lib/ceph/bootstrap-mds' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-rbd' retained as ceph:ceph\\nownership of '/var/lib/ceph/mgr' retained as ceph:ceph\\nownership of '/var/lib/ceph/mgr/ceph-ceph-0' retained as ceph:ceph\\n2018-09-21 12:29:04 /entrypoint.sh: static: does not generate config\\nHEALTH_OK\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nmeta-data=/dev/vde1 isize=2048 agcount=4, agsize=688063 blks\\n = sectsz=512 attr=2, projid32bit=1\\n = crc=1 finobt=0, sparse=0\\ndata = bsize=4096 blocks=2752251, imaxpct=25\\n = sunit=0 swidth=0 blks\\nnaming =version 2 bsize=4096 ascii-ci=0 ftype=1\\nlog =internal log bsize=4096 blocks=2560, version=2\\n = sectsz=512 sunit=0 blks, lazy-count=1\\nrealtime =none extsz=4096 blocks=0, rtextents=0\\nThe operation has completed successfully.\\nchanged ownership of '/dev/vde2' from root:disk to ceph:ceph\\nchanged ownership of '/dev/vde1' from root:disk to ceph:ceph\", \"stdout_lines\": [\"2018-09-21 12:29:04 /entrypoint.sh: VERBOSE: activating bash debugging mode.\", \"2018-09-21 12:29:04 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\", \"2018-09-21 12:29:04 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\", \"2018-09-21 12:29:04 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\", \"OSD_DEVICE=/dev/vde\", \"LC_ALL=C\", \"OSD_BLUESTORE=0\", \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\", \"OSD_JOURNAL_SIZE=512\", \"PWD=/\", \"CEPH_VERSION=luminous\", \"SHLVL=1\", \"HOME=/root\", \"CEPH_POINT_RELEASE=\", \"CLUSTER=ceph\", \"OSD_DMCRYPT=0\", \"CEPH_DAEMON=OSD_CEPH_DISK_PREPARE\", \"container=oci\", \"DEBUG=verbose\", \"OSD_FILESTORE=1\", \"_=/usr/bin/env\", \"ownership of '/var/run/ceph/' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mon' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mon/ceph-ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/osd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mds' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mds/ceph-ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/tmp.pqI4DtXAXN' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/ceph-disk.prepare.lock' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/tmp.EDdRQpFCLF' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/tmp.YiDfSdGwqv' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/tmp/tmp.q2MI4FHwGk' from root:root to ceph:ceph\", \"ownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-rgw' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-mds' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-rbd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mgr' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mgr/ceph-ceph-0' retained as ceph:ceph\", \"2018-09-21 12:29:04 /entrypoint.sh: static: does not generate config\", 
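Note: each of these prepare passes is a one-shot run of the rhceph container against a single disk; the invocation recorded below for /dev/vdf (the /dev/vde run is identical apart from the device and container name) is:

    docker run --net=host --pid=host --privileged=true \
        --name=ceph-osd-prepare-ceph-0-vdf \
        -v /etc/ceph:/etc/ceph:z -v /var/lib/ceph/:/var/lib/ceph/:z \
        -v /dev:/dev -v /etc/localtime:/etc/localtime:ro \
        -e DEBUG=verbose -e CLUSTER=ceph \
        -e CEPH_DAEMON=OSD_CEPH_DISK_PREPARE \
        -e OSD_DEVICE=/dev/vdf \
        -e OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=0 \
        -e OSD_JOURNAL_SIZE=512 \
        192.168.24.1:8787/rhceph:3-12

Per the xtrace, the entrypoint lower-cases CEPH_DAEMON, sources start_osd.sh with OSD_TYPE=prepare, and ends up at osd_disk_prepare.sh:52 running `ceph-disk -v prepare --cluster ceph --filestore --journal-uuid <uuid> <device>`. Host networking, the host PID namespace, and --privileged with /dev bind-mounted are what let ceph-disk partition the host's block devices from inside the container.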
\"HEALTH_OK\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"meta-data=/dev/vde1 isize=2048 agcount=4, agsize=688063 blks\", \" = sectsz=512 attr=2, projid32bit=1\", \" = crc=1 finobt=0, sparse=0\", \"data = bsize=4096 blocks=2752251, imaxpct=25\", \" = sunit=0 swidth=0 blks\", \"naming =version 2 bsize=4096 ascii-ci=0 ftype=1\", \"log =internal log bsize=4096 blocks=2560, version=2\", \" = sectsz=512 sunit=0 blks, lazy-count=1\", \"realtime =none extsz=4096 blocks=0, rtextents=0\", \"The operation has completed successfully.\", \"changed ownership of '/dev/vde2' from root:disk to ceph:ceph\", \"changed ownership of '/dev/vde1' from root:disk to ceph:ceph\"]}\nchanged: [ceph-0] => (item=[{'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_item_label': u'/dev/vdf', u'script': u\"unit 'MiB' print\", '_ansible_no_log': False, u'changed': False, 'failed': False, 'item': u'/dev/vdf', u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vdf', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vdf', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 11264.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vdf']) => {\"changed\": true, \"cmd\": \"docker run --net=host --pid=host --privileged=true --name=ceph-osd-prepare-ceph-0-vdf -v /etc/ceph:/etc/ceph:z -v /var/lib/ceph/:/var/lib/ceph/:z -v /dev:/dev -v /etc/localtime:/etc/localtime:ro -e DEBUG=verbose -e CLUSTER=ceph -e CEPH_DAEMON=OSD_CEPH_DISK_PREPARE -e OSD_DEVICE=/dev/vdf -e OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=0 -e OSD_JOURNAL_SIZE=512 192.168.24.1:8787/rhceph:3-12\", \"delta\": \"0:00:06.991972\", \"end\": \"2018-09-21 12:29:17.997368\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdf\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vdf\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vdf\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", \"part_type\": \"primary\", \"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vdf\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vdf\"], \"rc\": 0, \"start\": \"2018-09-21 12:29:11.005396\", \"stderr\": \"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\\n+/entrypoint.sh:26: source /config.static.sh\\n++/config.static.sh:2: set -e\\n++/entrypoint.sh:36: to_lowercase OSD_CEPH_DISK_PREPARE\\n++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\\n+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\\n+/entrypoint.sh:38: create_mandatory_directories\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p 
/var/lib/ceph/bootstrap-osd\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rbd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rbd\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\\n+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\\n+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\\n+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\\n+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\\n+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\\n+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\\n+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. 
'{}' ';'\\n+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\\n+/entrypoint.sh:78: source start_osd.sh\\n++start_osd.sh:2: set -e\\n++start_osd.sh:4: is_redhat\\n++common_functions.sh:211: is_redhat(): get_package_manager\\n++common_functions.sh:196: get_package_manager(): is_available rpm\\n++common_functions.sh:47: is_available(): command -v rpm\\n++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\\n++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\\n++start_osd.sh:5: source /etc/sysconfig/ceph\\n+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\\n+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\\n+/entrypoint.sh:79: OSD_TYPE=prepare\\n+/entrypoint.sh:80: start_osd\\n+start_osd.sh:11: start_osd(): get_config\\n+/config.static.sh:114: get_config(): log 'static: does not generate config'\\n+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\\n+common_functions.sh:11: log(): local timestamp\\n++common_functions.sh:12: log(): date '+%F %T'\\n+common_functions.sh:12: log(): timestamp='2018-09-21 12:29:11'\\n+common_functions.sh:13: log(): echo '2018-09-21 12:29:11 /entrypoint.sh: static: does not generate config'\\n+common_functions.sh:14: log(): return 0\\n+start_osd.sh:12: start_osd(): check_config\\n+common_functions.sh:19: check_config(): [[ ! -e /etc/ceph/ceph.conf ]]\\n+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\\n+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\\n+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\\n++osd_disk_prepare.sh:2: source(): set -e\\n+start_osd.sh:34: start_osd(): osd_disk_prepare\\n+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vdf ]]\\n+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vdf ]]\\n+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' 
-e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\\n+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\\n+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vdf print\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\\n+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\\n+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\\n+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\\n+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid f2cb922f-939c-4d73-8d3b-1a56c6c856b7 /dev/vdf\\ncommand: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\\ncommand: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nset_type: Will colocate journal with data on /dev/vdf\\ncommand: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. 
--lookup osd_mount_options_xfs\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nptype_tobe_for_name: name = journal\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\ncreate_partition: Creating journal partition num 2 size 512 on /dev/vdf\\ncommand_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:f2cb922f-939c-4d73-8d3b-1a56c6c856b7 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vdf\\nupdate_partition: Calling partprobe on created device /dev/vdf\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdf /usr/sbin/partprobe /dev/vdf\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdf2 uuid path is /sys/dev/block/252:82/dm/uuid\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/f2cb922f-939c-4d73-8d3b-1a56c6c856b7\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vdf\\nupdate_partition: Calling partprobe on prepared device /dev/vdf\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdf /usr/sbin/partprobe /dev/vdf\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/f2cb922f-939c-4d73-8d3b-1a56c6c856b7\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nset_data_partition: Creating osd partition on /dev/vdf\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nptype_tobe_for_name: name = data\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\ncreate_partition: Creating data partition num 1 size 0 on /dev/vdf\\ncommand_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:9489d099-5c06-4e96-85f5-bb30642ff473 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdf\\nupdate_partition: Calling partprobe on created device /dev/vdf\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdf /usr/sbin/partprobe /dev/vdf\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdf1 uuid path is /sys/dev/block/252:81/dm/uuid\\npopulate_data_path_device: Creating xfs fs on /dev/vdf1\\ncommand_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vdf1\\nmount: Mounting /dev/vdf1 on /var/lib/ceph/tmp/mnt.wLZfUa with options noatime,largeio,inode64,swalloc\\ncommand_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vdf1 /var/lib/ceph/tmp/mnt.wLZfUa\\ncommand: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.wLZfUa\\npopulate_data_path: Preparing osd data dir 
/var/lib/ceph/tmp/mnt.wLZfUa\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.wLZfUa/ceph_fsid.19931.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.wLZfUa/ceph_fsid.19931.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.wLZfUa/fsid.19931.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.wLZfUa/fsid.19931.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.wLZfUa/magic.19931.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.wLZfUa/magic.19931.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.wLZfUa/journal_uuid.19931.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.wLZfUa/journal_uuid.19931.tmp\\nadjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.wLZfUa/journal -> /dev/disk/by-partuuid/f2cb922f-939c-4d73-8d3b-1a56c6c856b7\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.wLZfUa/type.19931.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.wLZfUa/type.19931.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.wLZfUa\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.wLZfUa\\nunmount: Unmounting /var/lib/ceph/tmp/mnt.wLZfUa\\ncommand_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.wLZfUa\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdf\\nupdate_partition: Calling partprobe on prepared device /dev/vdf\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdf /usr/sbin/partprobe /dev/vdf\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vdf1\\n+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\\n+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\\n+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\\n+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\\n++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vdf 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdf\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdf ]]\\n++common_functions.sh:124: dev_part(): [[ f == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdf2\\n+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdf2\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! 
-e /dev/vdf2 ]; do echo '\\\\''Waiting for /dev/vdf2 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vdf 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdf\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdf ]]\\n++common_functions.sh:124: dev_part(): [[ f == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdf2\\n+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. /dev/vdf2\\n++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vdf 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdf\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdf ]]\\n++common_functions.sh:124: dev_part(): [[ f == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdf1\\n+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdf1\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdf1 ]; do echo '\\\\''Waiting for /dev/vdf1 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vdf 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdf\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdf ]]\\n++common_functions.sh:124: dev_part(): [[ f == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdf1\\n+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. /dev/vdf1\\n+/entrypoint.sh:189: exit 0\", \"stderr_lines\": [\"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\", \"+/entrypoint.sh:26: source /config.static.sh\", \"++/config.static.sh:2: set -e\", \"++/entrypoint.sh:36: to_lowercase OSD_CEPH_DISK_PREPARE\", \"++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\", \"+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\", \"+/entrypoint.sh:38: create_mandatory_directories\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname 
/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rbd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\", \"+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\", \"+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\", \"+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\", \"+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\", \"+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\", \"+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\", \"+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. '{}' ';'\", \"+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\", \"+/entrypoint.sh:78: source start_osd.sh\", \"++start_osd.sh:2: set -e\", \"++start_osd.sh:4: is_redhat\", \"++common_functions.sh:211: is_redhat(): get_package_manager\", \"++common_functions.sh:196: get_package_manager(): is_available rpm\", \"++common_functions.sh:47: is_available(): command -v rpm\", \"++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\", \"++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\", \"++start_osd.sh:5: source /etc/sysconfig/ceph\", \"+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\", \"+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\", \"+/entrypoint.sh:79: OSD_TYPE=prepare\", \"+/entrypoint.sh:80: start_osd\", \"+start_osd.sh:11: start_osd(): get_config\", \"+/config.static.sh:114: get_config(): log 'static: does not generate config'\", \"+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\", \"+common_functions.sh:11: log(): local timestamp\", \"++common_functions.sh:12: log(): date '+%F %T'\", \"+common_functions.sh:12: log(): timestamp='2018-09-21 12:29:11'\", \"+common_functions.sh:13: log(): echo '2018-09-21 12:29:11 /entrypoint.sh: static: does not generate config'\", \"+common_functions.sh:14: log(): return 0\", \"+start_osd.sh:12: start_osd(): check_config\", \"+common_functions.sh:19: check_config(): [[ ! 
-e /etc/ceph/ceph.conf ]]\", \"+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\", \"+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\", \"+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\", \"++osd_disk_prepare.sh:2: source(): set -e\", \"+start_osd.sh:34: start_osd(): osd_disk_prepare\", \"+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vdf ]]\", \"+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vdf ]]\", \"+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' -e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\", \"+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\", \"+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vdf print\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\", \"+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\", \"+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\", \"+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\", \"+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid f2cb922f-939c-4d73-8d3b-1a56c6c856b7 /dev/vdf\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\", \"command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"set_type: Will colocate journal with data on /dev/vdf\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. 
--lookup osd_mount_options_xfs\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"ptype_tobe_for_name: name = journal\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"create_partition: Creating journal partition num 2 size 512 on /dev/vdf\", \"command_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:f2cb922f-939c-4d73-8d3b-1a56c6c856b7 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vdf\", \"update_partition: Calling partprobe on created device /dev/vdf\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdf /usr/sbin/partprobe /dev/vdf\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdf2 uuid path is /sys/dev/block/252:82/dm/uuid\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/f2cb922f-939c-4d73-8d3b-1a56c6c856b7\", \"command_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vdf\", \"update_partition: Calling partprobe on prepared device /dev/vdf\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdf /usr/sbin/partprobe /dev/vdf\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/f2cb922f-939c-4d73-8d3b-1a56c6c856b7\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"set_data_partition: Creating osd partition on /dev/vdf\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"ptype_tobe_for_name: name = data\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"create_partition: Creating data partition num 1 size 0 on /dev/vdf\", \"command_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:9489d099-5c06-4e96-85f5-bb30642ff473 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdf\", \"update_partition: Calling partprobe on created device /dev/vdf\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdf /usr/sbin/partprobe /dev/vdf\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdf1 uuid path is /sys/dev/block/252:81/dm/uuid\", \"populate_data_path_device: Creating xfs fs on /dev/vdf1\", \"command_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vdf1\", \"mount: Mounting /dev/vdf1 on /var/lib/ceph/tmp/mnt.wLZfUa with options noatime,largeio,inode64,swalloc\", \"command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vdf1 /var/lib/ceph/tmp/mnt.wLZfUa\", \"command: Running 
command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.wLZfUa\", \"populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.wLZfUa\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.wLZfUa/ceph_fsid.19931.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.wLZfUa/ceph_fsid.19931.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.wLZfUa/fsid.19931.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.wLZfUa/fsid.19931.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.wLZfUa/magic.19931.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.wLZfUa/magic.19931.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.wLZfUa/journal_uuid.19931.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.wLZfUa/journal_uuid.19931.tmp\", \"adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.wLZfUa/journal -> /dev/disk/by-partuuid/f2cb922f-939c-4d73-8d3b-1a56c6c856b7\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.wLZfUa/type.19931.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.wLZfUa/type.19931.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.wLZfUa\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.wLZfUa\", \"unmount: Unmounting /var/lib/ceph/tmp/mnt.wLZfUa\", \"command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.wLZfUa\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"command_check_call: Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdf\", \"update_partition: Calling partprobe on prepared device /dev/vdf\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdf /usr/sbin/partprobe /dev/vdf\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vdf1\", \"+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\", \"+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\", \"+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\", \"+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\", \"++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vdf 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdf\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdf ]]\", \"++common_functions.sh:124: dev_part(): [[ f == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdf2\", \"+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdf2\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! 
-e /dev/vdf2 ]; do echo '\\\\''Waiting for /dev/vdf2 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vdf 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdf\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdf ]]\", \"++common_functions.sh:124: dev_part(): [[ f == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdf2\", \"+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. /dev/vdf2\", \"++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vdf 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdf\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdf ]]\", \"++common_functions.sh:124: dev_part(): [[ f == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdf1\", \"+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdf1\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdf1 ]; do echo '\\\\''Waiting for /dev/vdf1 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vdf 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdf\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdf ]]\", \"++common_functions.sh:124: dev_part(): [[ f == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdf1\", \"+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. /dev/vdf1\", \"+/entrypoint.sh:189: exit 0\"], \"stdout\": \"2018-09-21 12:29:11 /entrypoint.sh: VERBOSE: activating bash debugging mode.\\n2018-09-21 12:29:11 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\\n2018-09-21 12:29:11 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\\n2018-09-21 12:29:11 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\\nOSD_DEVICE=/dev/vdf\\nLC_ALL=C\\nOSD_BLUESTORE=0\\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\nOSD_JOURNAL_SIZE=512\\nPWD=/\\nCEPH_VERSION=luminous\\nSHLVL=1\\nHOME=/root\\nCEPH_POINT_RELEASE=\\nCLUSTER=ceph\\nOSD_DMCRYPT=0\\nCEPH_DAEMON=OSD_CEPH_DISK_PREPARE\\ncontainer=oci\\nDEBUG=verbose\\nOSD_FILESTORE=1\\n_=/usr/bin/env\\nownership of '/var/run/ceph/' retained as ceph:ceph\\nownership of '/var/lib/ceph/mon' retained as ceph:ceph\\nownership of '/var/lib/ceph/mon/ceph-ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/osd' retained as ceph:ceph\\nownership of '/var/lib/ceph/mds' retained as ceph:ceph\\nownership of '/var/lib/ceph/mds/ceph-ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/tmp.pqI4DtXAXN' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/ceph-disk.prepare.lock' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/tmp.EDdRQpFCLF' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/tmp.YiDfSdGwqv' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/tmp.q2MI4FHwGk' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/tmp/tmp.L5dDVJyOWZ' from root:root to ceph:ceph\\nownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\\nownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' retained as ceph:ceph\\nownership of 
'/var/lib/ceph/bootstrap-rgw' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-mds' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-rbd' retained as ceph:ceph\\nownership of '/var/lib/ceph/mgr' retained as ceph:ceph\\nownership of '/var/lib/ceph/mgr/ceph-ceph-0' retained as ceph:ceph\\n2018-09-21 12:29:11 /entrypoint.sh: static: does not generate config\\nHEALTH_OK\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nmeta-data=/dev/vdf1 isize=2048 agcount=4, agsize=688063 blks\\n = sectsz=512 attr=2, projid32bit=1\\n = crc=1 finobt=0, sparse=0\\ndata = bsize=4096 blocks=2752251, imaxpct=25\\n = sunit=0 swidth=0 blks\\nnaming =version 2 bsize=4096 ascii-ci=0 ftype=1\\nlog =internal log bsize=4096 blocks=2560, version=2\\n = sectsz=512 sunit=0 blks, lazy-count=1\\nrealtime =none extsz=4096 blocks=0, rtextents=0\\nThe operation has completed successfully.\\nchanged ownership of '/dev/vdf2' from root:disk to ceph:ceph\\nchanged ownership of '/dev/vdf1' from root:disk to ceph:ceph\", \"stdout_lines\": [\"2018-09-21 12:29:11 /entrypoint.sh: VERBOSE: activating bash debugging mode.\", \"2018-09-21 12:29:11 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\", \"2018-09-21 12:29:11 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\", \"2018-09-21 12:29:11 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\", \"OSD_DEVICE=/dev/vdf\", \"LC_ALL=C\", \"OSD_BLUESTORE=0\", \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\", \"OSD_JOURNAL_SIZE=512\", \"PWD=/\", \"CEPH_VERSION=luminous\", \"SHLVL=1\", \"HOME=/root\", \"CEPH_POINT_RELEASE=\", \"CLUSTER=ceph\", \"OSD_DMCRYPT=0\", \"CEPH_DAEMON=OSD_CEPH_DISK_PREPARE\", \"container=oci\", \"DEBUG=verbose\", \"OSD_FILESTORE=1\", \"_=/usr/bin/env\", \"ownership of '/var/run/ceph/' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mon' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mon/ceph-ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/osd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mds' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mds/ceph-ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/tmp.pqI4DtXAXN' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/ceph-disk.prepare.lock' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/tmp.EDdRQpFCLF' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/tmp.YiDfSdGwqv' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/tmp.q2MI4FHwGk' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/tmp/tmp.L5dDVJyOWZ' from root:root to ceph:ceph\", \"ownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-rgw' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-mds' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-rbd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mgr' retained as ceph:ceph\", \"ownership of 
'/var/lib/ceph/mgr/ceph-ceph-0' retained as ceph:ceph\", \"2018-09-21 12:29:11 /entrypoint.sh: static: does not generate config\", \"HEALTH_OK\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"meta-data=/dev/vdf1 isize=2048 agcount=4, agsize=688063 blks\", \" = sectsz=512 attr=2, projid32bit=1\", \" = crc=1 finobt=0, sparse=0\", \"data = bsize=4096 blocks=2752251, imaxpct=25\", \" = sunit=0 swidth=0 blks\", \"naming =version 2 bsize=4096 ascii-ci=0 ftype=1\", \"log =internal log bsize=4096 blocks=2560, version=2\", \" = sectsz=512 sunit=0 blks, lazy-count=1\", \"realtime =none extsz=4096 blocks=0, rtextents=0\", \"The operation has completed successfully.\", \"changed ownership of '/dev/vdf2' from root:disk to ceph:ceph\", \"changed ownership of '/dev/vdf1' from root:disk to ceph:ceph\"]}\n\nTASK [ceph-osd : automatic prepare ceph containerized osd disk collocated] *****\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/scenarios/collocated.yml:30\nFriday 21 September 2018 08:29:18 -0400 (0:00:35.433) 0:03:00.173 ****** \nskipping: [ceph-0] => (item=/dev/vdb) => {\"changed\": false, \"item\": \"/dev/vdb\", \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => (item=/dev/vdc) => {\"changed\": false, \"item\": \"/dev/vdc\", \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => (item=/dev/vdd) => {\"changed\": false, \"item\": \"/dev/vdd\", \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => (item=/dev/vde) => {\"changed\": false, \"item\": \"/dev/vde\", \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => (item=/dev/vdf) => {\"changed\": false, \"item\": \"/dev/vdf\", \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : manually prepare ceph \"filestore\" non-containerized osd disk(s) with collocated osd data and journal] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/scenarios/collocated.yml:53\nFriday 21 September 2018 08:29:18 -0400 (0:00:00.076) 0:03:00.249 ****** \nskipping: [ceph-0] => (item=[{'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_item_label': u'/dev/vdb', u'script': u\"unit 'MiB' print\", '_ansible_no_log': False, u'changed': False, 'failed': False, 'item': u'/dev/vdb', u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vdb', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vdb', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 11264.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vdb']) => {\"changed\": false, \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdb\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vdb\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vdb\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", \"part_type\": \"primary\", \"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": 
\"/dev/vdb\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vdb\"], \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => (item=[{'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_item_label': u'/dev/vdc', u'script': u\"unit 'MiB' print\", '_ansible_no_log': False, u'changed': False, 'failed': False, 'item': u'/dev/vdc', u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vdc', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vdc', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 11264.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vdc']) => {\"changed\": false, \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdc\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vdc\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vdc\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", \"part_type\": \"primary\", \"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vdc\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vdc\"], \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => (item=[{'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_item_label': u'/dev/vdd', u'script': u\"unit 'MiB' print\", '_ansible_no_log': False, u'changed': False, 'failed': False, 'item': u'/dev/vdd', u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vdd', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vdd', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 11264.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vdd']) => {\"changed\": false, \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdd\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vdd\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vdd\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", \"part_type\": \"primary\", \"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vdd\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vdd\"], \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => (item=[{'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_item_label': u'/dev/vde', u'script': u\"unit 'MiB' print\", '_ansible_no_log': False, u'changed': False, 'failed': False, 'item': u'/dev/vde', u'invocation': {u'module_args': 
{u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vde', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vde', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 11264.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vde']) => {\"changed\": false, \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vde\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vde\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vde\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", \"part_type\": \"primary\", \"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vde\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vde\"], \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => (item=[{'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_item_label': u'/dev/vdf', u'script': u\"unit 'MiB' print\", '_ansible_no_log': False, u'changed': False, 'failed': False, 'item': u'/dev/vdf', u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vdf', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vdf', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 11264.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vdf']) => {\"changed\": false, \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdf\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vdf\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vdf\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", \"part_type\": \"primary\", \"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vdf\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vdf\"], \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : include scenarios/non-collocated.yml] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:48\nFriday 21 September 2018 08:29:18 -0400 (0:00:00.116) 0:03:00.365 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : include scenarios/lvm.yml] ************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:56\nFriday 21 September 2018 08:29:18 -0400 (0:00:00.050) 0:03:00.415 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : include 
activate_osds.yml] ************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:64\nFriday 21 September 2018 08:29:18 -0400 (0:00:00.044) 0:03:00.460 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : include start_osds.yml] ***************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:72\nFriday 21 September 2018 08:29:18 -0400 (0:00:00.044) 0:03:00.504 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : include docker/main.yml] **************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:80\nFriday 21 September 2018 08:29:18 -0400 (0:00:00.043) 0:03:00.548 ****** \nincluded: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/main.yml for ceph-0\n\nTASK [ceph-osd : include start_docker_osd.yml] *********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/main.yml:2\nFriday 21 September 2018 08:29:18 -0400 (0:00:00.099) 0:03:00.648 ****** \nincluded: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/start_docker_osd.yml for ceph-0\n\nTASK [ceph-osd : umount ceph disk (if on openstack)] ***************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/start_docker_osd.yml:4\nFriday 21 September 2018 08:29:18 -0400 (0:00:00.079) 0:03:00.727 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : test if the container image has the disk_list function] *******\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/start_docker_osd.yml:13\nFriday 21 September 2018 08:29:18 -0400 (0:00:00.053) 0:03:00.781 ****** \nok: [ceph-0] => {\"changed\": false, \"cmd\": [\"docker\", \"run\", \"--rm\", \"--entrypoint=stat\", \"192.168.24.1:8787/rhceph:3-12\", \"disk_list.sh\"], \"delta\": \"0:00:00.304343\", \"end\": \"2018-09-21 12:29:19.163492\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-09-21 12:29:18.859149\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \" File: 'disk_list.sh'\\n Size: 4074 \\tBlocks: 8 IO Block: 4096 regular file\\nDevice: 2ah/42d\\tInode: 25321704 Links: 1\\nAccess: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)\\nAccess: 2018-08-06 22:27:40.000000000 +0000\\nModify: 2018-08-06 22:27:40.000000000 +0000\\nChange: 2018-09-21 12:28:19.703239788 +0000\\n Birth: -\", \"stdout_lines\": [\" File: 'disk_list.sh'\", \" Size: 4074 \\tBlocks: 8 IO Block: 4096 regular file\", \"Device: 2ah/42d\\tInode: 25321704 Links: 1\", \"Access: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)\", \"Access: 2018-08-06 22:27:40.000000000 +0000\", \"Modify: 2018-08-06 22:27:40.000000000 +0000\", \"Change: 2018-09-21 12:28:19.703239788 +0000\", \" Birth: -\"]}\n\nTASK [ceph-osd : generate ceph osd docker run script] **************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/start_docker_osd.yml:19\nFriday 21 September 2018 08:29:19 -0400 (0:00:00.538) 0:03:01.319 ****** \nchanged: [ceph-0] => {\"changed\": true, \"checksum\": \"5542e950125b3dbd25e146575a148538f90dc2a6\", \"dest\": \"/usr/share/ceph-osd-run.sh\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"81913dc490826e0e8f21ed305bd0867e\", \"mode\": \"0744\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:usr_t:s0\", \"size\": 964, \"src\": 
\"/tmp/ceph_ansible_tmp/ansible-tmp-1537532959.25-114700994477229/source\", \"state\": \"file\", \"uid\": 0}\n\nTASK [ceph-osd : generate systemd unit file] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/start_docker_osd.yml:30\nFriday 21 September 2018 08:29:20 -0400 (0:00:00.878) 0:03:02.197 ****** \nchanged: [ceph-0] => {\"changed\": true, \"checksum\": \"b7abfb86a4af8d6e54d349965cae96bf9b995c49\", \"dest\": \"/etc/systemd/system/ceph-osd@.service\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"8a53f95e6590750e7c4807589dd5864c\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:systemd_unit_file_t:s0\", \"size\": 496, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1537532960.27-84518264433441/source\", \"state\": \"file\", \"uid\": 0}\n\nTASK [ceph-osd : systemd start osd container] **********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/start_docker_osd.yml:41\nFriday 21 September 2018 08:29:21 -0400 (0:00:01.066) 0:03:03.264 ****** \nchanged: [ceph-0] => (item=/dev/vdb) => {\"changed\": true, \"enabled\": true, \"item\": \"/dev/vdb\", \"name\": \"ceph-osd@vdb\", \"state\": \"started\", \"status\": {\"ActiveEnterTimestampMonotonic\": \"0\", \"ActiveExitTimestampMonotonic\": \"0\", \"ActiveState\": \"inactive\", \"After\": \"systemd-journald.socket docker.service system-ceph\\\\x5cx2dosd.slice basic.target\", \"AllowIsolate\": \"no\", \"AmbientCapabilities\": \"0\", \"AssertResult\": \"no\", \"AssertTimestampMonotonic\": \"0\", \"Before\": \"shutdown.target\", \"BlockIOAccounting\": \"no\", \"BlockIOWeight\": \"18446744073709551615\", \"CPUAccounting\": \"no\", \"CPUQuotaPerSecUSec\": \"infinity\", \"CPUSchedulingPolicy\": \"0\", \"CPUSchedulingPriority\": \"0\", \"CPUSchedulingResetOnFork\": \"no\", \"CPUShares\": \"18446744073709551615\", \"CanIsolate\": \"no\", \"CanReload\": \"no\", \"CanStart\": \"yes\", \"CanStop\": \"yes\", \"CapabilityBoundingSet\": \"18446744073709551615\", \"ConditionResult\": \"no\", \"ConditionTimestampMonotonic\": \"0\", \"Conflicts\": \"shutdown.target\", \"ControlPID\": \"0\", \"DefaultDependencies\": \"yes\", \"Delegate\": \"no\", \"Description\": \"Ceph OSD\", \"DevicePolicy\": \"auto\", \"EnvironmentFile\": \"/etc/environment (ignore_errors=yes)\", \"ExecMainCode\": \"0\", \"ExecMainExitTimestampMonotonic\": \"0\", \"ExecMainPID\": \"0\", \"ExecMainStartTimestampMonotonic\": \"0\", \"ExecMainStatus\": \"0\", \"ExecStart\": \"{ path=/usr/share/ceph-osd-run.sh ; argv[]=/usr/share/ceph-osd-run.sh %i ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStartPre\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker rm -f ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStop\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker stop ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"FailureAction\": \"none\", \"FileDescriptorStoreMax\": \"0\", \"FragmentPath\": \"/etc/systemd/system/ceph-osd@.service\", \"GuessMainPID\": \"yes\", \"IOScheduling\": \"0\", \"Id\": \"ceph-osd@vdb.service\", \"IgnoreOnIsolate\": \"no\", \"IgnoreOnSnapshot\": \"no\", \"IgnoreSIGPIPE\": \"yes\", \"InactiveEnterTimestampMonotonic\": \"0\", \"InactiveExitTimestampMonotonic\": \"0\", \"JobTimeoutAction\": \"none\", \"JobTimeoutUSec\": \"0\", \"KillMode\": \"control-group\", 
\"KillSignal\": \"15\", \"LimitAS\": \"18446744073709551615\", \"LimitCORE\": \"18446744073709551615\", \"LimitCPU\": \"18446744073709551615\", \"LimitDATA\": \"18446744073709551615\", \"LimitFSIZE\": \"18446744073709551615\", \"LimitLOCKS\": \"18446744073709551615\", \"LimitMEMLOCK\": \"65536\", \"LimitMSGQUEUE\": \"819200\", \"LimitNICE\": \"0\", \"LimitNOFILE\": \"4096\", \"LimitNPROC\": \"22973\", \"LimitRSS\": \"18446744073709551615\", \"LimitRTPRIO\": \"0\", \"LimitRTTIME\": \"18446744073709551615\", \"LimitSIGPENDING\": \"22973\", \"LimitSTACK\": \"18446744073709551615\", \"LoadState\": \"loaded\", \"MainPID\": \"0\", \"MemoryAccounting\": \"no\", \"MemoryCurrent\": \"18446744073709551615\", \"MemoryLimit\": \"18446744073709551615\", \"MountFlags\": \"0\", \"Names\": \"ceph-osd@vdb.service\", \"NeedDaemonReload\": \"no\", \"Nice\": \"0\", \"NoNewPrivileges\": \"no\", \"NonBlocking\": \"no\", \"NotifyAccess\": \"none\", \"OOMScoreAdjust\": \"0\", \"OnFailureJobMode\": \"replace\", \"PermissionsStartOnly\": \"no\", \"PrivateDevices\": \"no\", \"PrivateNetwork\": \"no\", \"PrivateTmp\": \"no\", \"ProtectHome\": \"no\", \"ProtectSystem\": \"no\", \"RefuseManualStart\": \"no\", \"RefuseManualStop\": \"no\", \"RemainAfterExit\": \"no\", \"Requires\": \"basic.target\", \"Restart\": \"always\", \"RestartUSec\": \"10s\", \"Result\": \"success\", \"RootDirectoryStartOnly\": \"no\", \"RuntimeDirectoryMode\": \"0755\", \"SameProcessGroup\": \"no\", \"SecureBits\": \"0\", \"SendSIGHUP\": \"no\", \"SendSIGKILL\": \"yes\", \"Slice\": \"system-ceph\\\\x5cx2dosd.slice\", \"StandardError\": \"inherit\", \"StandardInput\": \"null\", \"StandardOutput\": \"journal\", \"StartLimitAction\": \"none\", \"StartLimitBurst\": \"5\", \"StartLimitInterval\": \"10000000\", \"StartupBlockIOWeight\": \"18446744073709551615\", \"StartupCPUShares\": \"18446744073709551615\", \"StatusErrno\": \"0\", \"StopWhenUnneeded\": \"no\", \"SubState\": \"dead\", \"SyslogLevelPrefix\": \"yes\", \"SyslogPriority\": \"30\", \"SystemCallErrorNumber\": \"0\", \"TTYReset\": \"no\", \"TTYVHangup\": \"no\", \"TTYVTDisallocate\": \"no\", \"TasksAccounting\": \"no\", \"TasksCurrent\": \"18446744073709551615\", \"TasksMax\": \"18446744073709551615\", \"TimeoutStartUSec\": \"2min\", \"TimeoutStopUSec\": \"15s\", \"TimerSlackNSec\": \"50000\", \"Transient\": \"no\", \"Type\": \"simple\", \"UMask\": \"0022\", \"UnitFilePreset\": \"disabled\", \"UnitFileState\": \"disabled\", \"Wants\": \"system-ceph\\\\x5cx2dosd.slice\", \"WatchdogTimestampMonotonic\": \"0\", \"WatchdogUSec\": \"0\"}}\nchanged: [ceph-0] => (item=/dev/vdc) => {\"changed\": true, \"enabled\": true, \"item\": \"/dev/vdc\", \"name\": \"ceph-osd@vdc\", \"state\": \"started\", \"status\": {\"ActiveEnterTimestampMonotonic\": \"0\", \"ActiveExitTimestampMonotonic\": \"0\", \"ActiveState\": \"inactive\", \"After\": \"basic.target system-ceph\\\\x5cx2dosd.slice docker.service systemd-journald.socket\", \"AllowIsolate\": \"no\", \"AmbientCapabilities\": \"0\", \"AssertResult\": \"no\", \"AssertTimestampMonotonic\": \"0\", \"Before\": \"shutdown.target\", \"BlockIOAccounting\": \"no\", \"BlockIOWeight\": \"18446744073709551615\", \"CPUAccounting\": \"no\", \"CPUQuotaPerSecUSec\": \"infinity\", \"CPUSchedulingPolicy\": \"0\", \"CPUSchedulingPriority\": \"0\", \"CPUSchedulingResetOnFork\": \"no\", \"CPUShares\": \"18446744073709551615\", \"CanIsolate\": \"no\", \"CanReload\": \"no\", \"CanStart\": \"yes\", \"CanStop\": \"yes\", \"CapabilityBoundingSet\": \"18446744073709551615\", 
\"ConditionResult\": \"no\", \"ConditionTimestampMonotonic\": \"0\", \"Conflicts\": \"shutdown.target\", \"ControlPID\": \"0\", \"DefaultDependencies\": \"yes\", \"Delegate\": \"no\", \"Description\": \"Ceph OSD\", \"DevicePolicy\": \"auto\", \"EnvironmentFile\": \"/etc/environment (ignore_errors=yes)\", \"ExecMainCode\": \"0\", \"ExecMainExitTimestampMonotonic\": \"0\", \"ExecMainPID\": \"0\", \"ExecMainStartTimestampMonotonic\": \"0\", \"ExecMainStatus\": \"0\", \"ExecStart\": \"{ path=/usr/share/ceph-osd-run.sh ; argv[]=/usr/share/ceph-osd-run.sh %i ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStartPre\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker rm -f ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStop\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker stop ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"FailureAction\": \"none\", \"FileDescriptorStoreMax\": \"0\", \"FragmentPath\": \"/etc/systemd/system/ceph-osd@.service\", \"GuessMainPID\": \"yes\", \"IOScheduling\": \"0\", \"Id\": \"ceph-osd@vdc.service\", \"IgnoreOnIsolate\": \"no\", \"IgnoreOnSnapshot\": \"no\", \"IgnoreSIGPIPE\": \"yes\", \"InactiveEnterTimestampMonotonic\": \"0\", \"InactiveExitTimestampMonotonic\": \"0\", \"JobTimeoutAction\": \"none\", \"JobTimeoutUSec\": \"0\", \"KillMode\": \"control-group\", \"KillSignal\": \"15\", \"LimitAS\": \"18446744073709551615\", \"LimitCORE\": \"18446744073709551615\", \"LimitCPU\": \"18446744073709551615\", \"LimitDATA\": \"18446744073709551615\", \"LimitFSIZE\": \"18446744073709551615\", \"LimitLOCKS\": \"18446744073709551615\", \"LimitMEMLOCK\": \"65536\", \"LimitMSGQUEUE\": \"819200\", \"LimitNICE\": \"0\", \"LimitNOFILE\": \"4096\", \"LimitNPROC\": \"22973\", \"LimitRSS\": \"18446744073709551615\", \"LimitRTPRIO\": \"0\", \"LimitRTTIME\": \"18446744073709551615\", \"LimitSIGPENDING\": \"22973\", \"LimitSTACK\": \"18446744073709551615\", \"LoadState\": \"loaded\", \"MainPID\": \"0\", \"MemoryAccounting\": \"no\", \"MemoryCurrent\": \"18446744073709551615\", \"MemoryLimit\": \"18446744073709551615\", \"MountFlags\": \"0\", \"Names\": \"ceph-osd@vdc.service\", \"NeedDaemonReload\": \"no\", \"Nice\": \"0\", \"NoNewPrivileges\": \"no\", \"NonBlocking\": \"no\", \"NotifyAccess\": \"none\", \"OOMScoreAdjust\": \"0\", \"OnFailureJobMode\": \"replace\", \"PermissionsStartOnly\": \"no\", \"PrivateDevices\": \"no\", \"PrivateNetwork\": \"no\", \"PrivateTmp\": \"no\", \"ProtectHome\": \"no\", \"ProtectSystem\": \"no\", \"RefuseManualStart\": \"no\", \"RefuseManualStop\": \"no\", \"RemainAfterExit\": \"no\", \"Requires\": \"basic.target\", \"Restart\": \"always\", \"RestartUSec\": \"10s\", \"Result\": \"success\", \"RootDirectoryStartOnly\": \"no\", \"RuntimeDirectoryMode\": \"0755\", \"SameProcessGroup\": \"no\", \"SecureBits\": \"0\", \"SendSIGHUP\": \"no\", \"SendSIGKILL\": \"yes\", \"Slice\": \"system-ceph\\\\x5cx2dosd.slice\", \"StandardError\": \"inherit\", \"StandardInput\": \"null\", \"StandardOutput\": \"journal\", \"StartLimitAction\": \"none\", \"StartLimitBurst\": \"5\", \"StartLimitInterval\": \"10000000\", \"StartupBlockIOWeight\": \"18446744073709551615\", \"StartupCPUShares\": \"18446744073709551615\", \"StatusErrno\": \"0\", \"StopWhenUnneeded\": \"no\", \"SubState\": \"dead\", \"SyslogLevelPrefix\": \"yes\", \"SyslogPriority\": \"30\", \"SystemCallErrorNumber\": 
\"0\", \"TTYReset\": \"no\", \"TTYVHangup\": \"no\", \"TTYVTDisallocate\": \"no\", \"TasksAccounting\": \"no\", \"TasksCurrent\": \"18446744073709551615\", \"TasksMax\": \"18446744073709551615\", \"TimeoutStartUSec\": \"2min\", \"TimeoutStopUSec\": \"15s\", \"TimerSlackNSec\": \"50000\", \"Transient\": \"no\", \"Type\": \"simple\", \"UMask\": \"0022\", \"UnitFilePreset\": \"disabled\", \"UnitFileState\": \"enabled\", \"Wants\": \"system-ceph\\\\x5cx2dosd.slice\", \"WatchdogTimestampMonotonic\": \"0\", \"WatchdogUSec\": \"0\"}}\nchanged: [ceph-0] => (item=/dev/vdd) => {\"changed\": true, \"enabled\": true, \"item\": \"/dev/vdd\", \"name\": \"ceph-osd@vdd\", \"state\": \"started\", \"status\": {\"ActiveEnterTimestampMonotonic\": \"0\", \"ActiveExitTimestampMonotonic\": \"0\", \"ActiveState\": \"inactive\", \"After\": \"basic.target system-ceph\\\\x5cx2dosd.slice docker.service systemd-journald.socket\", \"AllowIsolate\": \"no\", \"AmbientCapabilities\": \"0\", \"AssertResult\": \"no\", \"AssertTimestampMonotonic\": \"0\", \"Before\": \"shutdown.target\", \"BlockIOAccounting\": \"no\", \"BlockIOWeight\": \"18446744073709551615\", \"CPUAccounting\": \"no\", \"CPUQuotaPerSecUSec\": \"infinity\", \"CPUSchedulingPolicy\": \"0\", \"CPUSchedulingPriority\": \"0\", \"CPUSchedulingResetOnFork\": \"no\", \"CPUShares\": \"18446744073709551615\", \"CanIsolate\": \"no\", \"CanReload\": \"no\", \"CanStart\": \"yes\", \"CanStop\": \"yes\", \"CapabilityBoundingSet\": \"18446744073709551615\", \"ConditionResult\": \"no\", \"ConditionTimestampMonotonic\": \"0\", \"Conflicts\": \"shutdown.target\", \"ControlPID\": \"0\", \"DefaultDependencies\": \"yes\", \"Delegate\": \"no\", \"Description\": \"Ceph OSD\", \"DevicePolicy\": \"auto\", \"EnvironmentFile\": \"/etc/environment (ignore_errors=yes)\", \"ExecMainCode\": \"0\", \"ExecMainExitTimestampMonotonic\": \"0\", \"ExecMainPID\": \"0\", \"ExecMainStartTimestampMonotonic\": \"0\", \"ExecMainStatus\": \"0\", \"ExecStart\": \"{ path=/usr/share/ceph-osd-run.sh ; argv[]=/usr/share/ceph-osd-run.sh %i ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStartPre\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker rm -f ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStop\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker stop ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"FailureAction\": \"none\", \"FileDescriptorStoreMax\": \"0\", \"FragmentPath\": \"/etc/systemd/system/ceph-osd@.service\", \"GuessMainPID\": \"yes\", \"IOScheduling\": \"0\", \"Id\": \"ceph-osd@vdd.service\", \"IgnoreOnIsolate\": \"no\", \"IgnoreOnSnapshot\": \"no\", \"IgnoreSIGPIPE\": \"yes\", \"InactiveEnterTimestampMonotonic\": \"0\", \"InactiveExitTimestampMonotonic\": \"0\", \"JobTimeoutAction\": \"none\", \"JobTimeoutUSec\": \"0\", \"KillMode\": \"control-group\", \"KillSignal\": \"15\", \"LimitAS\": \"18446744073709551615\", \"LimitCORE\": \"18446744073709551615\", \"LimitCPU\": \"18446744073709551615\", \"LimitDATA\": \"18446744073709551615\", \"LimitFSIZE\": \"18446744073709551615\", \"LimitLOCKS\": \"18446744073709551615\", \"LimitMEMLOCK\": \"65536\", \"LimitMSGQUEUE\": \"819200\", \"LimitNICE\": \"0\", \"LimitNOFILE\": \"4096\", \"LimitNPROC\": \"22973\", \"LimitRSS\": \"18446744073709551615\", \"LimitRTPRIO\": \"0\", \"LimitRTTIME\": \"18446744073709551615\", \"LimitSIGPENDING\": \"22973\", 
\"LimitSTACK\": \"18446744073709551615\", \"LoadState\": \"loaded\", \"MainPID\": \"0\", \"MemoryAccounting\": \"no\", \"MemoryCurrent\": \"18446744073709551615\", \"MemoryLimit\": \"18446744073709551615\", \"MountFlags\": \"0\", \"Names\": \"ceph-osd@vdd.service\", \"NeedDaemonReload\": \"no\", \"Nice\": \"0\", \"NoNewPrivileges\": \"no\", \"NonBlocking\": \"no\", \"NotifyAccess\": \"none\", \"OOMScoreAdjust\": \"0\", \"OnFailureJobMode\": \"replace\", \"PermissionsStartOnly\": \"no\", \"PrivateDevices\": \"no\", \"PrivateNetwork\": \"no\", \"PrivateTmp\": \"no\", \"ProtectHome\": \"no\", \"ProtectSystem\": \"no\", \"RefuseManualStart\": \"no\", \"RefuseManualStop\": \"no\", \"RemainAfterExit\": \"no\", \"Requires\": \"basic.target\", \"Restart\": \"always\", \"RestartUSec\": \"10s\", \"Result\": \"success\", \"RootDirectoryStartOnly\": \"no\", \"RuntimeDirectoryMode\": \"0755\", \"SameProcessGroup\": \"no\", \"SecureBits\": \"0\", \"SendSIGHUP\": \"no\", \"SendSIGKILL\": \"yes\", \"Slice\": \"system-ceph\\\\x5cx2dosd.slice\", \"StandardError\": \"inherit\", \"StandardInput\": \"null\", \"StandardOutput\": \"journal\", \"StartLimitAction\": \"none\", \"StartLimitBurst\": \"5\", \"StartLimitInterval\": \"10000000\", \"StartupBlockIOWeight\": \"18446744073709551615\", \"StartupCPUShares\": \"18446744073709551615\", \"StatusErrno\": \"0\", \"StopWhenUnneeded\": \"no\", \"SubState\": \"dead\", \"SyslogLevelPrefix\": \"yes\", \"SyslogPriority\": \"30\", \"SystemCallErrorNumber\": \"0\", \"TTYReset\": \"no\", \"TTYVHangup\": \"no\", \"TTYVTDisallocate\": \"no\", \"TasksAccounting\": \"no\", \"TasksCurrent\": \"18446744073709551615\", \"TasksMax\": \"18446744073709551615\", \"TimeoutStartUSec\": \"2min\", \"TimeoutStopUSec\": \"15s\", \"TimerSlackNSec\": \"50000\", \"Transient\": \"no\", \"Type\": \"simple\", \"UMask\": \"0022\", \"UnitFilePreset\": \"disabled\", \"UnitFileState\": \"enabled\", \"Wants\": \"system-ceph\\\\x5cx2dosd.slice\", \"WatchdogTimestampMonotonic\": \"0\", \"WatchdogUSec\": \"0\"}}\nchanged: [ceph-0] => (item=/dev/vde) => {\"changed\": true, \"enabled\": true, \"item\": \"/dev/vde\", \"name\": \"ceph-osd@vde\", \"state\": \"started\", \"status\": {\"ActiveEnterTimestampMonotonic\": \"0\", \"ActiveExitTimestampMonotonic\": \"0\", \"ActiveState\": \"inactive\", \"After\": \"docker.service systemd-journald.socket basic.target system-ceph\\\\x5cx2dosd.slice\", \"AllowIsolate\": \"no\", \"AmbientCapabilities\": \"0\", \"AssertResult\": \"no\", \"AssertTimestampMonotonic\": \"0\", \"Before\": \"shutdown.target\", \"BlockIOAccounting\": \"no\", \"BlockIOWeight\": \"18446744073709551615\", \"CPUAccounting\": \"no\", \"CPUQuotaPerSecUSec\": \"infinity\", \"CPUSchedulingPolicy\": \"0\", \"CPUSchedulingPriority\": \"0\", \"CPUSchedulingResetOnFork\": \"no\", \"CPUShares\": \"18446744073709551615\", \"CanIsolate\": \"no\", \"CanReload\": \"no\", \"CanStart\": \"yes\", \"CanStop\": \"yes\", \"CapabilityBoundingSet\": \"18446744073709551615\", \"ConditionResult\": \"no\", \"ConditionTimestampMonotonic\": \"0\", \"Conflicts\": \"shutdown.target\", \"ControlPID\": \"0\", \"DefaultDependencies\": \"yes\", \"Delegate\": \"no\", \"Description\": \"Ceph OSD\", \"DevicePolicy\": \"auto\", \"EnvironmentFile\": \"/etc/environment (ignore_errors=yes)\", \"ExecMainCode\": \"0\", \"ExecMainExitTimestampMonotonic\": \"0\", \"ExecMainPID\": \"0\", \"ExecMainStartTimestampMonotonic\": \"0\", \"ExecMainStatus\": \"0\", \"ExecStart\": \"{ path=/usr/share/ceph-osd-run.sh ; 
argv[]=/usr/share/ceph-osd-run.sh %i ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStartPre\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker rm -f ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStop\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker stop ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"FailureAction\": \"none\", \"FileDescriptorStoreMax\": \"0\", \"FragmentPath\": \"/etc/systemd/system/ceph-osd@.service\", \"GuessMainPID\": \"yes\", \"IOScheduling\": \"0\", \"Id\": \"ceph-osd@vde.service\", \"IgnoreOnIsolate\": \"no\", \"IgnoreOnSnapshot\": \"no\", \"IgnoreSIGPIPE\": \"yes\", \"InactiveEnterTimestampMonotonic\": \"0\", \"InactiveExitTimestampMonotonic\": \"0\", \"JobTimeoutAction\": \"none\", \"JobTimeoutUSec\": \"0\", \"KillMode\": \"control-group\", \"KillSignal\": \"15\", \"LimitAS\": \"18446744073709551615\", \"LimitCORE\": \"18446744073709551615\", \"LimitCPU\": \"18446744073709551615\", \"LimitDATA\": \"18446744073709551615\", \"LimitFSIZE\": \"18446744073709551615\", \"LimitLOCKS\": \"18446744073709551615\", \"LimitMEMLOCK\": \"65536\", \"LimitMSGQUEUE\": \"819200\", \"LimitNICE\": \"0\", \"LimitNOFILE\": \"4096\", \"LimitNPROC\": \"22973\", \"LimitRSS\": \"18446744073709551615\", \"LimitRTPRIO\": \"0\", \"LimitRTTIME\": \"18446744073709551615\", \"LimitSIGPENDING\": \"22973\", \"LimitSTACK\": \"18446744073709551615\", \"LoadState\": \"loaded\", \"MainPID\": \"0\", \"MemoryAccounting\": \"no\", \"MemoryCurrent\": \"18446744073709551615\", \"MemoryLimit\": \"18446744073709551615\", \"MountFlags\": \"0\", \"Names\": \"ceph-osd@vde.service\", \"NeedDaemonReload\": \"no\", \"Nice\": \"0\", \"NoNewPrivileges\": \"no\", \"NonBlocking\": \"no\", \"NotifyAccess\": \"none\", \"OOMScoreAdjust\": \"0\", \"OnFailureJobMode\": \"replace\", \"PermissionsStartOnly\": \"no\", \"PrivateDevices\": \"no\", \"PrivateNetwork\": \"no\", \"PrivateTmp\": \"no\", \"ProtectHome\": \"no\", \"ProtectSystem\": \"no\", \"RefuseManualStart\": \"no\", \"RefuseManualStop\": \"no\", \"RemainAfterExit\": \"no\", \"Requires\": \"basic.target\", \"Restart\": \"always\", \"RestartUSec\": \"10s\", \"Result\": \"success\", \"RootDirectoryStartOnly\": \"no\", \"RuntimeDirectoryMode\": \"0755\", \"SameProcessGroup\": \"no\", \"SecureBits\": \"0\", \"SendSIGHUP\": \"no\", \"SendSIGKILL\": \"yes\", \"Slice\": \"system-ceph\\\\x5cx2dosd.slice\", \"StandardError\": \"inherit\", \"StandardInput\": \"null\", \"StandardOutput\": \"journal\", \"StartLimitAction\": \"none\", \"StartLimitBurst\": \"5\", \"StartLimitInterval\": \"10000000\", \"StartupBlockIOWeight\": \"18446744073709551615\", \"StartupCPUShares\": \"18446744073709551615\", \"StatusErrno\": \"0\", \"StopWhenUnneeded\": \"no\", \"SubState\": \"dead\", \"SyslogLevelPrefix\": \"yes\", \"SyslogPriority\": \"30\", \"SystemCallErrorNumber\": \"0\", \"TTYReset\": \"no\", \"TTYVHangup\": \"no\", \"TTYVTDisallocate\": \"no\", \"TasksAccounting\": \"no\", \"TasksCurrent\": \"18446744073709551615\", \"TasksMax\": \"18446744073709551615\", \"TimeoutStartUSec\": \"2min\", \"TimeoutStopUSec\": \"15s\", \"TimerSlackNSec\": \"50000\", \"Transient\": \"no\", \"Type\": \"simple\", \"UMask\": \"0022\", \"UnitFilePreset\": \"disabled\", \"UnitFileState\": \"enabled\", \"Wants\": \"system-ceph\\\\x5cx2dosd.slice\", \"WatchdogTimestampMonotonic\": \"0\", \"WatchdogUSec\": 
\"0\"}}\nchanged: [ceph-0] => (item=/dev/vdf) => {\"changed\": true, \"enabled\": true, \"item\": \"/dev/vdf\", \"name\": \"ceph-osd@vdf\", \"state\": \"started\", \"status\": {\"ActiveEnterTimestampMonotonic\": \"0\", \"ActiveExitTimestampMonotonic\": \"0\", \"ActiveState\": \"inactive\", \"After\": \"system-ceph\\\\x5cx2dosd.slice systemd-journald.socket docker.service basic.target\", \"AllowIsolate\": \"no\", \"AmbientCapabilities\": \"0\", \"AssertResult\": \"no\", \"AssertTimestampMonotonic\": \"0\", \"Before\": \"shutdown.target\", \"BlockIOAccounting\": \"no\", \"BlockIOWeight\": \"18446744073709551615\", \"CPUAccounting\": \"no\", \"CPUQuotaPerSecUSec\": \"infinity\", \"CPUSchedulingPolicy\": \"0\", \"CPUSchedulingPriority\": \"0\", \"CPUSchedulingResetOnFork\": \"no\", \"CPUShares\": \"18446744073709551615\", \"CanIsolate\": \"no\", \"CanReload\": \"no\", \"CanStart\": \"yes\", \"CanStop\": \"yes\", \"CapabilityBoundingSet\": \"18446744073709551615\", \"ConditionResult\": \"no\", \"ConditionTimestampMonotonic\": \"0\", \"Conflicts\": \"shutdown.target\", \"ControlPID\": \"0\", \"DefaultDependencies\": \"yes\", \"Delegate\": \"no\", \"Description\": \"Ceph OSD\", \"DevicePolicy\": \"auto\", \"EnvironmentFile\": \"/etc/environment (ignore_errors=yes)\", \"ExecMainCode\": \"0\", \"ExecMainExitTimestampMonotonic\": \"0\", \"ExecMainPID\": \"0\", \"ExecMainStartTimestampMonotonic\": \"0\", \"ExecMainStatus\": \"0\", \"ExecStart\": \"{ path=/usr/share/ceph-osd-run.sh ; argv[]=/usr/share/ceph-osd-run.sh %i ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStartPre\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker rm -f ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStop\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker stop ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"FailureAction\": \"none\", \"FileDescriptorStoreMax\": \"0\", \"FragmentPath\": \"/etc/systemd/system/ceph-osd@.service\", \"GuessMainPID\": \"yes\", \"IOScheduling\": \"0\", \"Id\": \"ceph-osd@vdf.service\", \"IgnoreOnIsolate\": \"no\", \"IgnoreOnSnapshot\": \"no\", \"IgnoreSIGPIPE\": \"yes\", \"InactiveEnterTimestampMonotonic\": \"0\", \"InactiveExitTimestampMonotonic\": \"0\", \"JobTimeoutAction\": \"none\", \"JobTimeoutUSec\": \"0\", \"KillMode\": \"control-group\", \"KillSignal\": \"15\", \"LimitAS\": \"18446744073709551615\", \"LimitCORE\": \"18446744073709551615\", \"LimitCPU\": \"18446744073709551615\", \"LimitDATA\": \"18446744073709551615\", \"LimitFSIZE\": \"18446744073709551615\", \"LimitLOCKS\": \"18446744073709551615\", \"LimitMEMLOCK\": \"65536\", \"LimitMSGQUEUE\": \"819200\", \"LimitNICE\": \"0\", \"LimitNOFILE\": \"4096\", \"LimitNPROC\": \"22973\", \"LimitRSS\": \"18446744073709551615\", \"LimitRTPRIO\": \"0\", \"LimitRTTIME\": \"18446744073709551615\", \"LimitSIGPENDING\": \"22973\", \"LimitSTACK\": \"18446744073709551615\", \"LoadState\": \"loaded\", \"MainPID\": \"0\", \"MemoryAccounting\": \"no\", \"MemoryCurrent\": \"18446744073709551615\", \"MemoryLimit\": \"18446744073709551615\", \"MountFlags\": \"0\", \"Names\": \"ceph-osd@vdf.service\", \"NeedDaemonReload\": \"no\", \"Nice\": \"0\", \"NoNewPrivileges\": \"no\", \"NonBlocking\": \"no\", \"NotifyAccess\": \"none\", \"OOMScoreAdjust\": \"0\", \"OnFailureJobMode\": \"replace\", \"PermissionsStartOnly\": \"no\", \"PrivateDevices\": \"no\", 
\"PrivateNetwork\": \"no\", \"PrivateTmp\": \"no\", \"ProtectHome\": \"no\", \"ProtectSystem\": \"no\", \"RefuseManualStart\": \"no\", \"RefuseManualStop\": \"no\", \"RemainAfterExit\": \"no\", \"Requires\": \"basic.target\", \"Restart\": \"always\", \"RestartUSec\": \"10s\", \"Result\": \"success\", \"RootDirectoryStartOnly\": \"no\", \"RuntimeDirectoryMode\": \"0755\", \"SameProcessGroup\": \"no\", \"SecureBits\": \"0\", \"SendSIGHUP\": \"no\", \"SendSIGKILL\": \"yes\", \"Slice\": \"system-ceph\\\\x5cx2dosd.slice\", \"StandardError\": \"inherit\", \"StandardInput\": \"null\", \"StandardOutput\": \"journal\", \"StartLimitAction\": \"none\", \"StartLimitBurst\": \"5\", \"StartLimitInterval\": \"10000000\", \"StartupBlockIOWeight\": \"18446744073709551615\", \"StartupCPUShares\": \"18446744073709551615\", \"StatusErrno\": \"0\", \"StopWhenUnneeded\": \"no\", \"SubState\": \"dead\", \"SyslogLevelPrefix\": \"yes\", \"SyslogPriority\": \"30\", \"SystemCallErrorNumber\": \"0\", \"TTYReset\": \"no\", \"TTYVHangup\": \"no\", \"TTYVTDisallocate\": \"no\", \"TasksAccounting\": \"no\", \"TasksCurrent\": \"18446744073709551615\", \"TasksMax\": \"18446744073709551615\", \"TimeoutStartUSec\": \"2min\", \"TimeoutStopUSec\": \"15s\", \"TimerSlackNSec\": \"50000\", \"Transient\": \"no\", \"Type\": \"simple\", \"UMask\": \"0022\", \"UnitFilePreset\": \"disabled\", \"UnitFileState\": \"enabled\", \"Wants\": \"system-ceph\\\\x5cx2dosd.slice\", \"WatchdogTimestampMonotonic\": \"0\", \"WatchdogUSec\": \"0\"}}\n\nTASK [ceph-osd : set_fact openstack_keys_tmp - preserve backward compatibility after the introduction of the ceph_keys module] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:87\nFriday 21 September 2018 08:29:24 -0400 (0:00:03.074) 0:03:06.339 ****** \nskipping: [ceph-0] => (item={u'caps': {u'mgr': u'allow *', u'mon': u'profile rbd', u'osd': u'profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics'}, u'mode': u'0600', u'key': u'AQC93KRbAAAAABAA70vXmXELJWdqPtg4IeQHzw==', u'name': u'client.openstack'}) => {\"changed\": false, \"item\": {\"caps\": {\"mgr\": \"allow *\", \"mon\": \"profile rbd\", \"osd\": \"profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics\"}, \"key\": \"AQC93KRbAAAAABAA70vXmXELJWdqPtg4IeQHzw==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => (item={u'caps': {u'mds': u'allow *', u'osd': u'allow rw', u'mgr': u'allow *', u'mon': u\"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'\"}, u'mode': u'0600', u'key': u'AQC93KRbAAAAABAAh3WQEraYvoGJBcWWeAglYg==', u'name': u'client.manila'}) => {\"changed\": false, \"item\": {\"caps\": {\"mds\": \"allow *\", \"mgr\": \"allow *\", \"mon\": \"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'\", \"osd\": \"allow rw\"}, \"key\": \"AQC93KRbAAAAABAAh3WQEraYvoGJBcWWeAglYg==\", \"mode\": \"0600\", \"name\": \"client.manila\"}, \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => (item={u'caps': {u'mgr': u'allow *', u'mon': u'allow rw', u'osd': u'allow rwx'}, u'mode': u'0600', u'key': u'AQC93KRbAAAAABAAJK/AdOCua9UOcCGeve+zYA==', u'name': u'client.radosgw'}) => {\"changed\": false, \"item\": {\"caps\": {\"mgr\": \"allow *\", \"mon\": \"allow rw\", 
\"osd\": \"allow rwx\"}, \"key\": \"AQC93KRbAAAAABAAJK/AdOCua9UOcCGeve+zYA==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : set_fact keys - override keys_tmp with keys] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:95\nFriday 21 September 2018 08:29:24 -0400 (0:00:00.071) 0:03:06.410 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : wait for all osd to be up] ************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:2\nFriday 21 September 2018 08:29:24 -0400 (0:00:00.069) 0:03:06.479 ****** \nchanged: [ceph-0 -> 192.168.24.18] => {\"attempts\": 1, \"changed\": true, \"cmd\": \"test \\\"$(docker exec ceph-mon-controller-0 ceph --cluster ceph -s -f json | python -c 'import sys, json; print(json.load(sys.stdin)[\\\"osdmap\\\"][\\\"osdmap\\\"][\\\"num_osds\\\"])')\\\" = \\\"$(docker exec ceph-mon-controller-0 ceph --cluster ceph -s -f json | python -c 'import sys, json; print(json.load(sys.stdin)[\\\"osdmap\\\"][\\\"osdmap\\\"][\\\"num_up_osds\\\"])')\\\"\", \"delta\": \"0:00:00.807354\", \"end\": \"2018-09-21 12:29:25.469869\", \"rc\": 0, \"start\": \"2018-09-21 12:29:24.662515\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}\n\nTASK [ceph-osd : list existing pool(s)] ****************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:12\nFriday 21 September 2018 08:29:25 -0400 (0:00:01.194) 0:03:07.674 ****** \nchanged: [ceph-0 -> 192.168.24.18] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'images', u'rule_name': u'replicated_rule'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"images\", \"size\"], \"delta\": \"0:00:00.373530\", \"end\": \"2018-09-21 12:29:26.209812\", \"failed_when_result\": false, \"item\": {\"application\": \"rbd\", \"name\": \"images\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-09-21 12:29:25.836282\", \"stderr\": \"Error ENOENT: unrecognized pool 'images'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'images'\"], \"stdout\": \"\", \"stdout_lines\": []}\nchanged: [ceph-0 -> 192.168.24.18] => (item={u'application': u'openstack_gnocchi', u'pg_num': 32, u'name': u'metrics', u'rule_name': u'replicated_rule'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"metrics\", \"size\"], \"delta\": \"0:00:00.395786\", \"end\": \"2018-09-21 12:29:26.822972\", \"failed_when_result\": false, \"item\": {\"application\": \"openstack_gnocchi\", \"name\": \"metrics\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-09-21 12:29:26.427186\", \"stderr\": \"Error ENOENT: unrecognized pool 'metrics'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'metrics'\"], \"stdout\": \"\", \"stdout_lines\": []}\nchanged: [ceph-0 -> 192.168.24.18] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'backups', u'rule_name': u'replicated_rule'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"backups\", 
\"size\"], \"delta\": \"0:00:00.346406\", \"end\": \"2018-09-21 12:29:27.374763\", \"failed_when_result\": false, \"item\": {\"application\": \"rbd\", \"name\": \"backups\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-09-21 12:29:27.028357\", \"stderr\": \"Error ENOENT: unrecognized pool 'backups'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'backups'\"], \"stdout\": \"\", \"stdout_lines\": []}\nchanged: [ceph-0 -> 192.168.24.18] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'vms', u'rule_name': u'replicated_rule'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"vms\", \"size\"], \"delta\": \"0:00:00.341163\", \"end\": \"2018-09-21 12:29:27.916597\", \"failed_when_result\": false, \"item\": {\"application\": \"rbd\", \"name\": \"vms\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-09-21 12:29:27.575434\", \"stderr\": \"Error ENOENT: unrecognized pool 'vms'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'vms'\"], \"stdout\": \"\", \"stdout_lines\": []}\nchanged: [ceph-0 -> 192.168.24.18] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'volumes', u'rule_name': u'replicated_rule'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"volumes\", \"size\"], \"delta\": \"0:00:00.378386\", \"end\": \"2018-09-21 12:29:28.507274\", \"failed_when_result\": false, \"item\": {\"application\": \"rbd\", \"name\": \"volumes\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-09-21 12:29:28.128888\", \"stderr\": \"Error ENOENT: unrecognized pool 'volumes'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'volumes'\"], \"stdout\": \"\", \"stdout_lines\": []}\n\nTASK [ceph-osd : set_fact rule_name before luminous] ***************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:21\nFriday 21 September 2018 08:29:28 -0400 (0:00:02.999) 0:03:10.674 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : set_fact rule_name from luminous] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:28\nFriday 21 September 2018 08:29:28 -0400 (0:00:00.057) 0:03:10.731 ****** \nok: [ceph-0] => {\"ansible_facts\": {\"rule_name\": \"replicated_rule\"}, \"changed\": false}\n\nTASK [ceph-osd : create openstack pool(s)] *************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:35\nFriday 21 September 2018 08:29:28 -0400 (0:00:00.137) 0:03:10.868 ****** \nok: [ceph-0 -> 192.168.24.18] => (item=[{u'application': u'rbd', u'pg_num': 32, u'name': u'images', u'rule_name': u'replicated_rule'}, {'_ansible_parsed': True, 'stderr_lines': [u\"Error ENOENT: unrecognized pool 'images'\"], u'cmd': [u'docker', u'exec', u'ceph-mon-controller-0', u'ceph', u'--cluster', u'ceph', u'osd', u'pool', u'get', u'images', u'size'], u'end': u'2018-09-21 12:29:26.209812', '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'controller-0', 'ansible_host': u'192.168.24.18'}, '_ansible_item_result': True, u'changed': True, u'invocation': {u'module_args': 
{u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get images size', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, u'stdout': u'', 'item': {u'application': u'rbd', u'pg_num': 32, u'name': u'images', u'rule_name': u'replicated_rule'}, u'delta': u'0:00:00.373530', '_ansible_item_label': {u'application': u'rbd', u'pg_num': 32, u'name': u'images', u'rule_name': u'replicated_rule'}, u'stderr': u\"Error ENOENT: unrecognized pool 'images'\", u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'start': u'2018-09-21 12:29:25.836282', '_ansible_ignore_errors': None, u'failed': False}]) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"create\", \"images\", \"32\", \"32\", \"replicated_rule\", \"1\"], \"delta\": \"0:00:01.252598\", \"end\": \"2018-09-21 12:29:30.267578\", \"item\": [{\"application\": \"rbd\", \"name\": \"images\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"controller-0\", \"ansible_host\": \"192.168.24.18\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": {\"application\": \"rbd\", \"name\": \"images\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"images\", \"size\"], \"delta\": \"0:00:00.373530\", \"end\": \"2018-09-21 12:29:26.209812\", \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get images size\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": {\"application\": \"rbd\", \"name\": \"images\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-09-21 12:29:25.836282\", \"stderr\": \"Error ENOENT: unrecognized pool 'images'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'images'\"], \"stdout\": \"\", \"stdout_lines\": []}], \"rc\": 0, \"start\": \"2018-09-21 12:29:29.014980\", \"stderr\": \"pool 'images' created\", \"stderr_lines\": [\"pool 'images' created\"], \"stdout\": \"\", \"stdout_lines\": []}\nok: [ceph-0 -> 192.168.24.18] => (item=[{u'application': u'openstack_gnocchi', u'pg_num': 32, u'name': u'metrics', u'rule_name': u'replicated_rule'}, {'_ansible_parsed': True, 'stderr_lines': [u\"Error ENOENT: unrecognized pool 'metrics'\"], u'cmd': [u'docker', u'exec', u'ceph-mon-controller-0', u'ceph', u'--cluster', u'ceph', u'osd', u'pool', u'get', u'metrics', u'size'], u'end': u'2018-09-21 12:29:26.822972', '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'controller-0', 'ansible_host': u'192.168.24.18'}, '_ansible_item_result': True, u'changed': True, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get metrics size', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, u'stdout': u'', 'item': {u'application': u'openstack_gnocchi', u'pg_num': 32, u'name': u'metrics', 
u'rule_name': u'replicated_rule'}, u'delta': u'0:00:00.395786', '_ansible_item_label': {u'application': u'openstack_gnocchi', u'pg_num': 32, u'name': u'metrics', u'rule_name': u'replicated_rule'}, u'stderr': u\"Error ENOENT: unrecognized pool 'metrics'\", u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'start': u'2018-09-21 12:29:26.427186', '_ansible_ignore_errors': None, u'failed': False}]) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"create\", \"metrics\", \"32\", \"32\", \"replicated_rule\", \"1\"], \"delta\": \"0:00:00.955150\", \"end\": \"2018-09-21 12:29:31.438680\", \"item\": [{\"application\": \"openstack_gnocchi\", \"name\": \"metrics\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"controller-0\", \"ansible_host\": \"192.168.24.18\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": {\"application\": \"openstack_gnocchi\", \"name\": \"metrics\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"metrics\", \"size\"], \"delta\": \"0:00:00.395786\", \"end\": \"2018-09-21 12:29:26.822972\", \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get metrics size\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": {\"application\": \"openstack_gnocchi\", \"name\": \"metrics\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-09-21 12:29:26.427186\", \"stderr\": \"Error ENOENT: unrecognized pool 'metrics'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'metrics'\"], \"stdout\": \"\", \"stdout_lines\": []}], \"rc\": 0, \"start\": \"2018-09-21 12:29:30.483530\", \"stderr\": \"pool 'metrics' created\", \"stderr_lines\": [\"pool 'metrics' created\"], \"stdout\": \"\", \"stdout_lines\": []}\nok: [ceph-0 -> 192.168.24.18] => (item=[{u'application': u'rbd', u'pg_num': 32, u'name': u'backups', u'rule_name': u'replicated_rule'}, {'_ansible_parsed': True, 'stderr_lines': [u\"Error ENOENT: unrecognized pool 'backups'\"], u'cmd': [u'docker', u'exec', u'ceph-mon-controller-0', u'ceph', u'--cluster', u'ceph', u'osd', u'pool', u'get', u'backups', u'size'], u'end': u'2018-09-21 12:29:27.374763', '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'controller-0', 'ansible_host': u'192.168.24.18'}, '_ansible_item_result': True, u'changed': True, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get backups size', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, u'stdout': u'', 'item': {u'application': u'rbd', u'pg_num': 32, u'name': u'backups', u'rule_name': u'replicated_rule'}, u'delta': u'0:00:00.346406', '_ansible_item_label': {u'application': u'rbd', u'pg_num': 32, u'name': u'backups', u'rule_name': u'replicated_rule'}, u'stderr': u\"Error ENOENT: unrecognized pool 'backups'\", u'rc': 2, u'msg': u'non-zero 
return code', 'stdout_lines': [], 'failed_when_result': False, u'start': u'2018-09-21 12:29:27.028357', '_ansible_ignore_errors': None, u'failed': False}]) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"create\", \"backups\", \"32\", \"32\", \"replicated_rule\", \"1\"], \"delta\": \"0:00:00.989689\", \"end\": \"2018-09-21 12:29:32.635602\", \"item\": [{\"application\": \"rbd\", \"name\": \"backups\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"controller-0\", \"ansible_host\": \"192.168.24.18\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": {\"application\": \"rbd\", \"name\": \"backups\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"backups\", \"size\"], \"delta\": \"0:00:00.346406\", \"end\": \"2018-09-21 12:29:27.374763\", \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get backups size\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": {\"application\": \"rbd\", \"name\": \"backups\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-09-21 12:29:27.028357\", \"stderr\": \"Error ENOENT: unrecognized pool 'backups'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'backups'\"], \"stdout\": \"\", \"stdout_lines\": []}], \"rc\": 0, \"start\": \"2018-09-21 12:29:31.645913\", \"stderr\": \"pool 'backups' created\", \"stderr_lines\": [\"pool 'backups' created\"], \"stdout\": \"\", \"stdout_lines\": []}\nok: [ceph-0 -> 192.168.24.18] => (item=[{u'application': u'rbd', u'pg_num': 32, u'name': u'vms', u'rule_name': u'replicated_rule'}, {'_ansible_parsed': True, 'stderr_lines': [u\"Error ENOENT: unrecognized pool 'vms'\"], u'cmd': [u'docker', u'exec', u'ceph-mon-controller-0', u'ceph', u'--cluster', u'ceph', u'osd', u'pool', u'get', u'vms', u'size'], u'end': u'2018-09-21 12:29:27.916597', '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'controller-0', 'ansible_host': u'192.168.24.18'}, '_ansible_item_result': True, u'changed': True, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get vms size', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, u'stdout': u'', 'item': {u'application': u'rbd', u'pg_num': 32, u'name': u'vms', u'rule_name': u'replicated_rule'}, u'delta': u'0:00:00.341163', '_ansible_item_label': {u'application': u'rbd', u'pg_num': 32, u'name': u'vms', u'rule_name': u'replicated_rule'}, u'stderr': u\"Error ENOENT: unrecognized pool 'vms'\", u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'start': u'2018-09-21 12:29:27.575434', '_ansible_ignore_errors': None, u'failed': False}]) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"create\", \"vms\", \"32\", \"32\", \"replicated_rule\", 
\"1\"], \"delta\": \"0:00:01.094027\", \"end\": \"2018-09-21 12:29:33.973841\", \"item\": [{\"application\": \"rbd\", \"name\": \"vms\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"controller-0\", \"ansible_host\": \"192.168.24.18\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": {\"application\": \"rbd\", \"name\": \"vms\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"vms\", \"size\"], \"delta\": \"0:00:00.341163\", \"end\": \"2018-09-21 12:29:27.916597\", \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get vms size\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": {\"application\": \"rbd\", \"name\": \"vms\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-09-21 12:29:27.575434\", \"stderr\": \"Error ENOENT: unrecognized pool 'vms'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'vms'\"], \"stdout\": \"\", \"stdout_lines\": []}], \"rc\": 0, \"start\": \"2018-09-21 12:29:32.879814\", \"stderr\": \"pool 'vms' created\", \"stderr_lines\": [\"pool 'vms' created\"], \"stdout\": \"\", \"stdout_lines\": []}\nok: [ceph-0 -> 192.168.24.18] => (item=[{u'application': u'rbd', u'pg_num': 32, u'name': u'volumes', u'rule_name': u'replicated_rule'}, {'_ansible_parsed': True, 'stderr_lines': [u\"Error ENOENT: unrecognized pool 'volumes'\"], u'cmd': [u'docker', u'exec', u'ceph-mon-controller-0', u'ceph', u'--cluster', u'ceph', u'osd', u'pool', u'get', u'volumes', u'size'], u'end': u'2018-09-21 12:29:28.507274', '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'controller-0', 'ansible_host': u'192.168.24.18'}, '_ansible_item_result': True, u'changed': True, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get volumes size', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, u'stdout': u'', 'item': {u'application': u'rbd', u'pg_num': 32, u'name': u'volumes', u'rule_name': u'replicated_rule'}, u'delta': u'0:00:00.378386', '_ansible_item_label': {u'application': u'rbd', u'pg_num': 32, u'name': u'volumes', u'rule_name': u'replicated_rule'}, u'stderr': u\"Error ENOENT: unrecognized pool 'volumes'\", u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'start': u'2018-09-21 12:29:28.128888', '_ansible_ignore_errors': None, u'failed': False}]) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"create\", \"volumes\", \"32\", \"32\", \"replicated_rule\", \"1\"], \"delta\": \"0:00:01.135967\", \"end\": \"2018-09-21 12:29:35.334959\", \"item\": [{\"application\": \"rbd\", \"name\": \"volumes\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"controller-0\", \"ansible_host\": \"192.168.24.18\"}, \"_ansible_ignore_errors\": null, 
\"_ansible_item_label\": {\"application\": \"rbd\", \"name\": \"volumes\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"volumes\", \"size\"], \"delta\": \"0:00:00.378386\", \"end\": \"2018-09-21 12:29:28.507274\", \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get volumes size\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": {\"application\": \"rbd\", \"name\": \"volumes\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-09-21 12:29:28.128888\", \"stderr\": \"Error ENOENT: unrecognized pool 'volumes'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'volumes'\"], \"stdout\": \"\", \"stdout_lines\": []}], \"rc\": 0, \"start\": \"2018-09-21 12:29:34.198992\", \"stderr\": \"pool 'volumes' created\", \"stderr_lines\": [\"pool 'volumes' created\"], \"stdout\": \"\", \"stdout_lines\": []}\n\nTASK [ceph-osd : assign application to pool(s)] ********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:55\nFriday 21 September 2018 08:29:35 -0400 (0:00:06.695) 0:03:17.564 ****** \nok: [ceph-0 -> 192.168.24.18] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'images', u'rule_name': u'replicated_rule'}) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"application\", \"enable\", \"images\", \"rbd\"], \"delta\": \"0:00:00.783800\", \"end\": \"2018-09-21 12:29:36.499330\", \"item\": {\"application\": \"rbd\", \"name\": \"images\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"rc\": 0, \"start\": \"2018-09-21 12:29:35.715530\", \"stderr\": \"enabled application 'rbd' on pool 'images'\", \"stderr_lines\": [\"enabled application 'rbd' on pool 'images'\"], \"stdout\": \"\", \"stdout_lines\": []}\nok: [ceph-0 -> 192.168.24.18] => (item={u'application': u'openstack_gnocchi', u'pg_num': 32, u'name': u'metrics', u'rule_name': u'replicated_rule'}) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"application\", \"enable\", \"metrics\", \"openstack_gnocchi\"], \"delta\": \"0:00:00.787285\", \"end\": \"2018-09-21 12:29:37.507380\", \"item\": {\"application\": \"openstack_gnocchi\", \"name\": \"metrics\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"rc\": 0, \"start\": \"2018-09-21 12:29:36.720095\", \"stderr\": \"enabled application 'openstack_gnocchi' on pool 'metrics'\", \"stderr_lines\": [\"enabled application 'openstack_gnocchi' on pool 'metrics'\"], \"stdout\": \"\", \"stdout_lines\": []}\nok: [ceph-0 -> 192.168.24.18] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'backups', u'rule_name': u'replicated_rule'}) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"application\", \"enable\", \"backups\", \"rbd\"], \"delta\": \"0:00:00.797293\", \"end\": \"2018-09-21 12:29:38.503833\", \"item\": {\"application\": \"rbd\", \"name\": 
\"backups\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"rc\": 0, \"start\": \"2018-09-21 12:29:37.706540\", \"stderr\": \"enabled application 'rbd' on pool 'backups'\", \"stderr_lines\": [\"enabled application 'rbd' on pool 'backups'\"], \"stdout\": \"\", \"stdout_lines\": []}\nok: [ceph-0 -> 192.168.24.18] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'vms', u'rule_name': u'replicated_rule'}) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"application\", \"enable\", \"vms\", \"rbd\"], \"delta\": \"0:00:00.799475\", \"end\": \"2018-09-21 12:29:39.498919\", \"item\": {\"application\": \"rbd\", \"name\": \"vms\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"rc\": 0, \"start\": \"2018-09-21 12:29:38.699444\", \"stderr\": \"enabled application 'rbd' on pool 'vms'\", \"stderr_lines\": [\"enabled application 'rbd' on pool 'vms'\"], \"stdout\": \"\", \"stdout_lines\": []}\nok: [ceph-0 -> 192.168.24.18] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'volumes', u'rule_name': u'replicated_rule'}) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"application\", \"enable\", \"volumes\", \"rbd\"], \"delta\": \"0:00:00.805109\", \"end\": \"2018-09-21 12:29:40.523665\", \"item\": {\"application\": \"rbd\", \"name\": \"volumes\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"rc\": 0, \"start\": \"2018-09-21 12:29:39.718556\", \"stderr\": \"enabled application 'rbd' on pool 'volumes'\", \"stderr_lines\": [\"enabled application 'rbd' on pool 'volumes'\"], \"stdout\": \"\", \"stdout_lines\": []}\n\nTASK [ceph-osd : create openstack cephx key(s)] ********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:64\nFriday 21 September 2018 08:29:40 -0400 (0:00:05.152) 0:03:22.716 ****** \nchanged: [ceph-0 -> 192.168.24.18] => (item={u'caps': {u'mgr': u'allow *', u'mon': u'profile rbd', u'osd': u'profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics'}, u'mode': u'0600', u'key': u'AQC93KRbAAAAABAA70vXmXELJWdqPtg4IeQHzw==', u'name': u'client.openstack'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"auth\", \"import\", \"-i\", \"/etc/ceph//ceph.client.openstack.keyring\"], \"delta\": \"0:00:00.825822\", \"end\": \"2018-09-21 12:29:41.920206\", \"item\": {\"caps\": {\"mgr\": \"allow *\", \"mon\": \"profile rbd\", \"osd\": \"profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics\"}, \"key\": \"AQC93KRbAAAAABAA70vXmXELJWdqPtg4IeQHzw==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, \"rc\": 0, \"start\": \"2018-09-21 12:29:41.094384\", \"stderr\": \"imported keyring\", \"stderr_lines\": [\"imported keyring\"], \"stdout\": \"\", \"stdout_lines\": []}\nchanged: [ceph-0 -> 192.168.24.18] => (item={u'caps': {u'mds': u'allow *', u'osd': u'allow rw', u'mgr': u'allow *', u'mon': u\"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'\"}, u'mode': u'0600', u'key': u'AQC93KRbAAAAABAAh3WQEraYvoGJBcWWeAglYg==', u'name': u'client.manila'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"auth\", 
\"import\", \"-i\", \"/etc/ceph//ceph.client.manila.keyring\"], \"delta\": \"0:00:00.890275\", \"end\": \"2018-09-21 12:29:43.020506\", \"item\": {\"caps\": {\"mds\": \"allow *\", \"mgr\": \"allow *\", \"mon\": \"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'\", \"osd\": \"allow rw\"}, \"key\": \"AQC93KRbAAAAABAAh3WQEraYvoGJBcWWeAglYg==\", \"mode\": \"0600\", \"name\": \"client.manila\"}, \"rc\": 0, \"start\": \"2018-09-21 12:29:42.130231\", \"stderr\": \"imported keyring\", \"stderr_lines\": [\"imported keyring\"], \"stdout\": \"\", \"stdout_lines\": []}\nchanged: [ceph-0 -> 192.168.24.18] => (item={u'caps': {u'mgr': u'allow *', u'mon': u'allow rw', u'osd': u'allow rwx'}, u'mode': u'0600', u'key': u'AQC93KRbAAAAABAAJK/AdOCua9UOcCGeve+zYA==', u'name': u'client.radosgw'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"auth\", \"import\", \"-i\", \"/etc/ceph//ceph.client.radosgw.keyring\"], \"delta\": \"0:00:00.835276\", \"end\": \"2018-09-21 12:29:44.057574\", \"item\": {\"caps\": {\"mgr\": \"allow *\", \"mon\": \"allow rw\", \"osd\": \"allow rwx\"}, \"key\": \"AQC93KRbAAAAABAAJK/AdOCua9UOcCGeve+zYA==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}, \"rc\": 0, \"start\": \"2018-09-21 12:29:43.222298\", \"stderr\": \"imported keyring\", \"stderr_lines\": [\"imported keyring\"], \"stdout\": \"\", \"stdout_lines\": []}\n\nTASK [ceph-osd : fetch openstack cephx key(s)] *********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:77\nFriday 21 September 2018 08:29:44 -0400 (0:00:03.530) 0:03:26.247 ****** \nchanged: [ceph-0 -> 192.168.24.18] => (item={u'caps': {u'mgr': u'allow *', u'mon': u'profile rbd', u'osd': u'profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics'}, u'mode': u'0600', u'key': u'AQC93KRbAAAAABAA70vXmXELJWdqPtg4IeQHzw==', u'name': u'client.openstack'}) => {\"changed\": true, \"checksum\": \"40ed8b50cf9c2c93b1fd620a66672adaecbdd5ae\", \"dest\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59/etc/ceph/ceph.client.openstack.keyring\", \"item\": {\"caps\": {\"mgr\": \"allow *\", \"mon\": \"profile rbd\", \"osd\": \"profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics\"}, \"key\": \"AQC93KRbAAAAABAA70vXmXELJWdqPtg4IeQHzw==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, \"md5sum\": \"a6757c87664e50e0fa2a4a0c24ffa2db\", \"remote_checksum\": \"40ed8b50cf9c2c93b1fd620a66672adaecbdd5ae\", \"remote_md5sum\": null}\nchanged: [ceph-0 -> 192.168.24.18] => (item={u'caps': {u'mds': u'allow *', u'osd': u'allow rw', u'mgr': u'allow *', u'mon': u\"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'\"}, u'mode': u'0600', u'key': u'AQC93KRbAAAAABAAh3WQEraYvoGJBcWWeAglYg==', u'name': u'client.manila'}) => {\"changed\": true, \"checksum\": \"e119bc7d0367829cffba7f254fed5c0f7663e7a7\", \"dest\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59/etc/ceph/ceph.client.manila.keyring\", \"item\": {\"caps\": {\"mds\": \"allow *\", \"mgr\": \"allow *\", \"mon\": \"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'\", \"osd\": \"allow rw\"}, 
\"key\": \"AQC93KRbAAAAABAAh3WQEraYvoGJBcWWeAglYg==\", \"mode\": \"0600\", \"name\": \"client.manila\"}, \"md5sum\": \"d42eb2e49e090ff13248fba0db5c0a6f\", \"remote_checksum\": \"e119bc7d0367829cffba7f254fed5c0f7663e7a7\", \"remote_md5sum\": null}\nchanged: [ceph-0 -> 192.168.24.18] => (item={u'caps': {u'mgr': u'allow *', u'mon': u'allow rw', u'osd': u'allow rwx'}, u'mode': u'0600', u'key': u'AQC93KRbAAAAABAAJK/AdOCua9UOcCGeve+zYA==', u'name': u'client.radosgw'}) => {\"changed\": true, \"checksum\": \"32018e3d91a7d0c0ff43f9db5459f66424dd1f38\", \"dest\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59/etc/ceph/ceph.client.radosgw.keyring\", \"item\": {\"caps\": {\"mgr\": \"allow *\", \"mon\": \"allow rw\", \"osd\": \"allow rwx\"}, \"key\": \"AQC93KRbAAAAABAAJK/AdOCua9UOcCGeve+zYA==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}, \"md5sum\": \"18e4740c17d5c0f4ef7090358897cb02\", \"remote_checksum\": \"32018e3d91a7d0c0ff43f9db5459f66424dd1f38\", \"remote_md5sum\": null}\n\nTASK [ceph-osd : copy to other mons the openstack cephx key(s)] ****************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:85\nFriday 21 September 2018 08:29:44 -0400 (0:00:00.606) 0:03:26.853 ****** \nchanged: [ceph-0 -> 192.168.24.18] => (item=[u'controller-0', {u'name': u'client.openstack', u'mode': u'0600', u'key': u'AQC93KRbAAAAABAA70vXmXELJWdqPtg4IeQHzw==', u'caps': {u'mgr': u'allow *', u'mon': u'profile rbd', u'osd': u'profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics'}}]) => {\"changed\": true, \"checksum\": \"40ed8b50cf9c2c93b1fd620a66672adaecbdd5ae\", \"dest\": \"/etc/ceph/ceph.client.openstack.keyring\", \"gid\": 167, \"group\": \"167\", \"item\": [\"controller-0\", {\"caps\": {\"mgr\": \"allow *\", \"mon\": \"profile rbd\", \"osd\": \"profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics\"}, \"key\": \"AQC93KRbAAAAABAA70vXmXELJWdqPtg4IeQHzw==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}], \"mode\": \"0600\", \"owner\": \"167\", \"path\": \"/etc/ceph/ceph.client.openstack.keyring\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 253, \"state\": \"file\", \"uid\": 167}\nchanged: [ceph-0 -> 192.168.24.18] => (item=[u'controller-0', {u'name': u'client.manila', u'mode': u'0600', u'key': u'AQC93KRbAAAAABAAh3WQEraYvoGJBcWWeAglYg==', u'caps': {u'mds': u'allow *', u'osd': u'allow rw', u'mon': u\"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'\", u'mgr': u'allow *'}}]) => {\"changed\": true, \"checksum\": \"e119bc7d0367829cffba7f254fed5c0f7663e7a7\", \"dest\": \"/etc/ceph/ceph.client.manila.keyring\", \"gid\": 167, \"group\": \"167\", \"item\": [\"controller-0\", {\"caps\": {\"mds\": \"allow *\", \"mgr\": \"allow *\", \"mon\": \"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'\", \"osd\": \"allow rw\"}, \"key\": \"AQC93KRbAAAAABAAh3WQEraYvoGJBcWWeAglYg==\", \"mode\": \"0600\", \"name\": \"client.manila\"}], \"mode\": \"0600\", \"owner\": \"167\", \"path\": \"/etc/ceph/ceph.client.manila.keyring\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 268, \"state\": \"file\", \"uid\": 167}\nchanged: [ceph-0 -> 192.168.24.18] => (item=[u'controller-0', {u'name': u'client.radosgw', u'mode': u'0600', u'key': 
u'AQC93KRbAAAAABAAJK/AdOCua9UOcCGeve+zYA==', u'caps': {u'mgr': u'allow *', u'mon': u'allow rw', u'osd': u'allow rwx'}}]) => {\"changed\": true, \"checksum\": \"32018e3d91a7d0c0ff43f9db5459f66424dd1f38\", \"dest\": \"/etc/ceph/ceph.client.radosgw.keyring\", \"gid\": 167, \"group\": \"167\", \"item\": [\"controller-0\", {\"caps\": {\"mgr\": \"allow *\", \"mon\": \"allow rw\", \"osd\": \"allow rwx\"}, \"key\": \"AQC93KRbAAAAABAAJK/AdOCua9UOcCGeve+zYA==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}], \"mode\": \"0600\", \"owner\": \"167\", \"path\": \"/etc/ceph/ceph.client.radosgw.keyring\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 134, \"state\": \"file\", \"uid\": 167}\n\nRUNNING HANDLER [ceph-defaults : set _mon_handler_called before restart] *******\nFriday 21 September 2018 08:29:46 -0400 (0:00:01.276) 0:03:28.130 ****** \nok: [ceph-0] => {\"ansible_facts\": {\"_mon_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy mon restart script] **********************\nFriday 21 September 2018 08:29:46 -0400 (0:00:00.195) 0:03:28.325 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mon daemon(s) - non container] ***\nFriday 21 September 2018 08:29:46 -0400 (0:00:00.045) 0:03:28.370 ****** \nskipping: [ceph-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mon daemon(s) - container] *******\nFriday 21 September 2018 08:29:46 -0400 (0:00:00.083) 0:03:28.454 ****** \nskipping: [ceph-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _mon_handler_called after restart] ********\nFriday 21 September 2018 08:29:46 -0400 (0:00:00.080) 0:03:28.534 ****** \nok: [ceph-0] => {\"ansible_facts\": {\"_mon_handler_called\": false}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : set _osd_handler_called before restart] *******\nFriday 21 September 2018 08:29:46 -0400 (0:00:00.202) 0:03:28.736 ****** \nok: [ceph-0] => {\"ansible_facts\": {\"_osd_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy osd restart script] **********************\nFriday 21 September 2018 08:29:46 -0400 (0:00:00.194) 0:03:28.931 ****** \nchanged: [ceph-0] => {\"changed\": true, \"checksum\": \"6631c34a339c45ab1081b01015293e952e36893e\", \"dest\": \"/tmp/restart_osd_daemon.sh\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"308c89936c25e77f74e78c1e4905ee1a\", \"mode\": \"0750\", \"owner\": \"root\", \"secontext\": \"unconfined_u:object_r:user_tmp_t:s0\", \"size\": 3081, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1537532987.03-152820220133009/source\", \"state\": \"file\", \"uid\": 0}\n\nRUNNING HANDLER [ceph-defaults : restart ceph osds daemon(s) - non container] ***\nFriday 21 September 2018 08:29:47 -0400 (0:00:00.652) 0:03:29.583 ****** \nskipping: [ceph-0] => (item=ceph-0) => {\"changed\": false, \"item\": \"ceph-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph osds daemon(s) - container] ******\nFriday 21 September 2018 08:29:47 -0400 (0:00:00.094) 0:03:29.678 ****** \nskipping: [ceph-0] => (item=ceph-0) => {\"changed\": false, \"item\": \"ceph-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set 
_osd_handler_called after restart] ********\nFriday 21 September 2018 08:29:47 -0400 (0:00:00.104) 0:03:29.783 ****** \nok: [ceph-0] => {\"ansible_facts\": {\"_osd_handler_called\": false}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : set _mds_handler_called before restart] *******\nFriday 21 September 2018 08:29:47 -0400 (0:00:00.203) 0:03:29.986 ****** \nok: [ceph-0] => {\"ansible_facts\": {\"_mds_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy mds restart script] **********************\nFriday 21 September 2018 08:29:48 -0400 (0:00:00.204) 0:03:30.191 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mds daemon(s) - non container] ***\nFriday 21 September 2018 08:29:48 -0400 (0:00:00.045) 0:03:30.236 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mds daemon(s) - container] *******\nFriday 21 September 2018 08:29:48 -0400 (0:00:00.049) 0:03:30.286 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _mds_handler_called after restart] ********\nFriday 21 September 2018 08:29:48 -0400 (0:00:00.048) 0:03:30.335 ****** \nok: [ceph-0] => {\"ansible_facts\": {\"_mds_handler_called\": false}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : set _rgw_handler_called before restart] *******\nFriday 21 September 2018 08:29:48 -0400 (0:00:00.209) 0:03:30.544 ****** \nok: [ceph-0] => {\"ansible_facts\": {\"_rgw_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy rgw restart script] **********************\nFriday 21 September 2018 08:29:48 -0400 (0:00:00.205) 0:03:30.749 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph rgw daemon(s) - non container] ***\nFriday 21 September 2018 08:29:48 -0400 (0:00:00.049) 0:03:30.799 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph rgw daemon(s) - container] *******\nFriday 21 September 2018 08:29:48 -0400 (0:00:00.057) 0:03:30.856 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _rgw_handler_called after restart] ********\nFriday 21 September 2018 08:29:48 -0400 (0:00:00.059) 0:03:30.916 ****** \nok: [ceph-0] => {\"ansible_facts\": {\"_rgw_handler_called\": false}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : set _rbdmirror_handler_called before restart] ***\nFriday 21 September 2018 08:29:48 -0400 (0:00:00.188) 0:03:31.104 ****** \nok: [ceph-0] => {\"ansible_facts\": {\"_rbdmirror_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy rbd mirror restart script] ***************\nFriday 21 September 2018 08:29:49 -0400 (0:00:00.196) 0:03:31.300 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph rbd mirror daemon(s) - non container] ***\nFriday 21 September 2018 08:29:49 -0400 (0:00:00.134) 0:03:31.435 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph 
rbd mirror daemon(s) - container] ***\nFriday 21 September 2018 08:29:49 -0400 (0:00:00.055) 0:03:31.491 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _rbdmirror_handler_called after restart] ***\nFriday 21 September 2018 08:29:49 -0400 (0:00:00.052) 0:03:31.543 ****** \nok: [ceph-0] => {\"ansible_facts\": {\"_rbdmirror_handler_called\": false}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : set _mgr_handler_called before restart] *******\nFriday 21 September 2018 08:29:49 -0400 (0:00:00.073) 0:03:31.616 ****** \nok: [ceph-0] => {\"ansible_facts\": {\"_mgr_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy mgr restart script] **********************\nFriday 21 September 2018 08:29:49 -0400 (0:00:00.076) 0:03:31.693 ****** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - non container] ***\nFriday 21 September 2018 08:29:49 -0400 (0:00:00.048) 0:03:31.742 ****** \nskipping: [ceph-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - container] *******\nFriday 21 September 2018 08:29:49 -0400 (0:00:00.102) 0:03:31.845 ****** \nskipping: [ceph-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _mgr_handler_called after restart] ********\nFriday 21 September 2018 08:29:49 -0400 (0:00:00.091) 0:03:31.936 ****** \nok: [ceph-0] => {\"ansible_facts\": {\"_mgr_handler_called\": false}, \"changed\": false}\nMETA: ran handlers\n\nTASK [set ceph osd install 'Complete'] *****************************************\ntask path: /usr/share/ceph-ansible/site-docker.yml.sample:157\nFriday 21 September 2018 08:29:49 -0400 (0:00:00.101) 0:03:32.038 ****** \nok: [ceph-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_osd\": {\"end\": \"20180921082949Z\", \"status\": \"Complete\"}}, \"per_host\": false}, \"changed\": false}\nMETA: ran handlers\n\nPLAY [mdss] ********************************************************************\nskipping: no hosts matched\n\nPLAY [rgws] ********************************************************************\nskipping: no hosts matched\n\nPLAY [nfss] ********************************************************************\nskipping: no hosts matched\n\nPLAY [rbdmirrors] **************************************************************\nskipping: no hosts matched\n\nPLAY [restapis] ****************************************************************\nskipping: no hosts matched\n\nPLAY [clients] *****************************************************************\n\nTASK [set ceph client install 'In Progress'] ***********************************\ntask path: /usr/share/ceph-ansible/site-docker.yml.sample:308\nFriday 21 September 2018 08:29:50 -0400 (0:00:00.174) 0:03:32.212 ****** \nok: [compute-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_client\": {\"start\": \"20180921082950Z\", \"status\": \"In Progress\"}}, \"per_host\": false}, \"changed\": false}\nMETA: ran handlers\n\nTASK [ceph-defaults : check for a mon container] *******************************\ntask path: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:2\nFriday 21 September 2018 08:29:50 -0400 (0:00:00.087) 0:03:32.299 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for an osd container] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:11\nFriday 21 September 2018 08:29:50 -0400 (0:00:00.047) 0:03:32.347 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a mds container] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:20\nFriday 21 September 2018 08:29:50 -0400 (0:00:00.049) 0:03:32.396 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a rgw container] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:29\nFriday 21 September 2018 08:29:50 -0400 (0:00:00.047) 0:03:32.444 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a mgr container] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:38\nFriday 21 September 2018 08:29:50 -0400 (0:00:00.048) 0:03:32.492 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a rbd mirror container] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:47\nFriday 21 September 2018 08:29:50 -0400 (0:00:00.048) 0:03:32.540 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a nfs container] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:56\nFriday 21 September 2018 08:29:50 -0400 (0:00:00.049) 0:03:32.589 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph mon socket] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:2\nFriday 21 September 2018 08:29:50 -0400 (0:00:00.051) 0:03:32.641 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph mon socket is in-use] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:11\nFriday 21 September 2018 08:29:50 -0400 (0:00:00.055) 0:03:32.697 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph mon socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:21\nFriday 21 September 2018 08:29:50 -0400 (0:00:00.048) 0:03:32.745 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph osd socket] *****************************\ntask path: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:30\nFriday 21 September 2018 08:29:50 -0400 (0:00:00.049) 0:03:32.795 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph osd socket is in-use] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:40\nFriday 21 September 2018 08:29:50 -0400 (0:00:00.049) 0:03:32.844 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph osd socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:50\nFriday 21 September 2018 08:29:50 -0400 (0:00:00.046) 0:03:32.890 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph mds socket] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:59\nFriday 21 September 2018 08:29:50 -0400 (0:00:00.045) 0:03:32.936 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph mds socket is in-use] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:69\nFriday 21 September 2018 08:29:50 -0400 (0:00:00.047) 0:03:32.983 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph mds socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:79\nFriday 21 September 2018 08:29:50 -0400 (0:00:00.057) 0:03:33.040 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph rgw socket] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:88\nFriday 21 September 2018 08:29:50 -0400 (0:00:00.050) 0:03:33.091 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph rgw socket is in-use] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:98\nFriday 21 September 2018 08:29:51 -0400 (0:00:00.053) 0:03:33.145 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph rgw socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:108\nFriday 21 September 2018 08:29:51 -0400 (0:00:00.048) 0:03:33.193 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph mgr socket] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:117\nFriday 21 September 2018 08:29:51 -0400 (0:00:00.044) 0:03:33.237 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph mgr socket is in-use] ******************\ntask path: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:127\nFriday 21 September 2018 08:29:51 -0400 (0:00:00.043) 0:03:33.281 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph mgr socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:137\nFriday 21 September 2018 08:29:51 -0400 (0:00:00.048) 0:03:33.330 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph rbd mirror socket] **********************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:146\nFriday 21 September 2018 08:29:51 -0400 (0:00:00.043) 0:03:33.373 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph rbd mirror socket is in-use] ***********\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:156\nFriday 21 September 2018 08:29:51 -0400 (0:00:00.040) 0:03:33.414 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph rbd mirror socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:166\nFriday 21 September 2018 08:29:51 -0400 (0:00:00.045) 0:03:33.459 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph nfs ganesha socket] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:175\nFriday 21 September 2018 08:29:51 -0400 (0:00:00.048) 0:03:33.507 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph nfs ganesha socket is in-use] **********\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:184\nFriday 21 September 2018 08:29:51 -0400 (0:00:00.046) 0:03:33.554 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph nfs ganesha socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:194\nFriday 21 September 2018 08:29:51 -0400 (0:00:00.048) 0:03:33.602 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if it is atomic host] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:2\nFriday 21 September 2018 08:29:51 -0400 (0:00:00.047) 0:03:33.650 ****** \nok: [compute-0] => {\"changed\": false, \"stat\": {\"exists\": false}}\n\nTASK [ceph-defaults : set_fact is_atomic] **************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:7\nFriday 21 September 2018 08:29:51 -0400 (0:00:00.233) 0:03:33.884 ****** \nok: [compute-0] => {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}\n\nTASK [ceph-defaults : set_fact monitor_name ansible_hostname] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:11\nFriday 
21 September 2018 08:29:51 -0400 (0:00:00.069) 0:03:33.953 ****** \nok: [compute-0] => {\"ansible_facts\": {\"monitor_name\": \"compute-0\"}, \"changed\": false}\n\nTASK [ceph-defaults : set_fact monitor_name ansible_fqdn] **********************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:17\nFriday 21 September 2018 08:29:51 -0400 (0:00:00.077) 0:03:34.031 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact docker_exec_cmd] ********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:23\nFriday 21 September 2018 08:29:51 -0400 (0:00:00.068) 0:03:34.099 ****** \nok: [compute-0 -> 192.168.24.18] => {\"ansible_facts\": {\"docker_exec_cmd\": \"docker exec ceph-mon-controller-0\"}, \"changed\": false}\n\nTASK [ceph-defaults : is ceph running already?] ********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:34\nFriday 21 September 2018 08:29:52 -0400 (0:00:00.151) 0:03:34.251 ****** \nok: [compute-0 -> 192.168.24.18] => {\"changed\": false, \"cmd\": [\"timeout\", \"5\", \"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"-s\", \"-f\", \"json\"], \"delta\": \"0:00:00.331806\", \"end\": \"2018-09-21 12:29:52.679307\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-09-21 12:29:52.347501\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\\n{\\\"fsid\\\":\\\"8fedf068-bd95-11e8-ba69-5254006eda59\\\",\\\"health\\\":{\\\"checks\\\":{},\\\"status\\\":\\\"HEALTH_OK\\\",\\\"summary\\\":[{\\\"severity\\\":\\\"HEALTH_WARN\\\",\\\"summary\\\":\\\"'ceph health' JSON format has changed in luminous. If you see this your monitoring system is scraping the wrong fields. 
Disable this with 'mon health preluminous compat warning = false'\\\"}],\\\"overall_status\\\":\\\"HEALTH_WARN\\\"},\\\"election_epoch\\\":3,\\\"quorum\\\":[0],\\\"quorum_names\\\":[\\\"controller-0\\\"],\\\"monmap\\\":{\\\"epoch\\\":1,\\\"fsid\\\":\\\"8fedf068-bd95-11e8-ba69-5254006eda59\\\",\\\"modified\\\":\\\"2018-09-21 12:27:11.445099\\\",\\\"created\\\":\\\"2018-09-21 12:27:11.445099\\\",\\\"features\\\":{\\\"persistent\\\":[\\\"kraken\\\",\\\"luminous\\\"],\\\"optional\\\":[]},\\\"mons\\\":[{\\\"rank\\\":0,\\\"name\\\":\\\"controller-0\\\",\\\"addr\\\":\\\"172.17.3.16:6789/0\\\",\\\"public_addr\\\":\\\"172.17.3.16:6789/0\\\"}]},\\\"osdmap\\\":{\\\"osdmap\\\":{\\\"epoch\\\":17,\\\"num_osds\\\":5,\\\"num_up_osds\\\":5,\\\"num_in_osds\\\":5,\\\"full\\\":false,\\\"nearfull\\\":false,\\\"num_remapped_pgs\\\":0}},\\\"pgmap\\\":{\\\"pgs_by_state\\\":[{\\\"state_name\\\":\\\"active+clean\\\",\\\"count\\\":160}],\\\"num_pgs\\\":160,\\\"num_pools\\\":5,\\\"num_objects\\\":0,\\\"data_bytes\\\":0,\\\"bytes_used\\\":564191232,\\\"bytes_avail\\\":55749480448,\\\"bytes_total\\\":56313671680},\\\"fsmap\\\":{\\\"epoch\\\":1,\\\"by_rank\\\":[]},\\\"mgrmap\\\":{\\\"epoch\\\":7,\\\"active_gid\\\":4104,\\\"active_name\\\":\\\"controller-0\\\",\\\"active_addr\\\":\\\"172.17.3.16:6800/79\\\",\\\"available\\\":true,\\\"standbys\\\":[],\\\"modules\\\":[\\\"status\\\"],\\\"available_modules\\\":[\\\"balancer\\\",\\\"dashboard\\\",\\\"influx\\\",\\\"localpool\\\",\\\"prometheus\\\",\\\"restful\\\",\\\"selftest\\\",\\\"status\\\",\\\"zabbix\\\"],\\\"services\\\":{}},\\\"servicemap\\\":{\\\"epoch\\\":1,\\\"modified\\\":\\\"0.000000\\\",\\\"services\\\":{}}}\", \"stdout_lines\": [\"\", \"{\\\"fsid\\\":\\\"8fedf068-bd95-11e8-ba69-5254006eda59\\\",\\\"health\\\":{\\\"checks\\\":{},\\\"status\\\":\\\"HEALTH_OK\\\",\\\"summary\\\":[{\\\"severity\\\":\\\"HEALTH_WARN\\\",\\\"summary\\\":\\\"'ceph health' JSON format has changed in luminous. If you see this your monitoring system is scraping the wrong fields. 
Disable this with 'mon health preluminous compat warning = false'\\\"}],\\\"overall_status\\\":\\\"HEALTH_WARN\\\"},\\\"election_epoch\\\":3,\\\"quorum\\\":[0],\\\"quorum_names\\\":[\\\"controller-0\\\"],\\\"monmap\\\":{\\\"epoch\\\":1,\\\"fsid\\\":\\\"8fedf068-bd95-11e8-ba69-5254006eda59\\\",\\\"modified\\\":\\\"2018-09-21 12:27:11.445099\\\",\\\"created\\\":\\\"2018-09-21 12:27:11.445099\\\",\\\"features\\\":{\\\"persistent\\\":[\\\"kraken\\\",\\\"luminous\\\"],\\\"optional\\\":[]},\\\"mons\\\":[{\\\"rank\\\":0,\\\"name\\\":\\\"controller-0\\\",\\\"addr\\\":\\\"172.17.3.16:6789/0\\\",\\\"public_addr\\\":\\\"172.17.3.16:6789/0\\\"}]},\\\"osdmap\\\":{\\\"osdmap\\\":{\\\"epoch\\\":17,\\\"num_osds\\\":5,\\\"num_up_osds\\\":5,\\\"num_in_osds\\\":5,\\\"full\\\":false,\\\"nearfull\\\":false,\\\"num_remapped_pgs\\\":0}},\\\"pgmap\\\":{\\\"pgs_by_state\\\":[{\\\"state_name\\\":\\\"active+clean\\\",\\\"count\\\":160}],\\\"num_pgs\\\":160,\\\"num_pools\\\":5,\\\"num_objects\\\":0,\\\"data_bytes\\\":0,\\\"bytes_used\\\":564191232,\\\"bytes_avail\\\":55749480448,\\\"bytes_total\\\":56313671680},\\\"fsmap\\\":{\\\"epoch\\\":1,\\\"by_rank\\\":[]},\\\"mgrmap\\\":{\\\"epoch\\\":7,\\\"active_gid\\\":4104,\\\"active_name\\\":\\\"controller-0\\\",\\\"active_addr\\\":\\\"172.17.3.16:6800/79\\\",\\\"available\\\":true,\\\"standbys\\\":[],\\\"modules\\\":[\\\"status\\\"],\\\"available_modules\\\":[\\\"balancer\\\",\\\"dashboard\\\",\\\"influx\\\",\\\"localpool\\\",\\\"prometheus\\\",\\\"restful\\\",\\\"selftest\\\",\\\"status\\\",\\\"zabbix\\\"],\\\"services\\\":{}},\\\"servicemap\\\":{\\\"epoch\\\":1,\\\"modified\\\":\\\"0.000000\\\",\\\"services\\\":{}}}\"]}\n\nTASK [ceph-defaults : check if /var/lib/mistral/overcloud/ceph-ansible/fetch_dir directory exists] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:47\nFriday 21 September 2018 08:29:52 -0400 (0:00:00.600) 0:03:34.851 ****** \nok: [compute-0 -> localhost] => {\"changed\": false, \"stat\": {\"exists\": false}}\n\nTASK [ceph-defaults : set_fact ceph_current_status rc 1] ***********************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:57\nFriday 21 September 2018 08:29:52 -0400 (0:00:00.214) 0:03:35.066 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : create a local fetch directory if it does not exist] *****\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:64\nFriday 21 September 2018 08:29:53 -0400 (0:00:00.055) 0:03:35.121 ****** \nok: [compute-0 -> localhost] => {\"changed\": false, \"gid\": 42430, \"group\": \"mistral\", \"mode\": \"0755\", \"owner\": \"mistral\", \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir\", \"size\": 80, \"state\": \"directory\", \"uid\": 42430}\n\nTASK [ceph-defaults : set_fact ceph_current_status (convert to json)] **********\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:74\nFriday 21 September 2018 08:29:53 -0400 (0:00:00.186) 0:03:35.308 ****** \nok: [compute-0] => {\"ansible_facts\": {\"ceph_current_status\": {\"election_epoch\": 3, \"fsid\": \"8fedf068-bd95-11e8-ba69-5254006eda59\", \"fsmap\": {\"by_rank\": [], \"epoch\": 1}, \"health\": {\"checks\": {}, \"overall_status\": \"HEALTH_WARN\", \"status\": \"HEALTH_OK\", \"summary\": [{\"severity\": \"HEALTH_WARN\", \"summary\": \"'ceph health' JSON format has changed in luminous. If you see this your monitoring system is scraping the wrong fields. 
Disable this with 'mon health preluminous compat warning = false'\"}]}, \"mgrmap\": {\"active_addr\": \"172.17.3.16:6800/79\", \"active_gid\": 4104, \"active_name\": \"controller-0\", \"available\": true, \"available_modules\": [\"balancer\", \"dashboard\", \"influx\", \"localpool\", \"prometheus\", \"restful\", \"selftest\", \"status\", \"zabbix\"], \"epoch\": 7, \"modules\": [\"status\"], \"services\": {}, \"standbys\": []}, \"monmap\": {\"created\": \"2018-09-21 12:27:11.445099\", \"epoch\": 1, \"features\": {\"optional\": [], \"persistent\": [\"kraken\", \"luminous\"]}, \"fsid\": \"8fedf068-bd95-11e8-ba69-5254006eda59\", \"modified\": \"2018-09-21 12:27:11.445099\", \"mons\": [{\"addr\": \"172.17.3.16:6789/0\", \"name\": \"controller-0\", \"public_addr\": \"172.17.3.16:6789/0\", \"rank\": 0}]}, \"osdmap\": {\"osdmap\": {\"epoch\": 17, \"full\": false, \"nearfull\": false, \"num_in_osds\": 5, \"num_osds\": 5, \"num_remapped_pgs\": 0, \"num_up_osds\": 5}}, \"pgmap\": {\"bytes_avail\": 55749480448, \"bytes_total\": 56313671680, \"bytes_used\": 564191232, \"data_bytes\": 0, \"num_objects\": 0, \"num_pgs\": 160, \"num_pools\": 5, \"pgs_by_state\": [{\"count\": 160, \"state_name\": \"active+clean\"}]}, \"quorum\": [0], \"quorum_names\": [\"controller-0\"], \"servicemap\": {\"epoch\": 1, \"modified\": \"0.000000\", \"services\": {}}}}, \"changed\": false}\n\nTASK [ceph-defaults : set_fact fsid from ceph_current_status] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:81\nFriday 21 September 2018 08:29:53 -0400 (0:00:00.083) 0:03:35.392 ****** \nok: [compute-0] => {\"ansible_facts\": {\"fsid\": \"8fedf068-bd95-11e8-ba69-5254006eda59\"}, \"changed\": false}\n\nTASK [ceph-defaults : set_fact ceph_release ceph_stable_release] ***************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:88\nFriday 21 September 2018 08:29:53 -0400 (0:00:00.078) 0:03:35.470 ****** \nok: [compute-0] => {\"ansible_facts\": {\"ceph_release\": \"dummy\"}, \"changed\": false}\n\nTASK [ceph-defaults : generate cluster fsid] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:92\nFriday 21 September 2018 08:29:53 -0400 (0:00:00.210) 0:03:35.680 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : reuse cluster fsid when cluster is already running] ******\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:103\nFriday 21 September 2018 08:29:53 -0400 (0:00:00.049) 0:03:35.730 ****** \nok: [compute-0 -> localhost] => {\"changed\": false, \"cmd\": \"echo 8fedf068-bd95-11e8-ba69-5254006eda59 | tee /var/lib/mistral/overcloud/ceph-ansible/fetch_dir/ceph_cluster_uuid.conf\", \"rc\": 0, \"stdout\": \"skipped, since /var/lib/mistral/overcloud/ceph-ansible/fetch_dir/ceph_cluster_uuid.conf exists\", \"stdout_lines\": [\"skipped, since /var/lib/mistral/overcloud/ceph-ansible/fetch_dir/ceph_cluster_uuid.conf exists\"]}\n\nTASK [ceph-defaults : read cluster fsid if it already exists] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:112\nFriday 21 September 2018 08:29:53 -0400 (0:00:00.220) 0:03:35.951 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact fsid] *******************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:124\nFriday 21 
September 2018 08:29:53 -0400 (0:00:00.043) 0:03:35.995 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact mds_name ansible_hostname] **********************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:130\nFriday 21 September 2018 08:29:53 -0400 (0:00:00.044) 0:03:36.039 ****** \nok: [compute-0] => {\"ansible_facts\": {\"mds_name\": \"compute-0\"}, \"changed\": false}\n\nTASK [ceph-defaults : set_fact mds_name ansible_fqdn] **************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:136\nFriday 21 September 2018 08:29:54 -0400 (0:00:00.217) 0:03:36.256 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact rbd_client_directory_owner ceph] ****************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:142\nFriday 21 September 2018 08:29:54 -0400 (0:00:00.047) 0:03:36.304 ****** \nok: [compute-0] => {\"ansible_facts\": {\"rbd_client_directory_owner\": \"ceph\"}, \"changed\": false}\n\nTASK [ceph-defaults : set_fact rbd_client_directory_group rbd_client_directory_group] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:149\nFriday 21 September 2018 08:29:54 -0400 (0:00:00.222) 0:03:36.527 ****** \nok: [compute-0] => {\"ansible_facts\": {\"rbd_client_directory_group\": \"ceph\"}, \"changed\": false}\n\nTASK [ceph-defaults : set_fact rbd_client_directory_mode 0770] *****************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:156\nFriday 21 September 2018 08:29:54 -0400 (0:00:00.215) 0:03:36.742 ****** \nok: [compute-0] => {\"ansible_facts\": {\"rbd_client_directory_mode\": \"0770\"}, \"changed\": false}\n\nTASK [ceph-defaults : resolve device link(s)] **********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:163\nFriday 21 September 2018 08:29:54 -0400 (0:00:00.202) 0:03:36.945 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact build devices from resolved symlinks] ***********\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:173\nFriday 21 September 2018 08:29:54 -0400 (0:00:00.054) 0:03:36.999 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact build final devices list] ***********************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:182\nFriday 21 September 2018 08:29:54 -0400 (0:00:00.057) 0:03:37.057 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact ceph_uid for debian based system - non container] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:190\nFriday 21 September 2018 08:29:54 -0400 (0:00:00.048) 0:03:37.106 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact ceph_uid for red hat based system - non container] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:197\nFriday 21 September 2018 08:29:55 -0400 (0:00:00.047) 0:03:37.153 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact 
ceph_uid for debian based system - container] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:204\nFriday 21 September 2018 08:29:55 -0400 (0:00:00.047) 0:03:37.201 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact ceph_uid for red hat based system - container] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:211\nFriday 21 September 2018 08:29:55 -0400 (0:00:00.047) 0:03:37.248 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact ceph_uid for red hat] ***************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:218\nFriday 21 September 2018 08:29:55 -0400 (0:00:00.048) 0:03:37.296 ****** \nok: [compute-0] => {\"ansible_facts\": {\"ceph_uid\": 167}, \"changed\": false}\n\nTASK [ceph-defaults : set_fact rgw_hostname - fqdn] ****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:225\nFriday 21 September 2018 08:29:55 -0400 (0:00:00.200) 0:03:37.497 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact rgw_hostname - no fqdn] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:235\nFriday 21 September 2018 08:29:55 -0400 (0:00:00.046) 0:03:37.543 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact ceph_directories] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:2\nFriday 21 September 2018 08:29:55 -0400 (0:00:00.048) 0:03:37.591 ****** \nok: [compute-0] => {\"ansible_facts\": {\"ceph_directories\": [\"/etc/ceph\", \"/var/lib/ceph/\", \"/var/lib/ceph/mon\", \"/var/lib/ceph/osd\", \"/var/lib/ceph/mds\", \"/var/lib/ceph/tmp\", \"/var/lib/ceph/radosgw\", \"/var/lib/ceph/bootstrap-rgw\", \"/var/lib/ceph/bootstrap-mds\", \"/var/lib/ceph/bootstrap-osd\", \"/var/lib/ceph/bootstrap-rbd\", \"/var/run/ceph\"]}, \"changed\": false}\n\nTASK [ceph-defaults : create ceph initial directories] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:18\nFriday 21 September 2018 08:29:55 -0400 (0:00:00.188) 0:03:37.780 ****** \nchanged: [compute-0] => (item=/etc/ceph) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nchanged: [compute-0] => (item=/var/lib/ceph/) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nchanged: [compute-0] => (item=/var/lib/ceph/mon) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/mon\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mon\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nchanged: [compute-0] => (item=/var/lib/ceph/osd) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": 
\"/var/lib/ceph/osd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nchanged: [compute-0] => (item=/var/lib/ceph/mds) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/mds\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nchanged: [compute-0] => (item=/var/lib/ceph/tmp) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/tmp\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/tmp\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nchanged: [compute-0] => (item=/var/lib/ceph/radosgw) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/radosgw\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/radosgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nchanged: [compute-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nchanged: [compute-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nchanged: [compute-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nchanged: [compute-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nchanged: [compute-0] => (item=/var/run/ceph) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/run/ceph\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/run/ceph\", \"secontext\": \"unconfined_u:object_r:var_run_t:s0\", \"size\": 40, \"state\": \"directory\", \"uid\": 167}\n\nTASK [ceph-docker-common : fail if systemd is not present] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml:2\nFriday 21 September 2018 08:29:57 -0400 (0:00:02.232) 0:03:40.013 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : make sure monitor_interface, monitor_address or monitor_address_block is defined] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:2\nFriday 21 September 2018 08:29:57 -0400 (0:00:00.052) 0:03:40.065 ****** \nskipping: [compute-0] => 
{\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : make sure radosgw_interface, radosgw_address or radosgw_address_block is defined] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:11\nFriday 21 September 2018 08:29:58 -0400 (0:00:00.052) 0:03:40.118 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : warning deprecation for fqdn configuration] *********\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:20\nFriday 21 September 2018 08:29:58 -0400 (0:00:00.056) 0:03:40.174 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : remove ceph udev rules] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml:2\nFriday 21 September 2018 08:29:58 -0400 (0:00:00.052) 0:03:40.226 ****** \nok: [compute-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) => {\"changed\": false, \"item\": \"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"path\": \"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"state\": \"absent\"}\nok: [compute-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) => {\"changed\": false, \"item\": \"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"path\": \"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"state\": \"absent\"}\n\nTASK [ceph-docker-common : set_fact monitor_name ansible_hostname] *************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:14\nFriday 21 September 2018 08:29:58 -0400 (0:00:00.439) 0:03:40.666 ****** \nok: [compute-0] => {\"ansible_facts\": {\"monitor_name\": \"compute-0\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact monitor_name ansible_fqdn] *****************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:20\nFriday 21 September 2018 08:29:58 -0400 (0:00:00.095) 0:03:40.761 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : get docker version] *********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:26\nFriday 21 September 2018 08:29:58 -0400 (0:00:00.055) 0:03:40.817 ****** \nok: [compute-0] => {\"changed\": false, \"cmd\": [\"docker\", \"--version\"], \"delta\": \"0:00:00.029330\", \"end\": \"2018-09-21 12:29:58.943968\", \"rc\": 0, \"start\": \"2018-09-21 12:29:58.914638\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Docker version 1.13.1, build 6e3bb8e/1.13.1\", \"stdout_lines\": [\"Docker version 1.13.1, build 6e3bb8e/1.13.1\"]}\n\nTASK [ceph-docker-common : set_fact ceph_docker_version ceph_docker_version.stdout.split] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:32\nFriday 21 September 2018 08:29:58 -0400 (0:00:00.279) 0:03:41.097 ****** \nok: [compute-0] => {\"ansible_facts\": {\"ceph_docker_version\": \"1.13.1,\"}, \"changed\": false}\n\nTASK [ceph-docker-common : check if a cluster is already running] **************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:42\nFriday 21 September 2018 08:29:59 -0400 (0:00:00.085) 0:03:41.183 ****** \nok: [compute-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mon-compute-0\"], 
\"delta\": \"0:00:00.025713\", \"end\": \"2018-09-21 12:29:59.296573\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-09-21 12:29:59.270860\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}\n\nTASK [ceph-docker-common : set_fact ceph_config_keys] **************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:2\nFriday 21 September 2018 08:29:59 -0400 (0:00:00.265) 0:03:41.448 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact tmp_ceph_mgr_keys add mgr keys to config and keys paths] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:13\nFriday 21 September 2018 08:29:59 -0400 (0:00:00.054) 0:03:41.503 ****** \nskipping: [compute-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mgr_keys convert mgr keys to an array] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:20\nFriday 21 September 2018 08:29:59 -0400 (0:00:00.061) 0:03:41.565 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_config_keys merge mgr keys to config and keys paths] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:25\nFriday 21 September 2018 08:29:59 -0400 (0:00:00.065) 0:03:41.630 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : stat for ceph config and keys] **********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:30\nFriday 21 September 2018 08:29:59 -0400 (0:00:00.059) 0:03:41.690 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : fail if we find existing cluster files] *************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml:5\nFriday 21 September 2018 08:29:59 -0400 (0:00:00.053) 0:03:41.743 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : check ntp installation on atomic] *******************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml:2\nFriday 21 September 2018 08:29:59 -0400 (0:00:00.058) 0:03:41.802 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : start the ntp service] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml:6\nFriday 21 September 2018 08:29:59 -0400 (0:00:00.051) 0:03:41.854 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : check ntp installation on redhat or suse] ***********\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:2\nFriday 21 September 2018 08:29:59 -0400 (0:00:00.055) 0:03:41.909 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : install ntp on redhat or suse] **********************\ntask path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:13\nFriday 21 September 2018 08:29:59 -0400 (0:00:00.067) 0:03:41.976 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : start the ntp service] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml:7\nFriday 21 September 2018 08:29:59 -0400 (0:00:00.062) 0:03:42.038 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : check ntp installation on debian] *******************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:2\nFriday 21 September 2018 08:29:59 -0400 (0:00:00.058) 0:03:42.097 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : install ntp on debian] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:11\nFriday 21 September 2018 08:30:00 -0400 (0:00:00.057) 0:03:42.155 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : start the ntp service] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml:7\nFriday 21 September 2018 08:30:00 -0400 (0:00:00.054) 0:03:42.210 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph mon container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:3\nFriday 21 September 2018 08:30:00 -0400 (0:00:00.050) 0:03:42.260 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph osd container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:12\nFriday 21 September 2018 08:30:00 -0400 (0:00:00.054) 0:03:42.314 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph mds container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:21\nFriday 21 September 2018 08:30:00 -0400 (0:00:00.053) 0:03:42.368 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph rgw container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:30\nFriday 21 September 2018 08:30:00 -0400 (0:00:00.051) 0:03:42.420 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph mgr container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:39\nFriday 21 September 2018 08:30:00 -0400 (0:00:00.051) 0:03:42.472 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph rbd mirror container] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:48\nFriday 21 
September 2018 08:30:00 -0400 (0:00:00.054) 0:03:42.526 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph nfs container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:57\nFriday 21 September 2018 08:30:00 -0400 (0:00:00.051) 0:03:42.577 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph mon container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:67\nFriday 21 September 2018 08:30:00 -0400 (0:00:00.052) 0:03:42.630 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph osd container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:76\nFriday 21 September 2018 08:30:00 -0400 (0:00:00.056) 0:03:42.686 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph rgw container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:85\nFriday 21 September 2018 08:30:00 -0400 (0:00:00.050) 0:03:42.737 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph mds container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:94\nFriday 21 September 2018 08:30:00 -0400 (0:00:00.048) 0:03:42.785 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph mgr container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:103\nFriday 21 September 2018 08:30:00 -0400 (0:00:00.049) 0:03:42.835 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph rbd mirror container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:112\nFriday 21 September 2018 08:30:00 -0400 (0:00:00.050) 0:03:42.885 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph nfs container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:121\nFriday 21 September 2018 08:30:00 -0400 (0:00:00.049) 0:03:42.935 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mon_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:130\nFriday 21 September 2018 08:30:00 -0400 (0:00:00.059) 0:03:42.994 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_osd_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:137\nFriday 21 September 2018 08:30:00 -0400 (0:00:00.050) 0:03:43.045 ****** \nskipping: [compute-0] => 
{\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mds_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:144\nFriday 21 September 2018 08:30:00 -0400 (0:00:00.051) 0:03:43.096 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rgw_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:151\nFriday 21 September 2018 08:30:01 -0400 (0:00:00.052) 0:03:43.149 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mgr_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:158\nFriday 21 September 2018 08:30:01 -0400 (0:00:00.050) 0:03:43.199 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:165\nFriday 21 September 2018 08:30:01 -0400 (0:00:00.050) 0:03:43.250 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_nfs_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:172\nFriday 21 September 2018 08:30:01 -0400 (0:00:00.057) 0:03:43.307 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : pulling 192.168.24.1:8787/rhceph:3-12 image] ********\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:179\nFriday 21 September 2018 08:30:01 -0400 (0:00:00.052) 0:03:43.360 ****** \nok: [compute-0] => {\"attempts\": 1, \"changed\": false, \"cmd\": [\"timeout\", \"300s\", \"docker\", \"pull\", \"192.168.24.1:8787/rhceph:3-12\"], \"delta\": \"0:00:13.506012\", \"end\": \"2018-09-21 12:30:14.957705\", \"rc\": 0, \"start\": \"2018-09-21 12:30:01.451693\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Trying to pull repository 192.168.24.1:8787/rhceph ... \\n3-12: Pulling from 192.168.24.1:8787/rhceph\\n428a9ca37f0e: Pulling fs layer\\n8115a58d83bd: Pulling fs layer\\n5e409f26eefe: Pulling fs layer\\n8115a58d83bd: Download complete\\n428a9ca37f0e: Verifying Checksum\\n428a9ca37f0e: Download complete\\n5e409f26eefe: Verifying Checksum\\n5e409f26eefe: Download complete\\n428a9ca37f0e: Pull complete\\n8115a58d83bd: Pull complete\\n5e409f26eefe: Pull complete\\nDigest: sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\\nStatus: Downloaded newer image for 192.168.24.1:8787/rhceph:3-12\", \"stdout_lines\": [\"Trying to pull repository 192.168.24.1:8787/rhceph ... 
\", \"3-12: Pulling from 192.168.24.1:8787/rhceph\", \"428a9ca37f0e: Pulling fs layer\", \"8115a58d83bd: Pulling fs layer\", \"5e409f26eefe: Pulling fs layer\", \"8115a58d83bd: Download complete\", \"428a9ca37f0e: Verifying Checksum\", \"428a9ca37f0e: Download complete\", \"5e409f26eefe: Verifying Checksum\", \"5e409f26eefe: Download complete\", \"428a9ca37f0e: Pull complete\", \"8115a58d83bd: Pull complete\", \"5e409f26eefe: Pull complete\", \"Digest: sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\", \"Status: Downloaded newer image for 192.168.24.1:8787/rhceph:3-12\"]}\n\nTASK [ceph-docker-common : inspecting 192.168.24.1:8787/rhceph:3-12 image after pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:189\nFriday 21 September 2018 08:30:15 -0400 (0:00:13.756) 0:03:57.116 ****** \nchanged: [compute-0] => {\"changed\": true, \"cmd\": [\"docker\", \"inspect\", \"192.168.24.1:8787/rhceph:3-12\"], \"delta\": \"0:00:00.024215\", \"end\": \"2018-09-21 12:30:15.230189\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-09-21 12:30:15.205974\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"[\\n {\\n \\\"Id\\\": \\\"sha256:fa3b551f095247f90f7ace123cc14519b26abd4c042eb0270ab2452d9636a41f\\\",\\n \\\"RepoTags\\\": [\\n \\\"192.168.24.1:8787/rhceph:3-12\\\"\\n ],\\n \\\"RepoDigests\\\": [\\n \\\"192.168.24.1:8787/rhceph@sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\\\"\\n ],\\n \\\"Parent\\\": \\\"\\\",\\n \\\"Comment\\\": \\\"\\\",\\n \\\"Created\\\": \\\"2018-08-06T22:30:33.81313Z\\\",\\n \\\"Container\\\": \\\"\\\",\\n \\\"ContainerConfig\\\": {\\n \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": [\\n \\\"/bin/sh\\\",\\n \\\"-c\\\",\\n \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z5-b1e8f.repo'\\\"\\n ],\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"sha256:379a20daa6033d04119c4ca45fffe3e50f0cfd517d8712a222b53bea11ee4493\\\",\\n \\\"Volumes\\\": null,\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"stable-3.0\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\\n \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": 
\\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.expose-services\\\": \\\"\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"12\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\\n \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. Use it as a base to build your own images.\\\",\\n \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"DockerVersion\\\": \\\"1.12.6\\\",\\n \\\"Author\\\": \\\"\\\",\\n \\\"Config\\\": {\\n \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": null,\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"9e41cab2948f6e02bef2d4df2d2d21f082f2e9f5b5aadcb04d70201596959834\\\",\\n \\\"Volumes\\\": null,\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"stable-3.0\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\\n \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e 
OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.expose-services\\\": \\\"\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"12\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\\n \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. Use it as a base to build your own images.\\\",\\n \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"Architecture\\\": \\\"amd64\\\",\\n \\\"Os\\\": \\\"linux\\\",\\n \\\"Size\\\": 592066185,\\n \\\"VirtualSize\\\": 592066185,\\n \\\"GraphDriver\\\": {\\n \\\"Name\\\": \\\"overlay2\\\",\\n \\\"Data\\\": {\\n \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/d3ab66fd3e2597dbe6fc7283ddb6892de69d901df9f733b3e6c08b44844d82eb/diff:/var/lib/docker/overlay2/fb0ca68008a4f6a2f8fe648a8a5da392d76df2d766b5494fff02e603d0bbd0a8/diff\\\",\\n \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/b7a38e55a0284b5bd8ffe62ac3c56122035f224aa471de3dfe30baf1dd98a92d/merged\\\",\\n \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/b7a38e55a0284b5bd8ffe62ac3c56122035f224aa471de3dfe30baf1dd98a92d/diff\\\",\\n \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/b7a38e55a0284b5bd8ffe62ac3c56122035f224aa471de3dfe30baf1dd98a92d/work\\\"\\n }\\n },\\n \\\"RootFS\\\": {\\n \\\"Type\\\": \\\"layers\\\",\\n \\\"Layers\\\": [\\n \\\"sha256:db195156f4cd9e83cf2a76f1319d5f839cf2552ea1d23c0317931786b1f594cf\\\",\\n \\\"sha256:6e8ca199394f13d2b08b652f8281d3c2f8ad22333737e8ad2ff554f881bcd8a1\\\",\\n \\\"sha256:984d7131485eaebe7b45bb3052fae34a956316f94faf95681b8480a904179cfa\\\"\\n ]\\n }\\n }\\n]\", \"stdout_lines\": [\"[\", \" {\", \" \\\"Id\\\": \\\"sha256:fa3b551f095247f90f7ace123cc14519b26abd4c042eb0270ab2452d9636a41f\\\",\", \" \\\"RepoTags\\\": [\", \" \\\"192.168.24.1:8787/rhceph:3-12\\\"\", \" ],\", \" \\\"RepoDigests\\\": [\", \" \\\"192.168.24.1:8787/rhceph@sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\\\"\", \" ],\", \" \\\"Parent\\\": \\\"\\\",\", \" \\\"Comment\\\": \\\"\\\",\", \" \\\"Created\\\": \\\"2018-08-06T22:30:33.81313Z\\\",\", \" \\\"Container\\\": \\\"\\\",\", \" \\\"ContainerConfig\\\": {\", \" \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": 
false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": [\", \" \\\"/bin/sh\\\",\", \" \\\"-c\\\",\", \" \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z5-b1e8f.repo'\\\"\", \" ],\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"sha256:379a20daa6033d04119c4ca45fffe3e50f0cfd517d8712a222b53bea11ee4493\\\",\", \" \\\"Volumes\\\": null,\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"stable-3.0\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\", \" \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.expose-services\\\": \\\"\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"12\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\", \" \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\", \" \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"DockerVersion\\\": \\\"1.12.6\\\",\", \" \\\"Author\\\": \\\"\\\",\", \" \\\"Config\\\": {\", \" \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": null,\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"9e41cab2948f6e02bef2d4df2d2d21f082f2e9f5b5aadcb04d70201596959834\\\",\", \" \\\"Volumes\\\": null,\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"stable-3.0\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\", \" \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.expose-services\\\": \\\"\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"12\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\", \" \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\", \" \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"Architecture\\\": \\\"amd64\\\",\", \" \\\"Os\\\": \\\"linux\\\",\", \" \\\"Size\\\": 592066185,\", \" \\\"VirtualSize\\\": 592066185,\", \" \\\"GraphDriver\\\": {\", \" \\\"Name\\\": \\\"overlay2\\\",\", \" \\\"Data\\\": {\", \" \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/d3ab66fd3e2597dbe6fc7283ddb6892de69d901df9f733b3e6c08b44844d82eb/diff:/var/lib/docker/overlay2/fb0ca68008a4f6a2f8fe648a8a5da392d76df2d766b5494fff02e603d0bbd0a8/diff\\\",\", \" \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/b7a38e55a0284b5bd8ffe62ac3c56122035f224aa471de3dfe30baf1dd98a92d/merged\\\",\", \" \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/b7a38e55a0284b5bd8ffe62ac3c56122035f224aa471de3dfe30baf1dd98a92d/diff\\\",\", \" \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/b7a38e55a0284b5bd8ffe62ac3c56122035f224aa471de3dfe30baf1dd98a92d/work\\\"\", \" }\", \" },\", \" \\\"RootFS\\\": {\", \" \\\"Type\\\": \\\"layers\\\",\", \" \\\"Layers\\\": [\", \" \\\"sha256:db195156f4cd9e83cf2a76f1319d5f839cf2552ea1d23c0317931786b1f594cf\\\",\", \" \\\"sha256:6e8ca199394f13d2b08b652f8281d3c2f8ad22333737e8ad2ff554f881bcd8a1\\\",\", \" \\\"sha256:984d7131485eaebe7b45bb3052fae34a956316f94faf95681b8480a904179cfa\\\"\", \" ]\", \" }\", \" }\", \"]\"]}\n\nTASK [ceph-docker-common : set_fact image_repodigest_after_pulling] ************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:194\nFriday 21 September 2018 08:30:15 -0400 (0:00:00.280) 0:03:57.397 ****** \nok: [compute-0] => {\"ansible_facts\": {\"image_repodigest_after_pulling\": \"sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact ceph_mon_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:200\nFriday 21 September 2018 08:30:15 -0400 (0:00:00.088) 0:03:57.486 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_osd_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:211\nFriday 21 September 2018 08:30:15 -0400 (0:00:00.052) 0:03:57.539 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mds_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:222\nFriday 21 September 2018 08:30:15 -0400 (0:00:00.049) 0:03:57.589 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rgw_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:233\nFriday 21 September 2018 08:30:15 -0400 (0:00:00.055) 0:03:57.644 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mgr_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:244\nFriday 21 September 2018 08:30:15 -0400 (0:00:00.055) 0:03:57.700 ****** \nskipping: 
[compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_updated] *************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:255\nFriday 21 September 2018 08:30:15 -0400 (0:00:00.051) 0:03:57.751 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_nfs_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:266\nFriday 21 September 2018 08:30:15 -0400 (0:00:00.050) 0:03:57.801 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : export local ceph dev image] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:277\nFriday 21 September 2018 08:30:15 -0400 (0:00:00.056) 0:03:57.858 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : copy ceph dev image file] ***************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:285\nFriday 21 September 2018 08:30:15 -0400 (0:00:00.049) 0:03:57.908 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : load ceph dev image] ********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:292\nFriday 21 September 2018 08:30:15 -0400 (0:00:00.046) 0:03:57.955 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : remove tmp ceph dev image file] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:297\nFriday 21 September 2018 08:30:15 -0400 (0:00:00.052) 0:03:58.008 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : get ceph version] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:84\nFriday 21 September 2018 08:30:15 -0400 (0:00:00.048) 0:03:58.056 ****** \nok: [compute-0] => {\"changed\": false, \"cmd\": [\"docker\", \"run\", \"--rm\", \"--entrypoint\", \"/usr/bin/ceph\", \"192.168.24.1:8787/rhceph:3-12\", \"--version\"], \"delta\": \"0:00:00.441057\", \"end\": \"2018-09-21 12:30:16.689421\", \"rc\": 0, \"start\": \"2018-09-21 12:30:16.248364\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"ceph version 12.2.4-42.el7cp (f73642baacccbf2a3c254d1fb5f0317b933b28cf) luminous (stable)\", \"stdout_lines\": [\"ceph version 12.2.4-42.el7cp (f73642baacccbf2a3c254d1fb5f0317b933b28cf) luminous (stable)\"]}\n\nTASK [ceph-docker-common : set_fact ceph_version ceph_version.stdout.split] ****\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:90\nFriday 21 September 2018 08:30:16 -0400 (0:00:00.786) 0:03:58.843 ****** \nok: [compute-0] => {\"ansible_facts\": {\"ceph_version\": \"12.2.4-42.el7cp\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact ceph_release jewel] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:2\nFriday 21 September 2018 08:30:16 -0400 (0:00:00.083) 0:03:58.926 ****** \nskipping: [compute-0] => 
{\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_release kraken] ***********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:8\nFriday 21 September 2018 08:30:16 -0400 (0:00:00.050) 0:03:58.977 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_release luminous] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:14\nFriday 21 September 2018 08:30:16 -0400 (0:00:00.061) 0:03:59.038 ****** \nok: [compute-0] => {\"ansible_facts\": {\"ceph_release\": \"luminous\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact ceph_release mimic] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:20\nFriday 21 September 2018 08:30:17 -0400 (0:00:00.207) 0:03:59.246 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_release nautilus] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:26\nFriday 21 September 2018 08:30:17 -0400 (0:00:00.050) 0:03:59.296 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : create bootstrap directories] ***********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml:2\nFriday 21 September 2018 08:30:17 -0400 (0:00:00.054) 0:03:59.351 ****** \nchanged: [compute-0] => (item=/etc/ceph) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}\nchanged: [compute-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}\nchanged: [compute-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}\nchanged: [compute-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}\nchanged: [compute-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}\n\nTASK [ceph-config : create ceph conf directory] ********************************\ntask path: 
/usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:4\nFriday 21 September 2018 08:30:18 -0400 (0:00:00.915) 0:04:00.266 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : generate ceph configuration file: ceph.conf] ***************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:12\nFriday 21 September 2018 08:30:18 -0400 (0:00:00.048) 0:04:00.314 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : create a local fetch directory if it does not exist] *******\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:38\nFriday 21 September 2018 08:30:18 -0400 (0:00:00.052) 0:04:00.367 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : generate cluster uuid] *************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:54\nFriday 21 September 2018 08:30:18 -0400 (0:00:00.062) 0:04:00.429 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : read cluster uuid if it already exists] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:64\nFriday 21 September 2018 08:30:18 -0400 (0:00:00.052) 0:04:00.482 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : ensure /etc/ceph exists] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:76\nFriday 21 September 2018 08:30:18 -0400 (0:00:00.049) 0:04:00.532 ****** \nchanged: [compute-0] => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\n\nTASK [ceph-config : generate ceph.conf configuration file] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:84\nFriday 21 September 2018 08:30:18 -0400 (0:00:00.362) 0:04:00.895 ****** \nNOTIFIED HANDLER ceph-defaults : set _mon_handler_called before restart for compute-0\nNOTIFIED HANDLER ceph-defaults : copy mon restart script for compute-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mon daemon(s) - non container for compute-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mon daemon(s) - container for compute-0\nNOTIFIED HANDLER ceph-defaults : set _mon_handler_called after restart for compute-0\nNOTIFIED HANDLER ceph-defaults : set _osd_handler_called before restart for compute-0\nNOTIFIED HANDLER ceph-defaults : copy osd restart script for compute-0\nNOTIFIED HANDLER ceph-defaults : restart ceph osds daemon(s) - non container for compute-0\nNOTIFIED HANDLER ceph-defaults : restart ceph osds daemon(s) - container for compute-0\nNOTIFIED HANDLER ceph-defaults : set _osd_handler_called after restart for compute-0\nNOTIFIED HANDLER ceph-defaults : set _mds_handler_called before restart for compute-0\nNOTIFIED HANDLER ceph-defaults : copy mds restart script for compute-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mds daemon(s) - non container for compute-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mds daemon(s) - container for compute-0\nNOTIFIED HANDLER ceph-defaults : set _mds_handler_called after restart for compute-0\nNOTIFIED HANDLER 
ceph-defaults : set _rgw_handler_called before restart for compute-0\nNOTIFIED HANDLER ceph-defaults : copy rgw restart script for compute-0\nNOTIFIED HANDLER ceph-defaults : restart ceph rgw daemon(s) - non container for compute-0\nNOTIFIED HANDLER ceph-defaults : restart ceph rgw daemon(s) - container for compute-0\nNOTIFIED HANDLER ceph-defaults : set _rgw_handler_called after restart for compute-0\nNOTIFIED HANDLER ceph-defaults : set _mgr_handler_called before restart for compute-0\nNOTIFIED HANDLER ceph-defaults : copy mgr restart script for compute-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - non container for compute-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - container for compute-0\nNOTIFIED HANDLER ceph-defaults : set _mgr_handler_called after restart for compute-0\nNOTIFIED HANDLER ceph-defaults : set _rbdmirror_handler_called before restart for compute-0\nNOTIFIED HANDLER ceph-defaults : copy rbd mirror restart script for compute-0\nNOTIFIED HANDLER ceph-defaults : restart ceph rbd mirror daemon(s) - non container for compute-0\nNOTIFIED HANDLER ceph-defaults : restart ceph rbd mirror daemon(s) - container for compute-0\nNOTIFIED HANDLER ceph-defaults : set _rbdmirror_handler_called after restart for compute-0\nchanged: [compute-0] => {\"changed\": true, \"checksum\": \"47fa113e6b0aba60bb5249f924dcb7ca6e8dca0c\", \"dest\": \"/etc/ceph/ceph.conf\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"c3aeecdba6e11cab925f4842591b2d45\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 1320, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1537533018.95-127994901059470/source\", \"state\": \"file\", \"uid\": 0}\n
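
This is the moment ceph-ansible writes /etc/ceph/ceph.conf on the client node: the template render reports changed=true (1320 bytes, mode 0644) and queues the long list of ceph-defaults restart handlers seen above. Each handler re-evaluates its own condition when it runs, so on a compute node most of them skip, as the RUNNING HANDLER blocks further down show. A minimal sketch of this render-and-notify shape (template name, host group, and handler body are illustrative, not the role's real code):

  ---
  # Sketch only: render a config file and notify a restart handler, mirroring
  # the "generate ceph.conf configuration file" task above. The real role
  # ships its own ceph.conf.j2 template and a much larger handler set.
  - hosts: clients
    tasks:
      - name: generate ceph.conf configuration file
        template:
          src: ceph.conf.j2            # hypothetical local template
          dest: /etc/ceph/ceph.conf
          owner: root
          group: root
          mode: '0644'
        notify: restart ceph mon daemon(s) - container
    handlers:
      - name: restart ceph mon daemon(s) - container
        debug:
          msg: placeholder for the conditional restart logic
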
\nTASK [ceph-config : set fsid fact when generate_fsid = true] *******************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:102\nFriday 21 September 2018 08:30:21 -0400 (0:00:02.288) 0:04:03.183 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-client : copy ceph admin keyring when non containerized deployment] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-client/tasks/pre_requisite.yml:2\nFriday 21 September 2018 08:30:21 -0400 (0:00:00.050) 0:04:03.234 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-client : set_fact keys_tmp - preserve backward compatibility after the introduction of the ceph_keys module] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:2\nFriday 21 September 2018 08:30:21 -0400 (0:00:00.055) 0:04:03.289 ****** \nskipping: [compute-0] => (item={u'caps': {u'mgr': u'allow *', u'mon': u'profile rbd', u'osd': u'profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics'}, u'mode': u'0600', u'key': u'AQC93KRbAAAAABAA70vXmXELJWdqPtg4IeQHzw==', u'name': u'client.openstack'}) => {\"changed\": false, \"item\": {\"caps\": {\"mgr\": \"allow *\", \"mon\": \"profile rbd\", \"osd\": \"profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics\"}, \"key\": \"AQC93KRbAAAAABAA70vXmXELJWdqPtg4IeQHzw==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, \"skip_reason\": \"Conditional result was False\"}\nskipping: [compute-0] => (item={u'caps': {u'mds': u'allow *', u'osd': u'allow rw', u'mgr': u'allow *', u'mon': u\"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'\"}, u'mode': u'0600', u'key': u'AQC93KRbAAAAABAAh3WQEraYvoGJBcWWeAglYg==', u'name': u'client.manila'}) => {\"changed\": false, \"item\": {\"caps\": {\"mds\": \"allow *\", \"mgr\": \"allow *\", \"mon\": \"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'\", \"osd\": \"allow rw\"}, \"key\": \"AQC93KRbAAAAABAAh3WQEraYvoGJBcWWeAglYg==\", \"mode\": \"0600\", \"name\": \"client.manila\"}, \"skip_reason\": \"Conditional result was False\"}\nskipping: [compute-0] => (item={u'caps': {u'mgr': u'allow *', u'mon': u'allow rw', u'osd': u'allow rwx'}, u'mode': u'0600', u'key': u'AQC93KRbAAAAABAAJK/AdOCua9UOcCGeve+zYA==', u'name': u'client.radosgw'}) => {\"changed\": false, \"item\": {\"caps\": {\"mgr\": \"allow *\", \"mon\": \"allow rw\", \"osd\": \"allow rwx\"}, \"key\": \"AQC93KRbAAAAABAAJK/AdOCua9UOcCGeve+zYA==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-client : set_fact keys - override keys_tmp with keys] ***************\ntask path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:9\nFriday 21 September 2018 08:30:21 -0400 (0:00:00.065) 0:04:03.382 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-client : run a dummy container (sleep 300) from where we can create pool(s)/key(s)] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:15\nFriday 21 September 2018 08:30:21 -0400 (0:00:00.065) 0:04:03.447 ****** \nok: [compute-0] => {\"changed\": false, \"cmd\": [\"docker\", \"run\", \"--rm\", \"-d\", \"-v\", \"/etc/ceph:/etc/ceph:z\", \"--name\", \"ceph-create-keys\", \"--entrypoint=sleep\", \"192.168.24.1:8787/rhceph:3-12\", \"300\"], \"delta\": \"0:00:00.210257\", \"end\": \"2018-09-21 12:30:21.744937\", \"rc\": 0, \"start\": \"2018-09-21 12:30:21.534680\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"94e76e8372b8e6524bd2fa9c447c557a45a925ed060eab206355cac799db4024\", \"stdout_lines\": [\"94e76e8372b8e6524bd2fa9c447c557a45a925ed060eab206355cac799db4024\"]}\n\nTASK [ceph-client : set_fact delegated_node] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:30\nFriday 21 September 2018 08:30:21 -0400 (0:00:00.450) 0:04:03.897 ****** \nok: [compute-0] => {\"ansible_facts\": {\"delegated_node\": \"controller-0\"}, \"changed\": false}\n\nTASK [ceph-client : set_fact condition_copy_admin_key] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:34\nFriday 21 September 2018 08:30:21 -0400 (0:00:00.076) 0:04:03.973 ****** \nok: [compute-0] => {\"ansible_facts\": {\"condition_copy_admin_key\": true}, \"changed\": false}\n\nTASK [ceph-client : set_fact docker_exec_cmd] **********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:38\nFriday 21 September 2018 08:30:21 -0400 (0:00:00.077) 0:04:04.051 ****** \nok: [compute-0] => {\"ansible_facts\": {\"docker_exec_cmd\": \"docker exec ceph-mon-controller-0 \"}, \"changed\": false}\n\nTASK [ceph-client : create cephx key(s)] ***************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:44\nFriday 21 September 2018 08:30:22 -0400 (0:00:00.145) 0:04:04.196 ****** \nchanged: 
[compute-0 -> 192.168.24.18] => (item={u'caps': {u'mgr': u'allow *', u'mon': u'profile rbd', u'osd': u'profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics'}, u'mode': u'0600', u'key': u'AQC93KRbAAAAABAA70vXmXELJWdqPtg4IeQHzw==', u'name': u'client.openstack'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"auth\", \"import\", \"-i\", \"/etc/ceph/ceph.client.openstack.keyring\"], \"delta\": \"0:00:00.867358\", \"end\": \"2018-09-21 12:30:23.169661\", \"item\": {\"caps\": {\"mgr\": \"allow *\", \"mon\": \"profile rbd\", \"osd\": \"profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics\"}, \"key\": \"AQC93KRbAAAAABAA70vXmXELJWdqPtg4IeQHzw==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, \"rc\": 0, \"start\": \"2018-09-21 12:30:22.302303\", \"stderr\": \"imported keyring\", \"stderr_lines\": [\"imported keyring\"], \"stdout\": \"\", \"stdout_lines\": []}\nchanged: [compute-0 -> 192.168.24.18] => (item={u'caps': {u'mds': u'allow *', u'osd': u'allow rw', u'mgr': u'allow *', u'mon': u\"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'\"}, u'mode': u'0600', u'key': u'AQC93KRbAAAAABAAh3WQEraYvoGJBcWWeAglYg==', u'name': u'client.manila'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"auth\", \"import\", \"-i\", \"/etc/ceph/ceph.client.manila.keyring\"], \"delta\": \"0:00:00.928620\", \"end\": \"2018-09-21 12:30:24.277324\", \"item\": {\"caps\": {\"mds\": \"allow *\", \"mgr\": \"allow *\", \"mon\": \"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'\", \"osd\": \"allow rw\"}, \"key\": \"AQC93KRbAAAAABAAh3WQEraYvoGJBcWWeAglYg==\", \"mode\": \"0600\", \"name\": \"client.manila\"}, \"rc\": 0, \"start\": \"2018-09-21 12:30:23.348704\", \"stderr\": \"imported keyring\", \"stderr_lines\": [\"imported keyring\"], \"stdout\": \"\", \"stdout_lines\": []}\nchanged: [compute-0 -> 192.168.24.18] => (item={u'caps': {u'mgr': u'allow *', u'mon': u'allow rw', u'osd': u'allow rwx'}, u'mode': u'0600', u'key': u'AQC93KRbAAAAABAAJK/AdOCua9UOcCGeve+zYA==', u'name': u'client.radosgw'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"auth\", \"import\", \"-i\", \"/etc/ceph/ceph.client.radosgw.keyring\"], \"delta\": \"0:00:00.912810\", \"end\": \"2018-09-21 12:30:25.374281\", \"item\": {\"caps\": {\"mgr\": \"allow *\", \"mon\": \"allow rw\", \"osd\": \"allow rwx\"}, \"key\": \"AQC93KRbAAAAABAAJK/AdOCua9UOcCGeve+zYA==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}, \"rc\": 0, \"start\": \"2018-09-21 12:30:24.461471\", \"stderr\": \"imported keyring\", \"stderr_lines\": [\"imported keyring\"], \"stdout\": \"\", \"stdout_lines\": []}\n
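
Note the delegation pattern in the three results above: the play targets compute-0, but each keyring import is delegated to controller-0 (192.168.24.18) and executed inside the already-running ceph-mon-controller-0 container, since cephx administration has to go through a monitor. A hedged sketch of the same shape (host and key names mirror the log; this is not the verbatim role task):

  ---
  # Sketch only: import a pre-generated cephx keyring by exec-ing ceph inside
  # the mon container on the delegated monitor node.
  - hosts: compute-0
    gather_facts: false
    vars:
      delegated_node: controller-0
      docker_exec_cmd: "docker exec ceph-mon-controller-0 "
    tasks:
      - name: create cephx key(s)
        command: >
          {{ docker_exec_cmd }} ceph --cluster ceph auth import
          -i /etc/ceph/ceph.client.openstack.keyring
        delegate_to: "{{ delegated_node }}"
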
\nTASK [ceph-client : slurp client cephx key(s)] *********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:62\nFriday 21 September 2018 08:30:25 -0400 (0:00:03.362) 0:04:07.559 ****** \nok: [compute-0 -> 192.168.24.18] => (item={u'caps': {u'mgr': u'allow *', u'mon': u'profile rbd', u'osd': u'profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics'}, u'mode': u'0600', u'key': u'AQC93KRbAAAAABAA70vXmXELJWdqPtg4IeQHzw==', u'name': u'client.openstack'}) => {\"changed\": false, \"content\": \"W2NsaWVudC5vcGVuc3RhY2tdCglrZXkgPSBBUUM5M0tSYkFBQUFBQkFBNzB2WG1YRUxKV2RxUHRnNEllUUh6dz09CgljYXBzIG1nciA9ICJhbGxvdyAqIgoJY2FwcyBtb24gPSAicHJvZmlsZSByYmQiCgljYXBzIG9zZCA9ICJwcm9maWxlIHJiZCBwb29sPXZvbHVtZXMsIHByb2ZpbGUgcmJkIHBvb2w9YmFja3VwcywgcHJvZmlsZSByYmQgcG9vbD12bXMsIHByb2ZpbGUgcmJkIHBvb2w9aW1hZ2VzLCBwcm9maWxlIHJiZCBwb29sPW1ldHJpY3MiCg==\", \"encoding\": \"base64\", \"item\": {\"caps\": {\"mgr\": \"allow *\", \"mon\": \"profile rbd\", \"osd\": \"profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics\"}, \"key\": \"AQC93KRbAAAAABAA70vXmXELJWdqPtg4IeQHzw==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, \"source\": \"/etc/ceph/ceph.client.openstack.keyring\"}\nok: [compute-0 -> 192.168.24.18] => (item={u'caps': {u'mds': u'allow *', u'osd': u'allow rw', u'mgr': u'allow *', u'mon': u\"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'\"}, u'mode': u'0600', u'key': u'AQC93KRbAAAAABAAh3WQEraYvoGJBcWWeAglYg==', u'name': u'client.manila'}) => {\"changed\": false, \"content\": \"W2NsaWVudC5tYW5pbGFdCglrZXkgPSBBUUM5M0tSYkFBQUFBQkFBaDNXUUVyYVl2b0dKQmNXV2VBZ2xZZz09CgljYXBzIG1kcyA9ICJhbGxvdyAqIgoJY2FwcyBtZ3IgPSAiYWxsb3cgKiIKCWNhcHMgbW9uID0gImFsbG93IHIsIGFsbG93IGNvbW1hbmQgJ2F1dGggZGVsJywgYWxsb3cgY29tbWFuZCAnYXV0aCBjYXBzJywgYWxsb3cgY29tbWFuZCAnYXV0aCBnZXQnLCBhbGxvdyBjb21tYW5kICdhdXRoIGdldC1vci1jcmVhdGUnIgoJY2FwcyBvc2QgPSAiYWxsb3cgcnciCg==\", \"encoding\": \"base64\", \"item\": {\"caps\": {\"mds\": \"allow *\", \"mgr\": \"allow *\", \"mon\": \"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'\", \"osd\": \"allow rw\"}, \"key\": \"AQC93KRbAAAAABAAh3WQEraYvoGJBcWWeAglYg==\", \"mode\": \"0600\", \"name\": \"client.manila\"}, \"source\": \"/etc/ceph/ceph.client.manila.keyring\"}\nok: [compute-0 -> 192.168.24.18] => (item={u'caps': {u'mgr': u'allow *', u'mon': u'allow rw', u'osd': u'allow rwx'}, u'mode': u'0600', u'key': u'AQC93KRbAAAAABAAJK/AdOCua9UOcCGeve+zYA==', u'name': u'client.radosgw'}) => {\"changed\": false, \"content\": \"W2NsaWVudC5yYWRvc2d3XQoJa2V5ID0gQVFDOTNLUmJBQUFBQUJBQUpLL0FkT0N1YTlVT2NDR2V2ZSt6WUE9PQoJY2FwcyBtZ3IgPSAiYWxsb3cgKiIKCWNhcHMgbW9uID0gImFsbG93IHJ3IgoJY2FwcyBvc2QgPSAiYWxsb3cgcnd4Igo=\", \"encoding\": \"base64\", \"item\": {\"caps\": {\"mgr\": \"allow *\", \"mon\": \"allow rw\", \"osd\": \"allow rwx\"}, \"key\": \"AQC93KRbAAAAABAAJK/AdOCua9UOcCGeve+zYA==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}, \"source\": \"/etc/ceph/ceph.client.radosgw.keyring\"}\n\nTASK [ceph-client : list existing pool(s)] *************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:74\nFriday 21 September 2018 08:30:26 -0400 (0:00:00.603) 0:04:08.162 ****** \n\nTASK [ceph-client : create ceph pool(s)] ***************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:86\nFriday 21 September 2018 08:30:26 -0400 (0:00:00.048) 0:04:08.211 ****** \n\nTASK [ceph-client : get client cephx keys] *************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:109\nFriday 21 September 2018 08:30:26 -0400 (0:00:00.045) 0:04:08.257 ****** \nchanged: [compute-0] => (item={'_ansible_parsed': True, 
'changed': False, '_ansible_no_log': False, u'encoding': u'base64', '_ansible_item_result': True, u'content': u'W2NsaWVudC5vcGVuc3RhY2tdCglrZXkgPSBBUUM5M0tSYkFBQUFBQkFBNzB2WG1YRUxKV2RxUHRnNEllUUh6dz09CgljYXBzIG1nciA9ICJhbGxvdyAqIgoJY2FwcyBtb24gPSAicHJvZmlsZSByYmQiCgljYXBzIG9zZCA9ICJwcm9maWxlIHJiZCBwb29sPXZvbHVtZXMsIHByb2ZpbGUgcmJkIHBvb2w9YmFja3VwcywgcHJvZmlsZSByYmQgcG9vbD12bXMsIHByb2ZpbGUgcmJkIHBvb2w9aW1hZ2VzLCBwcm9maWxlIHJiZCBwb29sPW1ldHJpY3MiCg==', 'failed': False, u'source': u'/etc/ceph/ceph.client.openstack.keyring', 'item': {u'mode': u'0600', u'name': u'client.openstack', u'key': u'AQC93KRbAAAAABAA70vXmXELJWdqPtg4IeQHzw==', u'caps': {u'mgr': u'allow *', u'mon': u'profile rbd', u'osd': u'profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics'}}, u'invocation': {u'module_args': {u'src': u'/etc/ceph/ceph.client.openstack.keyring'}}, '_ansible_delegated_vars': {'ansible_delegated_host': u'controller-0', 'ansible_host': u'192.168.24.18'}, '_ansible_ignore_errors': None, '_ansible_item_label': {u'name': u'client.openstack', u'mode': u'0600', u'key': u'AQC93KRbAAAAABAA70vXmXELJWdqPtg4IeQHzw==', u'caps': {u'mgr': u'allow *', u'mon': u'profile rbd', u'osd': u'profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics'}}}) => {\"changed\": true, \"checksum\": \"40ed8b50cf9c2c93b1fd620a66672adaecbdd5ae\", \"dest\": \"/etc/ceph/ceph.client.openstack.keyring\", \"gid\": 167, \"group\": \"167\", \"item\": {\"changed\": false, \"content\": \"W2NsaWVudC5vcGVuc3RhY2tdCglrZXkgPSBBUUM5M0tSYkFBQUFBQkFBNzB2WG1YRUxKV2RxUHRnNEllUUh6dz09CgljYXBzIG1nciA9ICJhbGxvdyAqIgoJY2FwcyBtb24gPSAicHJvZmlsZSByYmQiCgljYXBzIG9zZCA9ICJwcm9maWxlIHJiZCBwb29sPXZvbHVtZXMsIHByb2ZpbGUgcmJkIHBvb2w9YmFja3VwcywgcHJvZmlsZSByYmQgcG9vbD12bXMsIHByb2ZpbGUgcmJkIHBvb2w9aW1hZ2VzLCBwcm9maWxlIHJiZCBwb29sPW1ldHJpY3MiCg==\", \"encoding\": \"base64\", \"failed\": false, \"invocation\": {\"module_args\": {\"src\": \"/etc/ceph/ceph.client.openstack.keyring\"}}, \"item\": {\"caps\": {\"mgr\": \"allow *\", \"mon\": \"profile rbd\", \"osd\": \"profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics\"}, \"key\": \"AQC93KRbAAAAABAA70vXmXELJWdqPtg4IeQHzw==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, \"source\": \"/etc/ceph/ceph.client.openstack.keyring\"}, \"md5sum\": \"a6757c87664e50e0fa2a4a0c24ffa2db\", \"mode\": \"0600\", \"owner\": \"167\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 253, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1537533026.24-153171000360498/source\", \"state\": \"file\", \"uid\": 167}\nchanged: [compute-0] => (item={'_ansible_parsed': True, 'changed': False, '_ansible_no_log': False, u'encoding': u'base64', '_ansible_item_result': True, u'content': u'W2NsaWVudC5tYW5pbGFdCglrZXkgPSBBUUM5M0tSYkFBQUFBQkFBaDNXUUVyYVl2b0dKQmNXV2VBZ2xZZz09CgljYXBzIG1kcyA9ICJhbGxvdyAqIgoJY2FwcyBtZ3IgPSAiYWxsb3cgKiIKCWNhcHMgbW9uID0gImFsbG93IHIsIGFsbG93IGNvbW1hbmQgJ2F1dGggZGVsJywgYWxsb3cgY29tbWFuZCAnYXV0aCBjYXBzJywgYWxsb3cgY29tbWFuZCAnYXV0aCBnZXQnLCBhbGxvdyBjb21tYW5kICdhdXRoIGdldC1vci1jcmVhdGUnIgoJY2FwcyBvc2QgPSAiYWxsb3cgcnciCg==', 'failed': False, u'source': u'/etc/ceph/ceph.client.manila.keyring', 'item': {u'mode': u'0600', u'name': u'client.manila', u'key': u'AQC93KRbAAAAABAAh3WQEraYvoGJBcWWeAglYg==', u'caps': {u'mds': u'allow *', u'osd': u'allow rw', u'mon': u\"allow r, allow command 'auth del', allow command 'auth caps', allow 
command 'auth get', allow command 'auth get-or-create'\", u'mgr': u'allow *'}}, u'invocation': {u'module_args': {u'src': u'/etc/ceph/ceph.client.manila.keyring'}}, '_ansible_delegated_vars': {'ansible_delegated_host': u'controller-0', 'ansible_host': u'192.168.24.18'}, '_ansible_ignore_errors': None, '_ansible_item_label': {u'name': u'client.manila', u'mode': u'0600', u'key': u'AQC93KRbAAAAABAAh3WQEraYvoGJBcWWeAglYg==', u'caps': {u'mds': u'allow *', u'osd': u'allow rw', u'mon': u\"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'\", u'mgr': u'allow *'}}}) => {\"changed\": true, \"checksum\": \"e119bc7d0367829cffba7f254fed5c0f7663e7a7\", \"dest\": \"/etc/ceph/ceph.client.manila.keyring\", \"gid\": 167, \"group\": \"167\", \"item\": {\"changed\": false, \"content\": \"W2NsaWVudC5tYW5pbGFdCglrZXkgPSBBUUM5M0tSYkFBQUFBQkFBaDNXUUVyYVl2b0dKQmNXV2VBZ2xZZz09CgljYXBzIG1kcyA9ICJhbGxvdyAqIgoJY2FwcyBtZ3IgPSAiYWxsb3cgKiIKCWNhcHMgbW9uID0gImFsbG93IHIsIGFsbG93IGNvbW1hbmQgJ2F1dGggZGVsJywgYWxsb3cgY29tbWFuZCAnYXV0aCBjYXBzJywgYWxsb3cgY29tbWFuZCAnYXV0aCBnZXQnLCBhbGxvdyBjb21tYW5kICdhdXRoIGdldC1vci1jcmVhdGUnIgoJY2FwcyBvc2QgPSAiYWxsb3cgcnciCg==\", \"encoding\": \"base64\", \"failed\": false, \"invocation\": {\"module_args\": {\"src\": \"/etc/ceph/ceph.client.manila.keyring\"}}, \"item\": {\"caps\": {\"mds\": \"allow *\", \"mgr\": \"allow *\", \"mon\": \"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'\", \"osd\": \"allow rw\"}, \"key\": \"AQC93KRbAAAAABAAh3WQEraYvoGJBcWWeAglYg==\", \"mode\": \"0600\", \"name\": \"client.manila\"}, \"source\": \"/etc/ceph/ceph.client.manila.keyring\"}, \"md5sum\": \"d42eb2e49e090ff13248fba0db5c0a6f\", \"mode\": \"0600\", \"owner\": \"167\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 268, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1537533026.7-38127782555482/source\", \"state\": \"file\", \"uid\": 167}\nchanged: [compute-0] => (item={'_ansible_parsed': True, 'changed': False, '_ansible_no_log': False, u'encoding': u'base64', '_ansible_item_result': True, u'content': u'W2NsaWVudC5yYWRvc2d3XQoJa2V5ID0gQVFDOTNLUmJBQUFBQUJBQUpLL0FkT0N1YTlVT2NDR2V2ZSt6WUE9PQoJY2FwcyBtZ3IgPSAiYWxsb3cgKiIKCWNhcHMgbW9uID0gImFsbG93IHJ3IgoJY2FwcyBvc2QgPSAiYWxsb3cgcnd4Igo=', 'failed': False, u'source': u'/etc/ceph/ceph.client.radosgw.keyring', 'item': {u'mode': u'0600', u'name': u'client.radosgw', u'key': u'AQC93KRbAAAAABAAJK/AdOCua9UOcCGeve+zYA==', u'caps': {u'mgr': u'allow *', u'mon': u'allow rw', u'osd': u'allow rwx'}}, u'invocation': {u'module_args': {u'src': u'/etc/ceph/ceph.client.radosgw.keyring'}}, '_ansible_delegated_vars': {'ansible_delegated_host': u'controller-0', 'ansible_host': u'192.168.24.18'}, '_ansible_ignore_errors': None, '_ansible_item_label': {u'name': u'client.radosgw', u'mode': u'0600', u'key': u'AQC93KRbAAAAABAAJK/AdOCua9UOcCGeve+zYA==', u'caps': {u'mgr': u'allow *', u'mon': u'allow rw', u'osd': u'allow rwx'}}}) => {\"changed\": true, \"checksum\": \"32018e3d91a7d0c0ff43f9db5459f66424dd1f38\", \"dest\": \"/etc/ceph/ceph.client.radosgw.keyring\", \"gid\": 167, \"group\": \"167\", \"item\": {\"changed\": false, \"content\": \"W2NsaWVudC5yYWRvc2d3XQoJa2V5ID0gQVFDOTNLUmJBQUFBQUJBQUpLL0FkT0N1YTlVT2NDR2V2ZSt6WUE9PQoJY2FwcyBtZ3IgPSAiYWxsb3cgKiIKCWNhcHMgbW9uID0gImFsbG93IHJ3IgoJY2FwcyBvc2QgPSAiYWxsb3cgcnd4Igo=\", \"encoding\": \"base64\", \"failed\": false, \"invocation\": {\"module_args\": {\"src\": 
\"/etc/ceph/ceph.client.radosgw.keyring\"}}, \"item\": {\"caps\": {\"mgr\": \"allow *\", \"mon\": \"allow rw\", \"osd\": \"allow rwx\"}, \"key\": \"AQC93KRbAAAAABAAJK/AdOCua9UOcCGeve+zYA==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}, \"source\": \"/etc/ceph/ceph.client.radosgw.keyring\"}, \"md5sum\": \"18e4740c17d5c0f4ef7090358897cb02\", \"mode\": \"0600\", \"owner\": \"167\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 134, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1537533027.17-108306519705611/source\", \"state\": \"file\", \"uid\": 167}\n\nRUNNING HANDLER [ceph-defaults : set _mon_handler_called before restart] *******\nFriday 21 September 2018 08:30:27 -0400 (0:00:01.491) 0:04:09.749 ****** \nok: [compute-0] => {\"ansible_facts\": {\"_mon_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy mon restart script] **********************\nFriday 21 September 2018 08:30:27 -0400 (0:00:00.174) 0:04:09.923 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mon daemon(s) - non container] ***\nFriday 21 September 2018 08:30:27 -0400 (0:00:00.051) 0:04:09.975 ****** \nskipping: [compute-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mon daemon(s) - container] *******\nFriday 21 September 2018 08:30:27 -0400 (0:00:00.085) 0:04:10.060 ****** \nskipping: [compute-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _mon_handler_called after restart] ********\nFriday 21 September 2018 08:30:28 -0400 (0:00:00.083) 0:04:10.144 ****** \nok: [compute-0] => {\"ansible_facts\": {\"_mon_handler_called\": false}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : set _osd_handler_called before restart] *******\nFriday 21 September 2018 08:30:28 -0400 (0:00:00.178) 0:04:10.323 ****** \nok: [compute-0] => {\"ansible_facts\": {\"_osd_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy osd restart script] **********************\nFriday 21 September 2018 08:30:28 -0400 (0:00:00.187) 0:04:10.510 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph osds daemon(s) - non container] ***\nFriday 21 September 2018 08:30:28 -0400 (0:00:00.044) 0:04:10.555 ****** \nskipping: [compute-0] => (item=ceph-0) => {\"changed\": false, \"item\": \"ceph-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph osds daemon(s) - container] ******\nFriday 21 September 2018 08:30:28 -0400 (0:00:00.086) 0:04:10.642 ****** \nskipping: [compute-0] => (item=ceph-0) => {\"changed\": false, \"item\": \"ceph-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _osd_handler_called after restart] ********\nFriday 21 September 2018 08:30:28 -0400 (0:00:00.091) 0:04:10.734 ****** \nok: [compute-0] => {\"ansible_facts\": {\"_osd_handler_called\": false}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : set _mds_handler_called before restart] *******\nFriday 21 September 2018 08:30:28 -0400 (0:00:00.185) 0:04:10.919 ****** \nok: [compute-0] => {\"ansible_facts\": {\"_mds_handler_called\": true}, \"changed\": 
false}\n\nRUNNING HANDLER [ceph-defaults : copy mds restart script] **********************\nFriday 21 September 2018 08:30:28 -0400 (0:00:00.160) 0:04:11.080 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mds daemon(s) - non container] ***\nFriday 21 September 2018 08:30:29 -0400 (0:00:00.048) 0:04:11.128 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mds daemon(s) - container] *******\nFriday 21 September 2018 08:30:29 -0400 (0:00:00.059) 0:04:11.188 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _mds_handler_called after restart] ********\nFriday 21 September 2018 08:30:29 -0400 (0:00:00.060) 0:04:11.248 ****** \nok: [compute-0] => {\"ansible_facts\": {\"_mds_handler_called\": false}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : set _rgw_handler_called before restart] *******\nFriday 21 September 2018 08:30:29 -0400 (0:00:00.076) 0:04:11.325 ****** \nok: [compute-0] => {\"ansible_facts\": {\"_rgw_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy rgw restart script] **********************\nFriday 21 September 2018 08:30:29 -0400 (0:00:00.082) 0:04:11.408 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph rgw daemon(s) - non container] ***\nFriday 21 September 2018 08:30:29 -0400 (0:00:00.049) 0:04:11.457 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph rgw daemon(s) - container] *******\nFriday 21 September 2018 08:30:29 -0400 (0:00:00.056) 0:04:11.514 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _rgw_handler_called after restart] ********\nFriday 21 September 2018 08:30:29 -0400 (0:00:00.055) 0:04:11.569 ****** \nok: [compute-0] => {\"ansible_facts\": {\"_rgw_handler_called\": false}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : set _rbdmirror_handler_called before restart] ***\nFriday 21 September 2018 08:30:29 -0400 (0:00:00.075) 0:04:11.645 ****** \nok: [compute-0] => {\"ansible_facts\": {\"_rbdmirror_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy rbd mirror restart script] ***************\nFriday 21 September 2018 08:30:29 -0400 (0:00:00.077) 0:04:11.722 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph rbd mirror daemon(s) - non container] ***\nFriday 21 September 2018 08:30:29 -0400 (0:00:00.047) 0:04:11.770 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph rbd mirror daemon(s) - container] ***\nFriday 21 September 2018 08:30:29 -0400 (0:00:00.059) 0:04:11.829 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _rbdmirror_handler_called after restart] ***\nFriday 21 September 2018 08:30:29 -0400 (0:00:00.060) 0:04:11.890 ****** \nok: [compute-0] => {\"ansible_facts\": 
{\"_rbdmirror_handler_called\": false}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : set _mgr_handler_called before restart] *******\nFriday 21 September 2018 08:30:29 -0400 (0:00:00.079) 0:04:11.969 ****** \nok: [compute-0] => {\"ansible_facts\": {\"_mgr_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy mgr restart script] **********************\nFriday 21 September 2018 08:30:29 -0400 (0:00:00.079) 0:04:12.048 ****** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - non container] ***\nFriday 21 September 2018 08:30:29 -0400 (0:00:00.044) 0:04:12.093 ****** \nskipping: [compute-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - container] *******\nFriday 21 September 2018 08:30:30 -0400 (0:00:00.088) 0:04:12.181 ****** \nskipping: [compute-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _mgr_handler_called after restart] ********\nFriday 21 September 2018 08:30:30 -0400 (0:00:00.094) 0:04:12.276 ****** \nok: [compute-0] => {\"ansible_facts\": {\"_mgr_handler_called\": false}, \"changed\": false}\nMETA: ran handlers\n\nTASK [set ceph client install 'Complete'] **************************************\ntask path: /usr/share/ceph-ansible/site-docker.yml.sample:325\nFriday 21 September 2018 08:30:30 -0400 (0:00:00.103) 0:04:12.380 ****** \nok: [compute-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_client\": {\"end\": \"20180921083030Z\", \"status\": \"Complete\"}}, \"per_host\": false}, \"changed\": false}\nMETA: ran handlers\n\nPLAY RECAP *********************************************************************\nceph-0 : ok=88 changed=19 unreachable=0 failed=0 \ncompute-0 : ok=55 changed=7 unreachable=0 failed=0 \ncontroller-0 : ok=121 changed=22 unreachable=0 failed=0 \n\n\nINSTALLER STATUS ***************************************************************\nInstall Ceph Monitor : Complete (0:01:02)\nInstall Ceph Manager : Complete (0:00:25)\nInstall Ceph OSD : Complete (0:01:49)\nInstall Ceph Client : Complete (0:00:40)\n\nFriday 21 September 2018 08:30:30 -0400 (0:00:00.074) 0:04:12.454 ****** \n=============================================================================== ", "stdout_lines": ["ansible-playbook 2.5.7", " config file = /usr/share/ceph-ansible/ansible.cfg", " configured module search path = [u'/usr/share/ceph-ansible/library']", " ansible python module location = /usr/lib/python2.7/site-packages/ansible", " executable location = /usr/bin/ansible-playbook", " python version = 2.7.5 (default, May 31 2018, 09:41:32) [GCC 4.8.5 20150623 (Red Hat 4.8.5-28)]", "Using /usr/share/ceph-ansible/ansible.cfg as config file", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/secure_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/configure_ceph_command_aliases.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/fetch_configs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/set_osd_pool_default_pg_num.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/calamari.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-mgr/tasks/common.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/docker/main.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/docker/start_docker_mgr.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/build_devices.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_gpt.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mds/tasks/common.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mds/tasks/non_containerized.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mds/tasks/containerized.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-rgw/tasks/common.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/common.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/pre_requisite_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/pre_requisite_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/create_rgw_nfs_user.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/ganesha_selinux_fix.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/start_nfs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically 
imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/common.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/pre_requisite.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/start_rbd_mirror.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/configure_mirroring.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/docker/main.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/docker/start_docker_rbd_mirror.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically 
imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/pre_requisite.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/start_restapi.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/docker/main.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/docker/copy_configs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/docker/start_docker_restapi.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-client/tasks/pre_requisite.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml", "", "PLAYBOOK: site-docker.yml.sample ***********************************************", "12 plays in /usr/share/ceph-ansible/site-docker.yml.sample", "", "PLAY [mons,agents,osds,mdss,rgws,nfss,restapis,rbdmirrors,clients,iscsigws,iscsi-gws,mgrs] ***", "", "TASK [gather facts] ************************************************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:25", "Friday 21 September 2018 08:26:18 -0400 (0:00:00.240) 0:00:00.240 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [gather and delegate facts] ***********************************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:30", "Friday 21 September 2018 08:26:18 -0400 (0:00:00.098) 0:00:00.338 ****** ", "ok: [controller-0 -> 192.168.24.18] => (item=controller-0)", "ok: [controller-0 -> 192.168.24.8] => (item=compute-0)", "ok: [controller-0 -> 192.168.24.6] => (item=ceph-0)", "", "TASK [check if it is atomic host] **********************************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:39", "Friday 21 September 2018 08:26:31 -0400 (0:00:13.319) 0:00:13.658 ****** ", "ok: [controller-0] => {\"changed\": false, \"stat\": {\"exists\": false}}", "ok: [ceph-0] => {\"changed\": false, \"stat\": {\"exists\": false}}", "ok: [compute-0] => {\"changed\": false, \"stat\": {\"exists\": false}}", "", "TASK [set_fact is_atomic] ******************************************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:46", "Friday 21 September 2018 08:26:32 -0400 (0:00:00.509) 0:00:14.167 ****** ", "ok: [controller-0] => {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}", "ok: [ceph-0] => {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}", "ok: [compute-0] => {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}", "META: ran handlers", "META: ran handlers", "", "TASK [pull rhceph image] *******************************************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:67", "Friday 21 September 2018 08:26:32 -0400 (0:00:00.164) 0:00:14.331 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "META: ran handlers", "", "PLAY [mons] ********************************************************************", "META: ran handlers", "", "TASK [set ceph monitor install 'In Progress'] **********************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:77", "Friday 21 September 2018 08:26:32 -0400 (0:00:00.144) 0:00:14.476 ****** ", "ok: [controller-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_mon\": {\"start\": \"20180921082632Z\", \"status\": \"In Progress\"}}, \"per_host\": false}, \"changed\": false}", "META: ran handlers", 
"META: ran handlers", "", "PLAY [mons] ********************************************************************", "META: ran handlers", "", "TASK [ceph-defaults : check for a mon container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:2", "Friday 21 September 2018 08:26:32 -0400 (0:00:00.289) 0:00:14.765 ****** ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mon-controller-0\"], \"delta\": \"0:00:00.029722\", \"end\": \"2018-09-21 12:26:33.069536\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-09-21 12:26:33.039814\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-defaults : check for an osd container] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:11", "Friday 21 September 2018 08:26:33 -0400 (0:00:00.469) 0:00:15.235 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a mds container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:20", "Friday 21 September 2018 08:26:33 -0400 (0:00:00.052) 0:00:15.288 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a rgw container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:29", "Friday 21 September 2018 08:26:33 -0400 (0:00:00.052) 0:00:15.340 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a mgr container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:38", "Friday 21 September 2018 08:26:33 -0400 (0:00:00.236) 0:00:15.577 ****** ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mgr-controller-0\"], \"delta\": \"0:00:00.023782\", \"end\": \"2018-09-21 12:26:33.687706\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-09-21 12:26:33.663924\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-defaults : check for a rbd mirror container] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:47", "Friday 21 September 2018 08:26:33 -0400 (0:00:00.275) 0:00:15.853 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a nfs container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:56", "Friday 21 September 2018 08:26:33 -0400 (0:00:00.053) 0:00:15.906 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph mon socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:2", "Friday 21 September 2018 08:26:33 -0400 (0:00:00.056) 0:00:15.962 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was 
False\"}", "", "TASK [ceph-defaults : check if the ceph mon socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:11", "Friday 21 September 2018 08:26:33 -0400 (0:00:00.052) 0:00:16.015 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph mon socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:21", "Friday 21 September 2018 08:26:33 -0400 (0:00:00.052) 0:00:16.067 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph osd socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:30", "Friday 21 September 2018 08:26:34 -0400 (0:00:00.049) 0:00:16.116 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph osd socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:40", "Friday 21 September 2018 08:26:34 -0400 (0:00:00.045) 0:00:16.162 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph osd socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:50", "Friday 21 September 2018 08:26:34 -0400 (0:00:00.046) 0:00:16.208 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph mds socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:59", "Friday 21 September 2018 08:26:34 -0400 (0:00:00.045) 0:00:16.254 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph mds socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:69", "Friday 21 September 2018 08:26:34 -0400 (0:00:00.047) 0:00:16.301 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph mds socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:79", "Friday 21 September 2018 08:26:34 -0400 (0:00:00.046) 0:00:16.348 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph rgw socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:88", "Friday 21 September 2018 08:26:34 -0400 (0:00:00.047) 0:00:16.395 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph rgw socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:98", "Friday 21 September 2018 08:26:34 -0400 
(0:00:00.044) 0:00:16.440 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph rgw socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:108", "Friday 21 September 2018 08:26:34 -0400 (0:00:00.044) 0:00:16.485 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph mgr socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:117", "Friday 21 September 2018 08:26:34 -0400 (0:00:00.046) 0:00:16.532 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph mgr socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:127", "Friday 21 September 2018 08:26:34 -0400 (0:00:00.051) 0:00:16.583 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph mgr socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:137", "Friday 21 September 2018 08:26:34 -0400 (0:00:00.045) 0:00:16.628 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph rbd mirror socket] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:146", "Friday 21 September 2018 08:26:34 -0400 (0:00:00.046) 0:00:16.675 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph rbd mirror socket is in-use] ***********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:156", "Friday 21 September 2018 08:26:34 -0400 (0:00:00.053) 0:00:16.728 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph rbd mirror socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:166", "Friday 21 September 2018 08:26:34 -0400 (0:00:00.053) 0:00:16.782 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph nfs ganesha socket] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:175", "Friday 21 September 2018 08:26:34 -0400 (0:00:00.057) 0:00:16.839 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph nfs ganesha socket is in-use] **********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:184", "Friday 21 September 2018 08:26:34 -0400 (0:00:00.054) 0:00:16.893 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph nfs ganesha socket if exists and not used by a 
process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:194", "Friday 21 September 2018 08:26:34 -0400 (0:00:00.053) 0:00:16.947 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if it is atomic host] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:2", "Friday 21 September 2018 08:26:34 -0400 (0:00:00.053) 0:00:17.001 ****** ", "ok: [controller-0] => {\"changed\": false, \"stat\": {\"exists\": false}}", "", "TASK [ceph-defaults : set_fact is_atomic] **************************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:7", "Friday 21 September 2018 08:26:35 -0400 (0:00:00.239) 0:00:17.240 ****** ", "ok: [controller-0] => {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact monitor_name ansible_hostname] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:11", "Friday 21 September 2018 08:26:35 -0400 (0:00:00.081) 0:00:17.321 ****** ", "ok: [controller-0] => {\"ansible_facts\": {\"monitor_name\": \"controller-0\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact monitor_name ansible_fqdn] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:17", "Friday 21 September 2018 08:26:35 -0400 (0:00:00.081) 0:00:17.403 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact docker_exec_cmd] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:23", "Friday 21 September 2018 08:26:35 -0400 (0:00:00.072) 0:00:17.476 ****** ", "ok: [controller-0 -> 192.168.24.18] => {\"ansible_facts\": {\"docker_exec_cmd\": \"docker exec ceph-mon-controller-0\"}, \"changed\": false}", "", "TASK [ceph-defaults : is ceph running already?] 
********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:34", "Friday 21 September 2018 08:26:35 -0400 (0:00:00.142) 0:00:17.619 ****** ", "ok: [controller-0 -> 192.168.24.18] => {\"changed\": false, \"cmd\": [\"timeout\", \"5\", \"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"-s\", \"-f\", \"json\"], \"delta\": \"0:00:00.025305\", \"end\": \"2018-09-21 12:26:35.714561\", \"failed_when_result\": false, \"msg\": \"non-zero return code\", \"rc\": 1, \"start\": \"2018-09-21 12:26:35.689256\", \"stderr\": \"Error response from daemon: No such container: ceph-mon-controller-0\", \"stderr_lines\": [\"Error response from daemon: No such container: ceph-mon-controller-0\"], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-defaults : check if /var/lib/mistral/overcloud/ceph-ansible/fetch_dir directory exists] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:47", "Friday 21 September 2018 08:26:35 -0400 (0:00:00.261) 0:00:17.880 ****** ", "ok: [controller-0 -> localhost] => {\"changed\": false, \"stat\": {\"exists\": false}}", "", "TASK [ceph-defaults : set_fact ceph_current_status rc 1] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:57", "Friday 21 September 2018 08:26:35 -0400 (0:00:00.207) 0:00:18.088 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : create a local fetch directory if it does not exist] *****", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:64", "Friday 21 September 2018 08:26:36 -0400 (0:00:00.058) 0:00:18.146 ****** ", "ok: [controller-0 -> localhost] => {\"changed\": false, \"gid\": 42430, \"group\": \"mistral\", \"mode\": \"0755\", \"owner\": \"mistral\", \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir\", \"size\": 6, \"state\": \"directory\", \"uid\": 42430}", "", "TASK [ceph-defaults : set_fact ceph_current_status (convert to json)] **********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:74", "Friday 21 September 2018 08:26:36 -0400 (0:00:00.454) 0:00:18.600 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact fsid from ceph_current_status] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:81", "Friday 21 September 2018 08:26:36 -0400 (0:00:00.060) 0:00:18.661 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_release ceph_stable_release] ***************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:88", "Friday 21 September 2018 08:26:36 -0400 (0:00:00.051) 0:00:18.712 ****** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_release\": \"dummy\"}, \"changed\": false}", "", "TASK [ceph-defaults : generate cluster fsid] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:92", "Friday 21 September 2018 08:26:36 -0400 (0:00:00.083) 0:00:18.795 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : reuse cluster fsid when cluster is already running] ******", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:103", 
"Friday 21 September 2018 08:26:36 -0400 (0:00:00.049) 0:00:18.845 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : read cluster fsid if it already exists] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:112", "Friday 21 September 2018 08:26:36 -0400 (0:00:00.052) 0:00:18.898 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact fsid] *******************************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:124", "Friday 21 September 2018 08:26:36 -0400 (0:00:00.056) 0:00:18.954 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact mds_name ansible_hostname] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:130", "Friday 21 September 2018 08:26:36 -0400 (0:00:00.051) 0:00:19.005 ****** ", "ok: [controller-0] => {\"ansible_facts\": {\"mds_name\": \"controller-0\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact mds_name ansible_fqdn] **************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:136", "Friday 21 September 2018 08:26:36 -0400 (0:00:00.083) 0:00:19.089 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact rbd_client_directory_owner ceph] ****************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:142", "Friday 21 September 2018 08:26:37 -0400 (0:00:00.047) 0:00:19.137 ****** ", "ok: [controller-0] => {\"ansible_facts\": {\"rbd_client_directory_owner\": \"ceph\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact rbd_client_directory_group rbd_client_directory_group] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:149", "Friday 21 September 2018 08:26:37 -0400 (0:00:00.090) 0:00:19.227 ****** ", "ok: [controller-0] => {\"ansible_facts\": {\"rbd_client_directory_group\": \"ceph\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact rbd_client_directory_mode 0770] *****************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:156", "Friday 21 September 2018 08:26:37 -0400 (0:00:00.085) 0:00:19.313 ****** ", "ok: [controller-0] => {\"ansible_facts\": {\"rbd_client_directory_mode\": \"0770\"}, \"changed\": false}", "", "TASK [ceph-defaults : resolve device link(s)] **********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:163", "Friday 21 September 2018 08:26:37 -0400 (0:00:00.087) 0:00:19.401 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact build devices from resolved symlinks] ***********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:173", "Friday 21 September 2018 08:26:37 -0400 (0:00:00.055) 0:00:19.457 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact build final devices list] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:182", "Friday 21 September 2018 08:26:37 -0400 (0:00:00.058) 0:00:19.515 
****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for debian based system - non container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:190", "Friday 21 September 2018 08:26:37 -0400 (0:00:00.053) 0:00:19.568 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for red hat based system - non container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:197", "Friday 21 September 2018 08:26:37 -0400 (0:00:00.054) 0:00:19.623 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for debian based system - container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:204", "Friday 21 September 2018 08:26:37 -0400 (0:00:00.052) 0:00:19.676 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for red hat based system - container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:211", "Friday 21 September 2018 08:26:37 -0400 (0:00:00.054) 0:00:19.730 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for red hat] ***************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:218", "Friday 21 September 2018 08:26:37 -0400 (0:00:00.053) 0:00:19.784 ****** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_uid\": 167}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact rgw_hostname - fqdn] ****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:225", "Friday 21 September 2018 08:26:37 -0400 (0:00:00.184) 0:00:19.969 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact rgw_hostname - no fqdn] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:235", "Friday 21 September 2018 08:26:37 -0400 (0:00:00.051) 0:00:20.020 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_directories] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:2", "Friday 21 September 2018 08:26:37 -0400 (0:00:00.058) 0:00:20.079 ****** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_directories\": [\"/etc/ceph\", \"/var/lib/ceph/\", \"/var/lib/ceph/mon\", \"/var/lib/ceph/osd\", \"/var/lib/ceph/mds\", \"/var/lib/ceph/tmp\", \"/var/lib/ceph/radosgw\", \"/var/lib/ceph/bootstrap-rgw\", \"/var/lib/ceph/bootstrap-mds\", \"/var/lib/ceph/bootstrap-osd\", \"/var/lib/ceph/bootstrap-rbd\", \"/var/run/ceph\"]}, \"changed\": false}", "", "TASK [ceph-defaults : create ceph initial directories] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:18", "Friday 21 September 2018 08:26:38 -0400 (0:00:00.183) 0:00:20.262 ****** ", "changed: [controller-0] => (item=/etc/ceph) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", 
\"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [controller-0] => (item=/var/lib/ceph/) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [controller-0] => (item=/var/lib/ceph/mon) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/mon\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mon\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [controller-0] => (item=/var/lib/ceph/osd) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/osd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [controller-0] => (item=/var/lib/ceph/mds) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/mds\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [controller-0] => (item=/var/lib/ceph/tmp) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/tmp\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/tmp\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [controller-0] => (item=/var/lib/ceph/radosgw) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/radosgw\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/radosgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": 
\"directory\", \"uid\": 167}", "changed: [controller-0] => (item=/var/run/ceph) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/run/ceph\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/run/ceph\", \"secontext\": \"unconfined_u:object_r:var_run_t:s0\", \"size\": 40, \"state\": \"directory\", \"uid\": 167}", "", "TASK [ceph-docker-common : fail if systemd is not present] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml:2", "Friday 21 September 2018 08:26:40 -0400 (0:00:02.176) 0:00:22.438 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : make sure monitor_interface, monitor_address or monitor_address_block is defined] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:2", "Friday 21 September 2018 08:26:40 -0400 (0:00:00.051) 0:00:22.490 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : make sure radosgw_interface, radosgw_address or radosgw_address_block is defined] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:11", "Friday 21 September 2018 08:26:40 -0400 (0:00:00.059) 0:00:22.550 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : warning deprecation for fqdn configuration] *********", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:20", "Friday 21 September 2018 08:26:40 -0400 (0:00:00.049) 0:00:22.600 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : remove ceph udev rules] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml:2", "Friday 21 September 2018 08:26:40 -0400 (0:00:00.142) 0:00:22.743 ****** ", "ok: [controller-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) => {\"changed\": false, \"item\": \"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"path\": \"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"state\": \"absent\"}", "ok: [controller-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) => {\"changed\": false, \"item\": \"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"path\": \"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"state\": \"absent\"}", "", "TASK [ceph-docker-common : set_fact monitor_name ansible_hostname] *************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:14", "Friday 21 September 2018 08:26:41 -0400 (0:00:00.417) 0:00:23.161 ****** ", "ok: [controller-0] => {\"ansible_facts\": {\"monitor_name\": \"controller-0\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact monitor_name ansible_fqdn] *****************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:20", "Friday 21 September 2018 08:26:41 -0400 (0:00:00.091) 0:00:23.252 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : get docker version] *********************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:26", "Friday 21 September 2018 08:26:41 -0400 (0:00:00.050) 
0:00:23.303 ****** ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"--version\"], \"delta\": \"0:00:00.021767\", \"end\": \"2018-09-21 12:26:41.399107\", \"rc\": 0, \"start\": \"2018-09-21 12:26:41.377340\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Docker version 1.13.1, build 6e3bb8e/1.13.1\", \"stdout_lines\": [\"Docker version 1.13.1, build 6e3bb8e/1.13.1\"]}", "", "TASK [ceph-docker-common : set_fact ceph_docker_version ceph_docker_version.stdout.split] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:32", "Friday 21 September 2018 08:26:41 -0400 (0:00:00.269) 0:00:23.572 ****** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_docker_version\": \"1.13.1,\"}, \"changed\": false}", "", "TASK [ceph-docker-common : check if a cluster is already running] **************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:42", "Friday 21 September 2018 08:26:41 -0400 (0:00:00.102) 0:00:23.675 ****** ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mon-controller-0\"], \"delta\": \"0:00:00.024426\", \"end\": \"2018-09-21 12:26:41.776775\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-09-21 12:26:41.752349\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-docker-common : set_fact ceph_config_keys] **************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:2", "Friday 21 September 2018 08:26:41 -0400 (0:00:00.102) 0:00:23.939 ****** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_config_keys\": [\"/etc/ceph/ceph.client.admin.keyring\", \"/etc/ceph/monmap-ceph\", \"/etc/ceph/ceph.mon.keyring\", \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\"]}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact tmp_ceph_mgr_keys add mgr keys to config and keys paths] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:13", "Friday 21 September 2018 08:26:41 -0400 (0:00:00.153) 0:00:24.041 ****** ", "ok: [controller-0] => (item=controller-0) => {\"ansible_facts\": {\"tmp_ceph_mgr_keys\": \"/etc/ceph/ceph.mgr.controller-0.keyring\"}, \"changed\": false, \"item\": \"controller-0\"}", "", "TASK [ceph-docker-common : set_fact ceph_mgr_keys convert mgr keys to an array] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:20", "Friday 21 September 2018 08:26:42 -0400 (0:00:00.102) 0:00:24.195 ****** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_mgr_keys\": [\"/etc/ceph/ceph.mgr.controller-0.keyring\"]}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_config_keys merge mgr keys to config and keys paths] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:25", "Friday 21 September 2018 08:26:42 -0400 (0:00:00.102) 0:00:24.297 ****** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_config_keys\": [\"/etc/ceph/ceph.client.admin.keyring\", \"/etc/ceph/monmap-ceph\", \"/etc/ceph/ceph.mon.keyring\", \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"/etc/ceph/ceph.mgr.controller-0.keyring\"]}, \"changed\": false}", 
"", "TASK [ceph-docker-common : stat for ceph config and keys] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:30", "Friday 21 September 2018 08:26:42 -0400 (0:00:00.106) 0:00:24.404 ****** ", "ok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.client.admin.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/etc/ceph/monmap-ceph) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/monmap-ceph\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.mon.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.mgr.controller-0.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"stat\": {\"exists\": false}}", "", "TASK [ceph-docker-common : fail if we find existing cluster files] *************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml:5", "Friday 21 September 2018 08:26:43 -0400 (0:00:01.351) 0:00:25.755 ****** ", "skipping: [controller-0] => (item=[u'/etc/ceph/ceph.client.admin.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/ceph.client.admin.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//etc/ceph/ceph.client.admin.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.client.admin.keyring'}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.client.admin.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/ceph.client.admin.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": 
\"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//etc/ceph/ceph.client.admin.keyring\"}}, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/etc/ceph/monmap-ceph', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/monmap-ceph', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//etc/ceph/monmap-ceph', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/monmap-ceph'}]) => {\"changed\": false, \"item\": [\"/etc/ceph/monmap-ceph\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/monmap-ceph\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//etc/ceph/monmap-ceph\"}}, \"item\": \"/etc/ceph/monmap-ceph\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/etc/ceph/ceph.mon.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/ceph.mon.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//etc/ceph/ceph.mon.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.mon.keyring'}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.mon.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/ceph.mon.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//etc/ceph/ceph.mon.keyring\"}}, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"exists\": false}}], 
\"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-osd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-osd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-osd/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-osd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-osd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-rgw/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-rgw/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] 
=> (item=[u'/var/lib/ceph/bootstrap-mds/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-mds/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-mds/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-mds/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-mds/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-mds/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-rbd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-rbd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/etc/ceph/ceph.mgr.controller-0.keyring', {'_ansible_parsed': True, 
u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/ceph.mgr.controller-0.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//etc/ceph/ceph.mgr.controller-0.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.mgr.controller-0.keyring'}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.mgr.controller-0.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//etc/ceph/ceph.mgr.controller-0.keyring\"}}, \"item\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : check ntp installation on atomic] *******************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml:2", "Friday 21 September 2018 08:26:43 -0400 (0:00:00.322) 0:00:26.077 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : start the ntp service] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml:6", "Friday 21 September 2018 08:26:44 -0400 (0:00:00.046) 0:00:26.124 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : check ntp installation on redhat or suse] ***********", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:2", "Friday 21 September 2018 08:26:44 -0400 (0:00:00.044) 0:00:26.169 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : install ntp on redhat or suse] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:13", "Friday 21 September 2018 08:26:44 -0400 (0:00:00.053) 0:00:26.222 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : start the ntp service] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml:7", "Friday 21 September 2018 08:26:44 -0400 (0:00:00.056) 0:00:26.279 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : check ntp installation on debian] *******************", "task path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:2", "Friday 21 September 2018 08:26:44 -0400 (0:00:00.053) 0:00:26.332 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : install ntp on debian] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:11", "Friday 21 September 2018 08:26:44 -0400 (0:00:00.047) 0:00:26.379 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : start the ntp service] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml:7", "Friday 21 September 2018 08:26:44 -0400 (0:00:00.049) 0:00:26.429 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph mon container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:3", "Friday 21 September 2018 08:26:44 -0400 (0:00:00.047) 0:00:26.477 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph osd container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:12", "Friday 21 September 2018 08:26:44 -0400 (0:00:00.054) 0:00:26.532 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph mds container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:21", "Friday 21 September 2018 08:26:44 -0400 (0:00:00.056) 0:00:26.588 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph rgw container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:30", "Friday 21 September 2018 08:26:44 -0400 (0:00:00.050) 0:00:26.638 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph mgr container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:39", "Friday 21 September 2018 08:26:44 -0400 (0:00:00.051) 0:00:26.690 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph rbd mirror container] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:48", "Friday 21 September 2018 08:26:44 -0400 (0:00:00.060) 0:00:26.750 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph nfs container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:57", "Friday 21 September 2018 08:26:44 -0400 (0:00:00.052) 0:00:26.803 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph mon container image 
before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:67", "Friday 21 September 2018 08:26:44 -0400 (0:00:00.059) 0:00:26.863 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph osd container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:76", "Friday 21 September 2018 08:26:44 -0400 (0:00:00.059) 0:00:26.922 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph rgw container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:85", "Friday 21 September 2018 08:26:44 -0400 (0:00:00.055) 0:00:26.977 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph mds container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:94", "Friday 21 September 2018 08:26:44 -0400 (0:00:00.055) 0:00:27.033 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph mgr container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:103", "Friday 21 September 2018 08:26:44 -0400 (0:00:00.058) 0:00:27.091 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph rbd mirror container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:112", "Friday 21 September 2018 08:26:45 -0400 (0:00:00.056) 0:00:27.148 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph nfs container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:121", "Friday 21 September 2018 08:26:45 -0400 (0:00:00.059) 0:00:27.207 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mon_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:130", "Friday 21 September 2018 08:26:45 -0400 (0:00:00.051) 0:00:27.259 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_osd_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:137", "Friday 21 September 2018 08:26:45 -0400 (0:00:00.059) 0:00:27.318 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mds_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:144", "Friday 21 September 2018 08:26:45 -0400 (0:00:00.054) 0:00:27.373 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : 
set_fact ceph_rgw_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:151", "Friday 21 September 2018 08:26:45 -0400 (0:00:00.052) 0:00:27.425 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mgr_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:158", "Friday 21 September 2018 08:26:45 -0400 (0:00:00.056) 0:00:27.482 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:165", "Friday 21 September 2018 08:26:45 -0400 (0:00:00.056) 0:00:27.538 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_nfs_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:172", "Friday 21 September 2018 08:26:45 -0400 (0:00:00.050) 0:00:27.589 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : pulling 192.168.24.1:8787/rhceph:3-12 image] ********", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:179", "Friday 21 September 2018 08:26:45 -0400 (0:00:00.053) 0:00:27.643 ****** ", "ok: [controller-0] => {\"attempts\": 1, \"changed\": false, \"cmd\": [\"timeout\", \"300s\", \"docker\", \"pull\", \"192.168.24.1:8787/rhceph:3-12\"], \"delta\": \"0:00:13.974754\", \"end\": \"2018-09-21 12:26:59.708263\", \"rc\": 0, \"start\": \"2018-09-21 12:26:45.733509\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Trying to pull repository 192.168.24.1:8787/rhceph ... \\n3-12: Pulling from 192.168.24.1:8787/rhceph\\n428a9ca37f0e: Pulling fs layer\\n8115a58d83bd: Pulling fs layer\\n5e409f26eefe: Pulling fs layer\\n8115a58d83bd: Verifying Checksum\\n8115a58d83bd: Download complete\\n428a9ca37f0e: Verifying Checksum\\n428a9ca37f0e: Download complete\\n5e409f26eefe: Verifying Checksum\\n5e409f26eefe: Download complete\\n428a9ca37f0e: Pull complete\\n8115a58d83bd: Pull complete\\n5e409f26eefe: Pull complete\\nDigest: sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\\nStatus: Downloaded newer image for 192.168.24.1:8787/rhceph:3-12\", \"stdout_lines\": [\"Trying to pull repository 192.168.24.1:8787/rhceph ... 
\", \"3-12: Pulling from 192.168.24.1:8787/rhceph\", \"428a9ca37f0e: Pulling fs layer\", \"8115a58d83bd: Pulling fs layer\", \"5e409f26eefe: Pulling fs layer\", \"8115a58d83bd: Verifying Checksum\", \"8115a58d83bd: Download complete\", \"428a9ca37f0e: Verifying Checksum\", \"428a9ca37f0e: Download complete\", \"5e409f26eefe: Verifying Checksum\", \"5e409f26eefe: Download complete\", \"428a9ca37f0e: Pull complete\", \"8115a58d83bd: Pull complete\", \"5e409f26eefe: Pull complete\", \"Digest: sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\", \"Status: Downloaded newer image for 192.168.24.1:8787/rhceph:3-12\"]}", "", "TASK [ceph-docker-common : inspecting 192.168.24.1:8787/rhceph:3-12 image after pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:189", "Friday 21 September 2018 08:26:59 -0400 (0:00:14.231) 0:00:41.875 ****** ", "changed: [controller-0] => {\"changed\": true, \"cmd\": [\"docker\", \"inspect\", \"192.168.24.1:8787/rhceph:3-12\"], \"delta\": \"0:00:00.027714\", \"end\": \"2018-09-21 12:27:00.071933\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-09-21 12:27:00.044219\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"[\\n {\\n \\\"Id\\\": \\\"sha256:fa3b551f095247f90f7ace123cc14519b26abd4c042eb0270ab2452d9636a41f\\\",\\n \\\"RepoTags\\\": [\\n \\\"192.168.24.1:8787/rhceph:3-12\\\"\\n ],\\n \\\"RepoDigests\\\": [\\n \\\"192.168.24.1:8787/rhceph@sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\\\"\\n ],\\n \\\"Parent\\\": \\\"\\\",\\n \\\"Comment\\\": \\\"\\\",\\n \\\"Created\\\": \\\"2018-08-06T22:30:33.81313Z\\\",\\n \\\"Container\\\": \\\"\\\",\\n \\\"ContainerConfig\\\": {\\n \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": [\\n \\\"/bin/sh\\\",\\n \\\"-c\\\",\\n \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z5-b1e8f.repo'\\\"\\n ],\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"sha256:379a20daa6033d04119c4ca45fffe3e50f0cfd517d8712a222b53bea11ee4493\\\",\\n \\\"Volumes\\\": null,\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"stable-3.0\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\\n \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 
3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.expose-services\\\": \\\"\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"12\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\\n \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. Use it as a base to build your own images.\\\",\\n \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"DockerVersion\\\": \\\"1.12.6\\\",\\n \\\"Author\\\": \\\"\\\",\\n \\\"Config\\\": {\\n \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": null,\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"9e41cab2948f6e02bef2d4df2d2d21f082f2e9f5b5aadcb04d70201596959834\\\",\\n \\\"Volumes\\\": null,\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"stable-3.0\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\\n \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} 
-e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.expose-services\\\": \\\"\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"12\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\\n \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. Use it as a base to build your own images.\\\",\\n \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"Architecture\\\": \\\"amd64\\\",\\n \\\"Os\\\": \\\"linux\\\",\\n \\\"Size\\\": 592066185,\\n \\\"VirtualSize\\\": 592066185,\\n \\\"GraphDriver\\\": {\\n \\\"Name\\\": \\\"overlay2\\\",\\n \\\"Data\\\": {\\n \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/72c93a89fa782b05dc2585f4d22d029cb845c66e38924914071ba886d94bed8c/diff:/var/lib/docker/overlay2/1c6053643a9c6bc0506bbea8ee537d1f921ebdc802eafc5cf82c4566e0c5bbd4/diff\\\",\\n \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/647ac35ac16d9dfe175f07dd44786615796abec3ec2955371cd57b2bc31e071d/merged\\\",\\n \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/647ac35ac16d9dfe175f07dd44786615796abec3ec2955371cd57b2bc31e071d/diff\\\",\\n \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/647ac35ac16d9dfe175f07dd44786615796abec3ec2955371cd57b2bc31e071d/work\\\"\\n }\\n },\\n \\\"RootFS\\\": {\\n \\\"Type\\\": \\\"layers\\\",\\n \\\"Layers\\\": [\\n \\\"sha256:db195156f4cd9e83cf2a76f1319d5f839cf2552ea1d23c0317931786b1f594cf\\\",\\n \\\"sha256:6e8ca199394f13d2b08b652f8281d3c2f8ad22333737e8ad2ff554f881bcd8a1\\\",\\n \\\"sha256:984d7131485eaebe7b45bb3052fae34a956316f94faf95681b8480a904179cfa\\\"\\n ]\\n }\\n }\\n]\", \"stdout_lines\": [\"[\", \" {\", \" \\\"Id\\\": \\\"sha256:fa3b551f095247f90f7ace123cc14519b26abd4c042eb0270ab2452d9636a41f\\\",\", \" \\\"RepoTags\\\": [\", \" \\\"192.168.24.1:8787/rhceph:3-12\\\"\", \" ],\", \" \\\"RepoDigests\\\": [\", \" \\\"192.168.24.1:8787/rhceph@sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\\\"\", \" ],\", \" \\\"Parent\\\": \\\"\\\",\", \" \\\"Comment\\\": \\\"\\\",\", \" \\\"Created\\\": \\\"2018-08-06T22:30:33.81313Z\\\",\", \" \\\"Container\\\": \\\"\\\",\", \" \\\"ContainerConfig\\\": {\", \" \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", 
\" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": [\", \" \\\"/bin/sh\\\",\", \" \\\"-c\\\",\", \" \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z5-b1e8f.repo'\\\"\", \" ],\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"sha256:379a20daa6033d04119c4ca45fffe3e50f0cfd517d8712a222b53bea11ee4493\\\",\", \" \\\"Volumes\\\": null,\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"stable-3.0\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\", \" \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.expose-services\\\": \\\"\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"12\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\", \" \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\", \" \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"DockerVersion\\\": \\\"1.12.6\\\",\", \" \\\"Author\\\": \\\"\\\",\", \" \\\"Config\\\": {\", \" \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": null,\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"9e41cab2948f6e02bef2d4df2d2d21f082f2e9f5b5aadcb04d70201596959834\\\",\", \" \\\"Volumes\\\": null,\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"stable-3.0\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\", \" \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.expose-services\\\": \\\"\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"12\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\", \" \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\", \" \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"Architecture\\\": \\\"amd64\\\",\", \" \\\"Os\\\": \\\"linux\\\",\", \" \\\"Size\\\": 592066185,\", \" \\\"VirtualSize\\\": 592066185,\", \" \\\"GraphDriver\\\": {\", \" \\\"Name\\\": \\\"overlay2\\\",\", \" \\\"Data\\\": {\", \" \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/72c93a89fa782b05dc2585f4d22d029cb845c66e38924914071ba886d94bed8c/diff:/var/lib/docker/overlay2/1c6053643a9c6bc0506bbea8ee537d1f921ebdc802eafc5cf82c4566e0c5bbd4/diff\\\",\", \" \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/647ac35ac16d9dfe175f07dd44786615796abec3ec2955371cd57b2bc31e071d/merged\\\",\", \" \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/647ac35ac16d9dfe175f07dd44786615796abec3ec2955371cd57b2bc31e071d/diff\\\",\", \" \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/647ac35ac16d9dfe175f07dd44786615796abec3ec2955371cd57b2bc31e071d/work\\\"\", \" }\", \" },\", \" \\\"RootFS\\\": {\", \" \\\"Type\\\": \\\"layers\\\",\", \" \\\"Layers\\\": [\", \" \\\"sha256:db195156f4cd9e83cf2a76f1319d5f839cf2552ea1d23c0317931786b1f594cf\\\",\", \" \\\"sha256:6e8ca199394f13d2b08b652f8281d3c2f8ad22333737e8ad2ff554f881bcd8a1\\\",\", \" \\\"sha256:984d7131485eaebe7b45bb3052fae34a956316f94faf95681b8480a904179cfa\\\"\", \" ]\", \" }\", \" }\", \"]\"]}", "", "TASK [ceph-docker-common : set_fact image_repodigest_after_pulling] ************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:194", "Friday 21 September 2018 08:27:00 -0400 (0:00:00.367) 0:00:42.242 ****** ", "ok: [controller-0] => {\"ansible_facts\": {\"image_repodigest_after_pulling\": \"sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_mon_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:200", "Friday 21 September 2018 08:27:00 -0400 (0:00:00.075) 0:00:42.318 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_osd_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:211", "Friday 21 September 2018 08:27:00 -0400 (0:00:00.049) 0:00:42.367 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mds_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:222", "Friday 21 September 2018 08:27:00 -0400 (0:00:00.045) 0:00:42.412 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rgw_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:233", "Friday 21 September 2018 08:27:00 -0400 (0:00:00.043) 0:00:42.456 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mgr_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:244", "Friday 21 September 
2018 08:27:00 -0400 (0:00:00.045) 0:00:42.501 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_updated] *************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:255", "Friday 21 September 2018 08:27:00 -0400 (0:00:00.052) 0:00:42.554 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_nfs_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:266", "Friday 21 September 2018 08:27:00 -0400 (0:00:00.045) 0:00:42.599 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : export local ceph dev image] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:277", "Friday 21 September 2018 08:27:00 -0400 (0:00:00.124) 0:00:42.724 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : copy ceph dev image file] ***************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:285", "Friday 21 September 2018 08:27:00 -0400 (0:00:00.046) 0:00:42.770 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : load ceph dev image] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:292", "Friday 21 September 2018 08:27:00 -0400 (0:00:00.049) 0:00:42.819 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : remove tmp ceph dev image file] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:297", "Friday 21 September 2018 08:27:00 -0400 (0:00:00.048) 0:00:42.867 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : get ceph version] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:84", "Friday 21 September 2018 08:27:00 -0400 (0:00:00.047) 0:00:42.915 ****** ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"run\", \"--rm\", \"--entrypoint\", \"/usr/bin/ceph\", \"192.168.24.1:8787/rhceph:3-12\", \"--version\"], \"delta\": \"0:00:00.458745\", \"end\": \"2018-09-21 12:27:01.448737\", \"rc\": 0, \"start\": \"2018-09-21 12:27:00.989992\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"ceph version 12.2.4-42.el7cp (f73642baacccbf2a3c254d1fb5f0317b933b28cf) luminous (stable)\", \"stdout_lines\": [\"ceph version 12.2.4-42.el7cp (f73642baacccbf2a3c254d1fb5f0317b933b28cf) luminous (stable)\"]}", "", "TASK [ceph-docker-common : set_fact ceph_version ceph_version.stdout.split] ****", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:90", "Friday 21 September 2018 08:27:01 -0400 (0:00:00.699) 0:00:43.614 ****** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_version\": \"12.2.4-42.el7cp\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_release jewel] ************************", 
"task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:2", "Friday 21 September 2018 08:27:01 -0400 (0:00:00.077) 0:00:43.692 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_release kraken] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:8", "Friday 21 September 2018 08:27:01 -0400 (0:00:00.049) 0:00:43.741 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_release luminous] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:14", "Friday 21 September 2018 08:27:01 -0400 (0:00:00.049) 0:00:43.790 ****** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_release\": \"luminous\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_release mimic] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:20", "Friday 21 September 2018 08:27:01 -0400 (0:00:00.082) 0:00:43.873 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_release nautilus] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:26", "Friday 21 September 2018 08:27:01 -0400 (0:00:00.058) 0:00:43.931 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : create bootstrap directories] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml:2", "Friday 21 September 2018 08:27:01 -0400 (0:00:00.050) 0:00:43.981 ****** ", "changed: [controller-0] => (item=/etc/ceph) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": 
\"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "", "TASK [ceph-config : create ceph conf directory] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:4", "Friday 21 September 2018 08:27:02 -0400 (0:00:00.875) 0:00:44.857 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : generate ceph configuration file: ceph.conf] ***************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:12", "Friday 21 September 2018 08:27:02 -0400 (0:00:00.052) 0:00:44.909 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : create a local fetch directory if it does not exist] *******", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:38", "Friday 21 September 2018 08:27:02 -0400 (0:00:00.053) 0:00:44.963 ****** ", "ok: [controller-0 -> localhost] => {\"changed\": false, \"gid\": 42430, \"group\": \"mistral\", \"mode\": \"0755\", \"owner\": \"mistral\", \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir\", \"size\": 6, \"state\": \"directory\", \"uid\": 42430}", "", "TASK [ceph-config : generate cluster uuid] *************************************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:54", "Friday 21 September 2018 08:27:03 -0400 (0:00:00.198) 0:00:45.161 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : read cluster uuid if it already exists] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:64", "Friday 21 September 2018 08:27:03 -0400 (0:00:00.055) 0:00:45.216 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : ensure /etc/ceph exists] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:76", "Friday 21 September 2018 08:27:03 -0400 (0:00:00.047) 0:00:45.264 ****** ", "changed: [controller-0] => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "", "TASK [ceph-config : generate ceph.conf configuration file] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:84", "Friday 21 September 2018 08:27:03 -0400 (0:00:00.246) 0:00:45.510 ****** ", "NOTIFIED HANDLER ceph-defaults : set _mon_handler_called before restart for controller-0", "NOTIFIED HANDLER ceph-defaults : copy mon restart script for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mon daemon(s) - non container for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mon daemon(s) - container for controller-0", "NOTIFIED HANDLER ceph-defaults : set _mon_handler_called after restart for controller-0", "NOTIFIED HANDLER ceph-defaults : set _osd_handler_called before restart for controller-0", "NOTIFIED HANDLER ceph-defaults : copy osd restart script for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph osds daemon(s) - non container for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph osds daemon(s) - container for controller-0", 
"NOTIFIED HANDLER ceph-defaults : set _osd_handler_called after restart for controller-0", "NOTIFIED HANDLER ceph-defaults : set _mds_handler_called before restart for controller-0", "NOTIFIED HANDLER ceph-defaults : copy mds restart script for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mds daemon(s) - non container for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mds daemon(s) - container for controller-0", "NOTIFIED HANDLER ceph-defaults : set _mds_handler_called after restart for controller-0", "NOTIFIED HANDLER ceph-defaults : set _rgw_handler_called before restart for controller-0", "NOTIFIED HANDLER ceph-defaults : copy rgw restart script for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph rgw daemon(s) - non container for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph rgw daemon(s) - container for controller-0", "NOTIFIED HANDLER ceph-defaults : set _rgw_handler_called after restart for controller-0", "NOTIFIED HANDLER ceph-defaults : set _mgr_handler_called before restart for controller-0", "NOTIFIED HANDLER ceph-defaults : copy mgr restart script for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - non container for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - container for controller-0", "NOTIFIED HANDLER ceph-defaults : set _mgr_handler_called after restart for controller-0", "NOTIFIED HANDLER ceph-defaults : set _rbdmirror_handler_called before restart for controller-0", "NOTIFIED HANDLER ceph-defaults : copy rbd mirror restart script for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph rbd mirror daemon(s) - non container for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph rbd mirror daemon(s) - container for controller-0", "NOTIFIED HANDLER ceph-defaults : set _rbdmirror_handler_called after restart for controller-0", "changed: [controller-0] => {\"changed\": true, \"checksum\": \"57e5c5d755a630f2e4e9c6766a186478cc210a6a\", \"dest\": \"/etc/ceph/ceph.conf\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"3d1c4a58fc488cca7c5fd19c6454272e\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 1103, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1537532823.45-141081049048416/source\", \"state\": \"file\", \"uid\": 0}", "", "TASK [ceph-config : set fsid fact when generate_fsid = true] *******************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:102", "Friday 21 September 2018 08:27:05 -0400 (0:00:02.432) 0:00:47.942 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : set_fact docker_exec_cmd] *************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/main.yml:2", "Friday 21 September 2018 08:27:05 -0400 (0:00:00.052) 0:00:47.994 ****** ", "ok: [controller-0] => {\"ansible_facts\": {\"docker_exec_cmd\": \"docker exec ceph-mon-controller-0\"}, \"changed\": false}", "", "TASK [ceph-mon : make sure monitor_interface or monitor_address or monitor_address_block is configured] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/check_mandatory_vars.yml:2", "Friday 21 September 2018 08:27:06 -0400 (0:00:00.196) 0:00:48.190 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : generate monitor initial keyring] *****************************", "task path: 
/usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:2", "Friday 21 September 2018 08:27:06 -0400 (0:00:00.063) 0:00:48.253 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : read monitor initial keyring if it already exists] ************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:11", "Friday 21 September 2018 08:27:06 -0400 (0:00:00.055) 0:00:48.308 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : create monitor initial keyring] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:22", "Friday 21 September 2018 08:27:06 -0400 (0:00:00.051) 0:00:48.360 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : set initial monitor key permissions] **************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:34", "Friday 21 September 2018 08:27:06 -0400 (0:00:00.049) 0:00:48.410 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : create (and fix ownership of) monitor directory] **************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:42", "Friday 21 September 2018 08:27:06 -0400 (0:00:00.049) 0:00:48.459 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : set_fact client_admin_ceph_authtool_cap >= ceph_release_num.luminous] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:51", "Friday 21 September 2018 08:27:06 -0400 (0:00:00.046) 0:00:48.505 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : set_fact client_admin_ceph_authtool_cap < ceph_release_num.luminous] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:63", "Friday 21 September 2018 08:27:06 -0400 (0:00:00.131) 0:00:48.637 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : create custom admin keyring] **********************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:74", "Friday 21 September 2018 08:27:06 -0400 (0:00:00.052) 0:00:48.689 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : set ownership of admin keyring] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:88", "Friday 21 September 2018 08:27:06 -0400 (0:00:00.049) 0:00:48.738 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : import admin keyring into mon keyring] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:99", "Friday 21 September 2018 08:27:06 -0400 (0:00:00.051) 0:00:48.790 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : ceph monitor mkfs with keyring] *******************************", "task path: 
/usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:106", "Friday 21 September 2018 08:27:06 -0400 (0:00:00.051) 0:00:48.841 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : ceph monitor mkfs without keyring] ****************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:113", "Friday 21 September 2018 08:27:06 -0400 (0:00:00.050) 0:00:48.891 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : ensure systemd service override directory exists] *************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml:2", "Friday 21 September 2018 08:27:06 -0400 (0:00:00.061) 0:00:48.953 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : add ceph-mon systemd service overrides] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml:10", "Friday 21 September 2018 08:27:06 -0400 (0:00:00.051) 0:00:49.005 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : start the monitor service] ************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml:20", "Friday 21 September 2018 08:27:06 -0400 (0:00:00.052) 0:00:49.057 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : enable the ceph-mon.target service] ***************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml:29", "Friday 21 September 2018 08:27:06 -0400 (0:00:00.050) 0:00:49.108 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : include ceph_keys.yml] ****************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/main.yml:19", "Friday 21 September 2018 08:27:07 -0400 (0:00:00.051) 0:00:49.159 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : collect all the pools] ****************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/secure_cluster.yml:2", "Friday 21 September 2018 08:27:07 -0400 (0:00:00.053) 0:00:49.213 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : secure the cluster] *******************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/secure_cluster.yml:7", "Friday 21 September 2018 08:27:07 -0400 (0:00:00.054) 0:00:49.268 ****** ", "", "TASK [ceph-mon : set_fact ceph_config_keys] ************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:2", "Friday 21 September 2018 08:27:07 -0400 (0:00:00.058) 0:00:49.326 ****** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_config_keys\": [\"/etc/ceph/ceph.client.admin.keyring\", \"/etc/ceph/ceph.mon.keyring\", \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"/var/lib/ceph/bootstrap-mds/ceph.keyring\"]}, \"changed\": false}", "", "TASK [ceph-mon : register rbd bootstrap key] 
***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:11", "Friday 21 September 2018 08:27:07 -0400 (0:00:00.091) 0:00:49.418 ****** ", "ok: [controller-0] => {\"ansible_facts\": {\"bootstrap_rbd_keyring\": [\"/var/lib/ceph/bootstrap-rbd/ceph.keyring\"]}, \"changed\": false}", "", "TASK [ceph-mon : merge rbd bootstrap key to config and keys paths] *************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:17", "Friday 21 September 2018 08:27:07 -0400 (0:00:00.098) 0:00:49.516 ****** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_config_keys\": [\"/etc/ceph/ceph.client.admin.keyring\", \"/etc/ceph/ceph.mon.keyring\", \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\"]}, \"changed\": false}", "", "TASK [ceph-mon : stat for ceph config and keys] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:22", "Friday 21 September 2018 08:27:07 -0400 (0:00:00.097) 0:00:49.614 ****** ", "ok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.client.admin.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.mon.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"exists\": false}}", "", "TASK [ceph-mon : try to copy ceph keys] ****************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:33", "Friday 21 September 2018 08:27:08 -0400 (0:00:00.979) 0:00:50.593 ****** ", "skipping: [controller-0] => (item=[u'/etc/ceph/ceph.client.admin.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/ceph.client.admin.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//etc/ceph/ceph.client.admin.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': 
u'/etc/ceph/ceph.client.admin.keyring'}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.client.admin.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/ceph.client.admin.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//etc/ceph/ceph.client.admin.keyring\"}}, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/etc/ceph/ceph.mon.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/ceph.mon.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//etc/ceph/ceph.mon.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.mon.keyring'}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.mon.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/ceph.mon.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//etc/ceph/ceph.mon.keyring\"}}, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-osd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-osd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-osd/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-osd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", 
\"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-osd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-rgw/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-rgw/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-mds/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-mds/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-mds/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-mds/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-mds/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, 
\"_ansible_item_label\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-mds/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-rbd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-rbd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : populate kv_store with default ceph.conf] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml:2", "Friday 21 September 2018 08:27:08 -0400 (0:00:00.174) 0:00:50.767 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : populate kv_store with custom ceph.conf] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml:18", "Friday 21 September 2018 08:27:08 -0400 (0:00:00.070) 0:00:50.838 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : delete populate-kv-store docker] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml:36", "Friday 21 September 2018 08:27:08 -0400 (0:00:00.083) 0:00:50.921 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : generate systemd 
unit file] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml:43", "Friday 21 September 2018 08:27:08 -0400 (0:00:00.054) 0:00:50.976 ****** ", "changed: [controller-0] => {\"changed\": true, \"checksum\": \"b0ff5a5b5db5ad0a93c7412c072d8f645da2f45c\", \"dest\": \"/etc/systemd/system/ceph-mon@.service\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"f0817dd50b4c8f886584edd030bb3021\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:systemd_unit_file_t:s0\", \"size\": 887, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1537532828.92-103322074387440/source\", \"state\": \"file\", \"uid\": 0}", "", "TASK [ceph-mon : systemd start mon container] **********************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml:54", "Friday 21 September 2018 08:27:09 -0400 (0:00:00.953) 0:00:51.930 ****** ", "changed: [controller-0] => {\"changed\": true, \"enabled\": true, \"name\": \"ceph-mon@controller-0\", \"state\": \"started\", \"status\": {\"ActiveEnterTimestampMonotonic\": \"0\", \"ActiveExitTimestampMonotonic\": \"0\", \"ActiveState\": \"inactive\", \"After\": \"systemd-journald.socket basic.target system-ceph\\\\x5cx2dmon.slice docker.service\", \"AllowIsolate\": \"no\", \"AmbientCapabilities\": \"0\", \"AssertResult\": \"no\", \"AssertTimestampMonotonic\": \"0\", \"Before\": \"shutdown.target\", \"BlockIOAccounting\": \"no\", \"BlockIOWeight\": \"18446744073709551615\", \"CPUAccounting\": \"no\", \"CPUQuotaPerSecUSec\": \"infinity\", \"CPUSchedulingPolicy\": \"0\", \"CPUSchedulingPriority\": \"0\", \"CPUSchedulingResetOnFork\": \"no\", \"CPUShares\": \"18446744073709551615\", \"CanIsolate\": \"no\", \"CanReload\": \"no\", \"CanStart\": \"yes\", \"CanStop\": \"yes\", \"CapabilityBoundingSet\": \"18446744073709551615\", \"ConditionResult\": \"no\", \"ConditionTimestampMonotonic\": \"0\", \"Conflicts\": \"shutdown.target\", \"ControlPID\": \"0\", \"DefaultDependencies\": \"yes\", \"Delegate\": \"no\", \"Description\": \"Ceph Monitor\", \"DevicePolicy\": \"auto\", \"EnvironmentFile\": \"/etc/environment (ignore_errors=yes)\", \"ExecMainCode\": \"0\", \"ExecMainExitTimestampMonotonic\": \"0\", \"ExecMainPID\": \"0\", \"ExecMainStartTimestampMonotonic\": \"0\", \"ExecMainStatus\": \"0\", \"ExecStart\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker run --rm --name ceph-mon-%i --memory=3g --cpu-quota=100000 -v /var/lib/ceph:/var/lib/ceph:z -v /etc/ceph:/etc/ceph:z -v /var/run/ceph:/var/run/ceph:z -v /etc/localtime:/etc/localtime:ro --net=host -e IP_VERSION=4 -e MON_IP=172.17.3.16 -e CLUSTER=ceph -e FSID=8fedf068-bd95-11e8-ba69-5254006eda59 -e CEPH_PUBLIC_NETWORK=172.17.3.0/24 -e CEPH_DAEMON=MON 192.168.24.1:8787/rhceph:3-12 ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStartPre\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker rm ceph-mon-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStop\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker stop ceph-mon-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStopPost\": \"{ path=/bin/rm ; argv[]=/bin/rm -f /var/run/ceph/ceph-mon.controller-0.asok ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"FailureAction\": \"none\", \"FileDescriptorStoreMax\": \"0\", \"FragmentPath\": 
\"/etc/systemd/system/ceph-mon@.service\", \"GuessMainPID\": \"yes\", \"IOScheduling\": \"0\", \"Id\": \"ceph-mon@controller-0.service\", \"IgnoreOnIsolate\": \"no\", \"IgnoreOnSnapshot\": \"no\", \"IgnoreSIGPIPE\": \"yes\", \"InactiveEnterTimestampMonotonic\": \"0\", \"InactiveExitTimestampMonotonic\": \"0\", \"JobTimeoutAction\": \"none\", \"JobTimeoutUSec\": \"0\", \"KillMode\": \"control-group\", \"KillSignal\": \"15\", \"LimitAS\": \"18446744073709551615\", \"LimitCORE\": \"18446744073709551615\", \"LimitCPU\": \"18446744073709551615\", \"LimitDATA\": \"18446744073709551615\", \"LimitFSIZE\": \"18446744073709551615\", \"LimitLOCKS\": \"18446744073709551615\", \"LimitMEMLOCK\": \"65536\", \"LimitMSGQUEUE\": \"819200\", \"LimitNICE\": \"0\", \"LimitNOFILE\": \"4096\", \"LimitNPROC\": \"127798\", \"LimitRSS\": \"18446744073709551615\", \"LimitRTPRIO\": \"0\", \"LimitRTTIME\": \"18446744073709551615\", \"LimitSIGPENDING\": \"127798\", \"LimitSTACK\": \"18446744073709551615\", \"LoadState\": \"loaded\", \"MainPID\": \"0\", \"MemoryAccounting\": \"no\", \"MemoryCurrent\": \"18446744073709551615\", \"MemoryLimit\": \"18446744073709551615\", \"MountFlags\": \"0\", \"Names\": \"ceph-mon@controller-0.service\", \"NeedDaemonReload\": \"no\", \"Nice\": \"0\", \"NoNewPrivileges\": \"no\", \"NonBlocking\": \"no\", \"NotifyAccess\": \"none\", \"OOMScoreAdjust\": \"0\", \"OnFailureJobMode\": \"replace\", \"PermissionsStartOnly\": \"no\", \"PrivateDevices\": \"no\", \"PrivateNetwork\": \"no\", \"PrivateTmp\": \"no\", \"ProtectHome\": \"no\", \"ProtectSystem\": \"no\", \"RefuseManualStart\": \"no\", \"RefuseManualStop\": \"no\", \"RemainAfterExit\": \"no\", \"Requires\": \"basic.target\", \"Restart\": \"always\", \"RestartUSec\": \"10s\", \"Result\": \"success\", \"RootDirectoryStartOnly\": \"no\", \"RuntimeDirectoryMode\": \"0755\", \"SameProcessGroup\": \"no\", \"SecureBits\": \"0\", \"SendSIGHUP\": \"no\", \"SendSIGKILL\": \"yes\", \"Slice\": \"system-ceph\\\\x5cx2dmon.slice\", \"StandardError\": \"inherit\", \"StandardInput\": \"null\", \"StandardOutput\": \"journal\", \"StartLimitAction\": \"none\", \"StartLimitBurst\": \"5\", \"StartLimitInterval\": \"10000000\", \"StartupBlockIOWeight\": \"18446744073709551615\", \"StartupCPUShares\": \"18446744073709551615\", \"StatusErrno\": \"0\", \"StopWhenUnneeded\": \"no\", \"SubState\": \"dead\", \"SyslogLevelPrefix\": \"yes\", \"SyslogPriority\": \"30\", \"SystemCallErrorNumber\": \"0\", \"TTYReset\": \"no\", \"TTYVHangup\": \"no\", \"TTYVTDisallocate\": \"no\", \"TasksAccounting\": \"no\", \"TasksCurrent\": \"18446744073709551615\", \"TasksMax\": \"18446744073709551615\", \"TimeoutStartUSec\": \"2min\", \"TimeoutStopUSec\": \"15s\", \"TimerSlackNSec\": \"50000\", \"Transient\": \"no\", \"Type\": \"simple\", \"UMask\": \"0022\", \"UnitFilePreset\": \"disabled\", \"UnitFileState\": \"disabled\", \"Wants\": \"system-ceph\\\\x5cx2dmon.slice\", \"WatchdogTimestampMonotonic\": \"0\", \"WatchdogUSec\": \"0\"}}", "", "TASK [ceph-mon : configure ceph profile.d aliases] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/configure_ceph_command_aliases.yml:2", "Friday 21 September 2018 08:27:10 -0400 (0:00:00.706) 0:00:52.636 ****** ", "changed: [controller-0] => {\"changed\": true, \"checksum\": \"78965c7dfcde4827c1cb8645bc7a444472e87718\", \"dest\": \"/etc/profile.d/ceph-aliases.sh\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"66a9bfe5c26a22ade3c67cc7c7a58d2c\", \"mode\": \"0755\", \"owner\": \"root\", 
\"secontext\": \"system_u:object_r:bin_t:s0\", \"size\": 375, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1537532830.57-111847234573165/source\", \"state\": \"file\", \"uid\": 0}", "", "TASK [ceph-mon : wait for monitor socket to exist] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:12", "Friday 21 September 2018 08:27:11 -0400 (0:00:00.535) 0:00:53.172 ****** ", "FAILED - RETRYING: wait for monitor socket to exist (5 retries left).", "changed: [controller-0] => {\"attempts\": 2, \"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"sh\", \"-c\", \"stat /var/run/ceph/ceph-mon.controller-0.asok || stat /var/run/ceph/ceph-mon.controller-0.localdomain.asok\"], \"delta\": \"0:00:00.083032\", \"end\": \"2018-09-21 12:27:26.587724\", \"rc\": 0, \"start\": \"2018-09-21 12:27:26.504692\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \" File: '/var/run/ceph/ceph-mon.controller-0.asok'\\n Size: 0 \\tBlocks: 0 IO Block: 4096 socket\\nDevice: 14h/20d\\tInode: 382696 Links: 1\\nAccess: (0755/srwxr-xr-x) Uid: ( 167/ ceph) Gid: ( 167/ ceph)\\nAccess: 2018-09-21 12:27:11.506181792 +0000\\nModify: 2018-09-21 12:27:11.506181792 +0000\\nChange: 2018-09-21 12:27:11.506181792 +0000\\n Birth: -\", \"stdout_lines\": [\" File: '/var/run/ceph/ceph-mon.controller-0.asok'\", \" Size: 0 \\tBlocks: 0 IO Block: 4096 socket\", \"Device: 14h/20d\\tInode: 382696 Links: 1\", \"Access: (0755/srwxr-xr-x) Uid: ( 167/ ceph) Gid: ( 167/ ceph)\", \"Access: 2018-09-21 12:27:11.506181792 +0000\", \"Modify: 2018-09-21 12:27:11.506181792 +0000\", \"Change: 2018-09-21 12:27:11.506181792 +0000\", \" Birth: -\"]}", "", "TASK [ceph-mon : ipv4 - force peer addition as potential bootstrap peer for cluster bringup - monitor_interface] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:19", "Friday 21 September 2018 08:27:26 -0400 (0:00:15.581) 0:01:08.753 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : ipv4 - force peer addition as potential bootstrap peer for cluster bringup - monitor_address] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:29", "Friday 21 September 2018 08:27:26 -0400 (0:00:00.099) 0:01:08.852 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : ipv4 - force peer addition as potential bootstrap peer for cluster bringup - monitor_address_block] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:39", "Friday 21 September 2018 08:27:26 -0400 (0:00:00.095) 0:01:08.948 ****** ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--admin-daemon\", \"/var/run/ceph/ceph-mon.controller-0.asok\", \"add_bootstrap_peer_hint\", \"172.17.3.16\"], \"delta\": \"0:00:00.173006\", \"end\": \"2018-09-21 12:27:27.397223\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-09-21 12:27:27.224217\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"mon already active; ignoring bootstrap hint\", \"stdout_lines\": [\"mon already active; ignoring bootstrap hint\"]}", "", "TASK [ceph-mon : ipv6 - force peer addition as potential bootstrap peer for cluster bringup - monitor_interface] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:49", "Friday 21 September 2018 08:27:27 -0400 (0:00:00.609) 
0:01:09.558 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : ipv6 - force peer addition as potential bootstrap peer for cluster bringup - monitor_address] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:59", "Friday 21 September 2018 08:27:27 -0400 (0:00:00.055) 0:01:09.613 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : ipv6 - force peer addition as potential bootstrap peer for cluster bringup - monitor_address_block] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:69", "Friday 21 September 2018 08:27:27 -0400 (0:00:00.058) 0:01:09.672 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : push ceph files to the ansible server] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/fetch_configs.yml:2", "Friday 21 September 2018 08:27:27 -0400 (0:00:00.055) 0:01:09.727 ****** ", "changed: [controller-0] => (item=[u'/etc/ceph/ceph.client.admin.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/ceph.client.admin.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//etc/ceph/ceph.client.admin.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.client.admin.keyring'}]) => {\"changed\": true, \"checksum\": \"9e373fe5b7239c71b2c20b1e9dda563cef508b10\", \"dest\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59/etc/ceph/ceph.client.admin.keyring\", \"item\": [\"/etc/ceph/ceph.client.admin.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/ceph.client.admin.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//etc/ceph/ceph.client.admin.keyring\"}}, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"stat\": {\"exists\": false}}], \"md5sum\": \"9d6426f968161a2e99954092fe0fea79\", \"remote_checksum\": \"9e373fe5b7239c71b2c20b1e9dda563cef508b10\", \"remote_md5sum\": null}", "changed: [controller-0] => (item=[u'/etc/ceph/ceph.mon.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/ceph.mon.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', 
u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//etc/ceph/ceph.mon.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.mon.keyring'}]) => {\"changed\": true, \"checksum\": \"71985a44f030d17c775335c42962737bc688e6a0\", \"dest\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59/etc/ceph/ceph.mon.keyring\", \"item\": [\"/etc/ceph/ceph.mon.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/ceph.mon.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//etc/ceph/ceph.mon.keyring\"}}, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"exists\": false}}], \"md5sum\": \"a5f024b9cde0ed26e54e699e93f2bf63\", \"remote_checksum\": \"71985a44f030d17c775335c42962737bc688e6a0\", \"remote_md5sum\": null}", "changed: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-osd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-osd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-osd/ceph.keyring'}]) => {\"changed\": true, \"checksum\": \"64333848b27ab8d9f98e1749b646f53ce8491e92\", \"dest\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"item\": [\"/var/lib/ceph/bootstrap-osd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-osd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"md5sum\": \"d0dcfd5572ae39eb0ce251488182ec1b\", \"remote_checksum\": \"64333848b27ab8d9f98e1749b646f53ce8491e92\", \"remote_md5sum\": null}", "changed: [controller-0] => 
(item=[u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-rgw/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring'}]) => {\"changed\": true, \"checksum\": \"ad253570a945c870140d7f94eccef76f44861e59\", \"dest\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"item\": [\"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-rgw/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"exists\": false}}], \"md5sum\": \"8c235791382cb359fb6d7d3577b15f8c\", \"remote_checksum\": \"ad253570a945c870140d7f94eccef76f44861e59\", \"remote_md5sum\": null}", "changed: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-mds/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-mds/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-mds/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-mds/ceph.keyring'}]) => {\"changed\": true, \"checksum\": \"40b83591ce4be64f55769e0a0d8aca12db95c281\", \"dest\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"item\": [\"/var/lib/ceph/bootstrap-mds/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": 
false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-mds/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"exists\": false}}], \"md5sum\": \"316046afda2f2cbb417dd97b099d7be1\", \"remote_checksum\": \"40b83591ce4be64f55769e0a0d8aca12db95c281\", \"remote_md5sum\": null}", "changed: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-rbd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring'}]) => {\"changed\": true, \"checksum\": \"cf7920e30e8d8566b8b9f935a5f741908c23465e\", \"dest\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"item\": [\"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-rbd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"md5sum\": \"babd454ca6e67b272f3dbad355f1a18d\", \"remote_checksum\": \"cf7920e30e8d8566b8b9f935a5f741908c23465e\", \"remote_md5sum\": null}", "", "TASK [ceph-mon : create ceph rest api keyring when mon is containerized] *******", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:84", "Friday 21 September 2018 08:27:28 -0400 (0:00:01.366) 0:01:11.094 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : create ceph mgr keyring(s) when mon is containerized] *********", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:97", "Friday 21 September 2018 08:27:29 -0400 (0:00:00.054) 0:01:11.149 ****** ", "ok: [controller-0] => (item=controller-0) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"auth\", \"get-or-create\", \"mgr.controller-0\", \"mon\", \"allow profile mgr\", \"osd\", \"allow *\", \"mds\", \"allow *\", \"-o\", \"/etc/ceph/ceph.mgr.controller-0.keyring\"], \"delta\": \"0:00:00.400066\", \"end\": \"2018-09-21 12:27:29.853798\", \"item\": \"controller-0\", \"rc\": 0, \"start\": 
\"2018-09-21 12:27:29.453732\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-mon : stat for ceph mgr key(s)] *************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:109", "Friday 21 September 2018 08:27:29 -0400 (0:00:00.865) 0:01:12.014 ****** ", "ok: [controller-0] => (item=controller-0) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"controller-0\", \"stat\": {\"atime\": 1537532849.7104473, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"us-ascii\", \"checksum\": \"f02fcb991c5a53a3bf474c15b6a514c8356b9c69\", \"ctime\": 1537532849.832449, \"dev\": 64514, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 0, \"gr_name\": \"root\", \"inode\": 50508107, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"text/plain\", \"mode\": \"0644\", \"mtime\": 1537532849.832449, \"nlink\": 1, \"path\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"pw_name\": \"root\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 67, \"uid\": 0, \"version\": \"1817761372\", \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}", "", "TASK [ceph-mon : fetch ceph mgr key(s)] ****************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:121", "Friday 21 September 2018 08:27:30 -0400 (0:00:00.394) 0:01:12.409 ****** ", "changed: [controller-0] => (item={'_ansible_parsed': True, u'stat': {u'charset': u'us-ascii', u'uid': 0, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1537532849.832449, u'block_size': 4096, u'inode': 50508107, u'isgid': False, u'size': 67, u'executable': False, u'roth': True, u'isuid': False, u'readable': True, u'version': u'1817761372', u'pw_name': u'root', u'gid': 0, u'ischr': False, u'wusr': True, u'writeable': True, u'isdir': False, u'blocks': 8, u'xoth': False, u'rusr': True, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'root', u'path': u'/etc/ceph/ceph.mgr.controller-0.keyring', u'xusr': False, u'atime': 1537532849.7104473, u'mimetype': u'text/plain', u'ctime': 1537532849.832449, u'isblk': False, u'checksum': u'f02fcb991c5a53a3bf474c15b6a514c8356b9c69', u'dev': 64514, u'wgrp': False, u'isfifo': False, u'mode': u'0644', u'xgrp': False, u'islnk': False, u'attributes': []}, '_ansible_item_result': True, '_ansible_no_log': False, 'failed': False, u'changed': False, 'item': u'controller-0', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/etc/ceph/ceph.mgr.controller-0.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'controller-0'}) => {\"changed\": true, \"checksum\": \"f02fcb991c5a53a3bf474c15b6a514c8356b9c69\", \"dest\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59/etc/ceph/ceph.mgr.controller-0.keyring\", \"item\": {\"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, 
\"get_mime\": true, \"path\": \"/etc/ceph/ceph.mgr.controller-0.keyring\"}}, \"item\": \"controller-0\", \"stat\": {\"atime\": 1537532849.7104473, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"us-ascii\", \"checksum\": \"f02fcb991c5a53a3bf474c15b6a514c8356b9c69\", \"ctime\": 1537532849.832449, \"dev\": 64514, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 0, \"gr_name\": \"root\", \"inode\": 50508107, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"text/plain\", \"mode\": \"0644\", \"mtime\": 1537532849.832449, \"nlink\": 1, \"path\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"pw_name\": \"root\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 67, \"uid\": 0, \"version\": \"1817761372\", \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}, \"md5sum\": \"d7ba913d6ab2c770a0269d55efc01b88\", \"remote_checksum\": \"f02fcb991c5a53a3bf474c15b6a514c8356b9c69\", \"remote_md5sum\": null}", "", "TASK [ceph-mon : configure crush hierarchy] ************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:2", "Friday 21 September 2018 08:27:30 -0400 (0:00:00.410) 0:01:12.819 ****** ", "skipping: [controller-0] => (item=ceph-0) => {\"changed\": false, \"item\": \"ceph-0\", \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : create configured crush rules] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:14", "Friday 21 September 2018 08:27:30 -0400 (0:00:00.060) 0:01:12.880 ****** ", "skipping: [controller-0] => (item={u'default': False, u'root': u'HDD', u'type': u'host', u'name': u'HDD'}) => {\"changed\": false, \"item\": {\"default\": false, \"name\": \"HDD\", \"root\": \"HDD\", \"type\": \"host\"}, \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item={u'default': False, u'root': u'SSD', u'type': u'host', u'name': u'SSD'}) => {\"changed\": false, \"item\": {\"default\": false, \"name\": \"SSD\", \"root\": \"SSD\", \"type\": \"host\"}, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : get id for new default crush rule] ****************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:21", "Friday 21 September 2018 08:27:30 -0400 (0:00:00.064) 0:01:12.945 ****** ", "skipping: [controller-0] => (item={u'default': False, u'root': u'HDD', u'type': u'host', u'name': u'HDD'}) => {\"changed\": false, \"item\": {\"default\": false, \"name\": \"HDD\", \"root\": \"HDD\", \"type\": \"host\"}, \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item={u'default': False, u'root': u'SSD', u'type': u'host', u'name': u'SSD'}) => {\"changed\": false, \"item\": {\"default\": false, \"name\": \"SSD\", \"root\": \"SSD\", \"type\": \"host\"}, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : set_fact info_ceph_default_crush_rule_yaml] *******************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:33", "Friday 21 September 2018 08:27:30 -0400 (0:00:00.067) 0:01:13.013 ****** ", "skipping: [controller-0] => (item={'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', 
'_ansible_item_result': True, 'item': {u'default': False, u'type': u'host', u'root': u'HDD', u'name': u'HDD'}, 'changed': False, '_ansible_ignore_errors': None, '_ansible_item_label': {u'default': False, u'type': u'host', u'root': u'HDD', u'name': u'HDD'}}) => {\"changed\": false, \"item\": {\"changed\": false, \"item\": {\"default\": false, \"name\": \"HDD\", \"root\": \"HDD\", \"type\": \"host\"}, \"skip_reason\": \"Conditional result was False\", \"skipped\": true}, \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item={'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': {u'default': False, u'type': u'host', u'root': u'SSD', u'name': u'SSD'}, 'changed': False, '_ansible_ignore_errors': None, '_ansible_item_label': {u'default': False, u'type': u'host', u'root': u'SSD', u'name': u'SSD'}}) => {\"changed\": false, \"item\": {\"changed\": false, \"item\": {\"default\": false, \"name\": \"SSD\", \"root\": \"SSD\", \"type\": \"host\"}, \"skip_reason\": \"Conditional result was False\", \"skipped\": true}, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : set_fact osd_pool_default_crush_rule to osd_pool_default_crush_replicated_ruleset if release < luminous else osd_pool_default_crush_rule] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:41", "Friday 21 September 2018 08:27:30 -0400 (0:00:00.066) 0:01:13.079 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : insert new default crush rule into daemon to prevent restart] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:45", "Friday 21 September 2018 08:27:31 -0400 (0:00:00.079) 0:01:13.158 ****** ", "skipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : add new default crush rule to ceph.conf] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:54", "Friday 21 September 2018 08:27:31 -0400 (0:00:00.168) 0:01:13.327 ****** ", "skipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : get default value for osd_pool_default_pg_num] ****************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/set_osd_pool_default_pg_num.yml:5", "Friday 21 September 2018 08:27:31 -0400 (0:00:00.054) 0:01:13.382 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : set_fact osd_pool_default_pg_num with pool_default_pg_num (backward compatibility)] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/set_osd_pool_default_pg_num.yml:16", "Friday 21 September 2018 08:27:31 -0400 (0:00:00.052) 0:01:13.434 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : set_fact osd_pool_default_pg_num with default_pool_default_pg_num.stdout] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/set_osd_pool_default_pg_num.yml:21", "Friday 21 September 2018 08:27:31 -0400 (0:00:00.048) 0:01:13.483 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : set_fact 
osd_pool_default_pg_num ceph_conf_overrides.global.osd_pool_default_pg_num] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/set_osd_pool_default_pg_num.yml:27", "Friday 21 September 2018 08:27:31 -0400 (0:00:00.044) 0:01:13.527 ****** ", "ok: [controller-0] => {\"ansible_facts\": {\"osd_pool_default_pg_num\": \"32\"}, \"changed\": false}", "", "TASK [ceph-mon : test if calamari-server is installed] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/calamari.yml:2", "Friday 21 September 2018 08:27:31 -0400 (0:00:00.080) 0:01:13.607 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : increase calamari logging level when debug is on] *************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/calamari.yml:18", "Friday 21 September 2018 08:27:31 -0400 (0:00:00.055) 0:01:13.663 ****** ", "skipping: [controller-0] => (item=cthulhu) => {\"changed\": false, \"item\": \"cthulhu\", \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=calamari_web) => {\"changed\": false, \"item\": \"calamari_web\", \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : initialize the calamari server api] ***************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/calamari.yml:29", "Friday 21 September 2018 08:27:31 -0400 (0:00:00.053) 0:01:13.716 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _mon_handler_called before restart] *******", "Friday 21 September 2018 08:27:31 -0400 (0:00:00.017) 0:01:13.734 ****** ", "ok: [controller-0] => {\"ansible_facts\": {\"_mon_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy mon restart script] **********************", "Friday 21 September 2018 08:27:31 -0400 (0:00:00.073) 0:01:13.808 ****** ", "changed: [controller-0] => {\"changed\": true, \"checksum\": \"83f7af8323e264039a95f266faedb4a665c8f4ca\", \"dest\": \"/tmp/restart_mon_daemon.sh\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"a72fe8d7f7ff92960aa2e96a1b3fe152\", \"mode\": \"0750\", \"owner\": \"root\", \"secontext\": \"unconfined_u:object_r:user_tmp_t:s0\", \"size\": 1398, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1537532851.77-51911260257588/source\", \"state\": \"file\", \"uid\": 0}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mon daemon(s) - non container] ***", "Friday 21 September 2018 08:27:32 -0400 (0:00:00.544) 0:01:14.352 ****** ", "skipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mon daemon(s) - container] *******", "Friday 21 September 2018 08:27:32 -0400 (0:00:00.094) 0:01:14.446 ****** ", "skipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _mon_handler_called after restart] ********", "Friday 21 September 2018 08:27:32 -0400 (0:00:00.135) 0:01:14.582 ****** ", "ok: [controller-0] => {\"ansible_facts\": {\"_mon_handler_called\": false}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : set _osd_handler_called before restart] *******", "Friday 21 September 2018 08:27:32 -0400 (0:00:00.076) 0:01:14.658 ****** ", "ok: 
[controller-0] => {\"ansible_facts\": {\"_osd_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy osd restart script] **********************", "Friday 21 September 2018 08:27:32 -0400 (0:00:00.079) 0:01:14.737 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph osds daemon(s) - non container] ***", "Friday 21 September 2018 08:27:32 -0400 (0:00:00.052) 0:01:14.789 ****** ", "skipping: [controller-0] => (item=ceph-0) => {\"changed\": false, \"item\": \"ceph-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph osds daemon(s) - container] ******", "Friday 21 September 2018 08:27:32 -0400 (0:00:00.088) 0:01:14.878 ****** ", "skipping: [controller-0] => (item=ceph-0) => {\"changed\": false, \"item\": \"ceph-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _osd_handler_called after restart] ********", "Friday 21 September 2018 08:27:32 -0400 (0:00:00.089) 0:01:14.967 ****** ", "ok: [controller-0] => {\"ansible_facts\": {\"_osd_handler_called\": false}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : set _mds_handler_called before restart] *******", "Friday 21 September 2018 08:27:32 -0400 (0:00:00.072) 0:01:15.040 ****** ", "ok: [controller-0] => {\"ansible_facts\": {\"_mds_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy mds restart script] **********************", "Friday 21 September 2018 08:27:33 -0400 (0:00:00.074) 0:01:15.115 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mds daemon(s) - non container] ***", "Friday 21 September 2018 08:27:33 -0400 (0:00:00.049) 0:01:15.165 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mds daemon(s) - container] *******", "Friday 21 September 2018 08:27:33 -0400 (0:00:00.058) 0:01:15.223 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _mds_handler_called after restart] ********", "Friday 21 September 2018 08:27:33 -0400 (0:00:00.057) 0:01:15.281 ****** ", "ok: [controller-0] => {\"ansible_facts\": {\"_mds_handler_called\": false}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : set _rgw_handler_called before restart] *******", "Friday 21 September 2018 08:27:33 -0400 (0:00:00.073) 0:01:15.354 ****** ", "ok: [controller-0] => {\"ansible_facts\": {\"_rgw_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy rgw restart script] **********************", "Friday 21 September 2018 08:27:33 -0400 (0:00:00.078) 0:01:15.432 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph rgw daemon(s) - non container] ***", "Friday 21 September 2018 08:27:33 -0400 (0:00:00.054) 0:01:15.487 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph rgw daemon(s) - container] *******", "Friday 21 September 2018 08:27:33 -0400 (0:00:00.064) 0:01:15.551 ****** ", "skipping: 
[controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _rgw_handler_called after restart] ********", "Friday 21 September 2018 08:27:33 -0400 (0:00:00.062) 0:01:15.613 ****** ", "ok: [controller-0] => {\"ansible_facts\": {\"_rgw_handler_called\": false}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : set _rbdmirror_handler_called before restart] ***", "Friday 21 September 2018 08:27:33 -0400 (0:00:00.081) 0:01:15.695 ****** ", "ok: [controller-0] => {\"ansible_facts\": {\"_rbdmirror_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy rbd mirror restart script] ***************", "Friday 21 September 2018 08:27:33 -0400 (0:00:00.083) 0:01:15.778 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph rbd mirror daemon(s) - non container] ***", "Friday 21 September 2018 08:27:33 -0400 (0:00:00.052) 0:01:15.831 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph rbd mirror daemon(s) - container] ***", "Friday 21 September 2018 08:27:33 -0400 (0:00:00.068) 0:01:15.899 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _rbdmirror_handler_called after restart] ***", "Friday 21 September 2018 08:27:33 -0400 (0:00:00.064) 0:01:15.964 ****** ", "ok: [controller-0] => {\"ansible_facts\": {\"_rbdmirror_handler_called\": false}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : set _mgr_handler_called before restart] *******", "Friday 21 September 2018 08:27:33 -0400 (0:00:00.079) 0:01:16.044 ****** ", "ok: [controller-0] => {\"ansible_facts\": {\"_mgr_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy mgr restart script] **********************", "Friday 21 September 2018 08:27:34 -0400 (0:00:00.082) 0:01:16.126 ****** ", "changed: [controller-0] => {\"changed\": true, \"checksum\": \"3b92c07facdbaa789b36f850d92d7444e2bb6a27\", \"dest\": \"/tmp/restart_mgr_daemon.sh\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"73c8d33ad2b3c95d77ee4b411e06cae6\", \"mode\": \"0750\", \"owner\": \"root\", \"secontext\": \"unconfined_u:object_r:user_tmp_t:s0\", \"size\": 843, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1537532854.1-182050924307964/source\", \"state\": \"file\", \"uid\": 0}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - non container] ***", "Friday 21 September 2018 08:27:34 -0400 (0:00:00.532) 0:01:16.659 ****** ", "skipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - container] *******", "Friday 21 September 2018 08:27:34 -0400 (0:00:00.100) 0:01:16.760 ****** ", "skipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _mgr_handler_called after restart] ********", "Friday 21 September 2018 08:27:34 -0400 (0:00:00.149) 0:01:16.910 ****** ", "ok: [controller-0] => {\"ansible_facts\": {\"_mgr_handler_called\": false}, \"changed\": false}", "META: ran handlers", "META: ran handlers", "", 
"PLAY [mons] ********************************************************************", "META: ran handlers", "", "TASK [set ceph monitor install 'Complete'] *************************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:99", "Friday 21 September 2018 08:27:34 -0400 (0:00:00.121) 0:01:17.031 ****** ", "ok: [controller-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_mon\": {\"end\": \"20180921082734Z\", \"status\": \"Complete\"}}, \"per_host\": false}, \"changed\": false}", "META: ran handlers", "META: ran handlers", "", "PLAY [mgrs] ********************************************************************", "", "TASK [set ceph manager install 'In Progress'] **********************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:111", "Friday 21 September 2018 08:27:35 -0400 (0:00:00.174) 0:01:17.206 ****** ", "ok: [controller-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_mgr\": {\"start\": \"20180921082735Z\", \"status\": \"In Progress\"}}, \"per_host\": false}, \"changed\": false}", "META: ran handlers", "", "TASK [ceph-defaults : check for a mon container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:2", "Friday 21 September 2018 08:27:35 -0400 (0:00:00.094) 0:01:17.300 ****** ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mon-controller-0\"], \"delta\": \"0:00:00.025051\", \"end\": \"2018-09-21 12:27:35.404077\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-09-21 12:27:35.379026\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"509b79aaec28\", \"stdout_lines\": [\"509b79aaec28\"]}", "", "TASK [ceph-defaults : check for an osd container] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:11", "Friday 21 September 2018 08:27:35 -0400 (0:00:00.265) 0:01:17.566 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a mds container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:20", "Friday 21 September 2018 08:27:35 -0400 (0:00:00.053) 0:01:17.619 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a rgw container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:29", "Friday 21 September 2018 08:27:35 -0400 (0:00:00.052) 0:01:17.672 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a mgr container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:38", "Friday 21 September 2018 08:27:35 -0400 (0:00:00.053) 0:01:17.726 ****** ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mgr-controller-0\"], \"delta\": \"0:00:00.023200\", \"end\": \"2018-09-21 12:27:35.820044\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-09-21 12:27:35.796844\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-defaults : check for a rbd 
mirror container] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:47", "Friday 21 September 2018 08:27:35 -0400 (0:00:00.254) 0:01:17.980 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a nfs container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:56", "Friday 21 September 2018 08:27:35 -0400 (0:00:00.054) 0:01:18.035 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph mon socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:2", "Friday 21 September 2018 08:27:35 -0400 (0:00:00.050) 0:01:18.086 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph mon socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:11", "Friday 21 September 2018 08:27:36 -0400 (0:00:00.050) 0:01:18.137 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph mon socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:21", "Friday 21 September 2018 08:27:36 -0400 (0:00:00.052) 0:01:18.190 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph osd socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:30", "Friday 21 September 2018 08:27:36 -0400 (0:00:00.050) 0:01:18.240 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph osd socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:40", "Friday 21 September 2018 08:27:36 -0400 (0:00:00.135) 0:01:18.376 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph osd socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:50", "Friday 21 September 2018 08:27:36 -0400 (0:00:00.050) 0:01:18.427 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph mds socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:59", "Friday 21 September 2018 08:27:36 -0400 (0:00:00.048) 0:01:18.475 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph mds socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:69", "Friday 21 September 2018 08:27:36 -0400 (0:00:00.049) 0:01:18.524 ****** ", "skipping: [controller-0] => 
{\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph mds socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:79", "Friday 21 September 2018 08:27:36 -0400 (0:00:00.047) 0:01:18.572 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph rgw socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:88", "Friday 21 September 2018 08:27:36 -0400 (0:00:00.051) 0:01:18.623 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph rgw socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:98", "Friday 21 September 2018 08:27:36 -0400 (0:00:00.062) 0:01:18.685 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph rgw socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:108", "Friday 21 September 2018 08:27:36 -0400 (0:00:00.050) 0:01:18.736 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph mgr socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:117", "Friday 21 September 2018 08:27:36 -0400 (0:00:00.049) 0:01:18.785 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph mgr socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:127", "Friday 21 September 2018 08:27:36 -0400 (0:00:00.051) 0:01:18.837 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph mgr socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:137", "Friday 21 September 2018 08:27:36 -0400 (0:00:00.047) 0:01:18.884 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph rbd mirror socket] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:146", "Friday 21 September 2018 08:27:36 -0400 (0:00:00.050) 0:01:18.935 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph rbd mirror socket is in-use] ***********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:156", "Friday 21 September 2018 08:27:36 -0400 (0:00:00.052) 0:01:18.987 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph rbd mirror socket if exists and not used by a process] ***", "task path: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:166", "Friday 21 September 2018 08:27:36 -0400 (0:00:00.050) 0:01:19.037 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph nfs ganesha socket] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:175", "Friday 21 September 2018 08:27:36 -0400 (0:00:00.052) 0:01:19.090 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph nfs ganesha socket is in-use] **********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:184", "Friday 21 September 2018 08:27:37 -0400 (0:00:00.048) 0:01:19.139 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph nfs ganesha socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:194", "Friday 21 September 2018 08:27:37 -0400 (0:00:00.048) 0:01:19.187 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if it is atomic host] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:2", "Friday 21 September 2018 08:27:37 -0400 (0:00:00.060) 0:01:19.248 ****** ", "ok: [controller-0] => {\"changed\": false, \"stat\": {\"exists\": false}}", "", "TASK [ceph-defaults : set_fact is_atomic] **************************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:7", "Friday 21 September 2018 08:27:37 -0400 (0:00:00.230) 0:01:19.479 ****** ", "ok: [controller-0] => {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact monitor_name ansible_hostname] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:11", "Friday 21 September 2018 08:27:37 -0400 (0:00:00.083) 0:01:19.562 ****** ", "ok: [controller-0] => {\"ansible_facts\": {\"monitor_name\": \"controller-0\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact monitor_name ansible_fqdn] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:17", "Friday 21 September 2018 08:27:37 -0400 (0:00:00.088) 0:01:19.650 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact docker_exec_cmd] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:23", "Friday 21 September 2018 08:27:37 -0400 (0:00:00.082) 0:01:19.732 ****** ", "ok: [controller-0 -> 192.168.24.18] => {\"ansible_facts\": {\"docker_exec_cmd\": \"docker exec ceph-mon-controller-0\"}, \"changed\": false}", "", "TASK [ceph-defaults : is ceph running already?] 
********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:34", "Friday 21 September 2018 08:27:37 -0400 (0:00:00.162) 0:01:19.894 ****** ", "ok: [controller-0 -> 192.168.24.18] => {\"changed\": false, \"cmd\": [\"timeout\", \"5\", \"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"-s\", \"-f\", \"json\"], \"delta\": \"0:00:00.318683\", \"end\": \"2018-09-21 12:27:38.306011\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-09-21 12:27:37.987328\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\\n{\\\"fsid\\\":\\\"8fedf068-bd95-11e8-ba69-5254006eda59\\\",\\\"health\\\":{\\\"checks\\\":{},\\\"status\\\":\\\"HEALTH_OK\\\",\\\"summary\\\":[{\\\"severity\\\":\\\"HEALTH_WARN\\\",\\\"summary\\\":\\\"'ceph health' JSON format has changed in luminous. If you see this your monitoring system is scraping the wrong fields. Disable this with 'mon health preluminous compat warning = false'\\\"}],\\\"overall_status\\\":\\\"HEALTH_WARN\\\"},\\\"election_epoch\\\":3,\\\"quorum\\\":[0],\\\"quorum_names\\\":[\\\"controller-0\\\"],\\\"monmap\\\":{\\\"epoch\\\":1,\\\"fsid\\\":\\\"8fedf068-bd95-11e8-ba69-5254006eda59\\\",\\\"modified\\\":\\\"2018-09-21 12:27:11.445099\\\",\\\"created\\\":\\\"2018-09-21 12:27:11.445099\\\",\\\"features\\\":{\\\"persistent\\\":[\\\"kraken\\\",\\\"luminous\\\"],\\\"optional\\\":[]},\\\"mons\\\":[{\\\"rank\\\":0,\\\"name\\\":\\\"controller-0\\\",\\\"addr\\\":\\\"172.17.3.16:6789/0\\\",\\\"public_addr\\\":\\\"172.17.3.16:6789/0\\\"}]},\\\"osdmap\\\":{\\\"osdmap\\\":{\\\"epoch\\\":1,\\\"num_osds\\\":0,\\\"num_up_osds\\\":0,\\\"num_in_osds\\\":0,\\\"full\\\":false,\\\"nearfull\\\":false,\\\"num_remapped_pgs\\\":0}},\\\"pgmap\\\":{\\\"pgs_by_state\\\":[],\\\"num_pgs\\\":0,\\\"num_pools\\\":0,\\\"num_objects\\\":0,\\\"data_bytes\\\":0,\\\"bytes_used\\\":0,\\\"bytes_avail\\\":0,\\\"bytes_total\\\":0},\\\"fsmap\\\":{\\\"epoch\\\":1,\\\"by_rank\\\":[]},\\\"mgrmap\\\":{\\\"epoch\\\":1,\\\"active_gid\\\":0,\\\"active_name\\\":\\\"\\\",\\\"active_addr\\\":\\\"-\\\",\\\"available\\\":false,\\\"standbys\\\":[],\\\"modules\\\":[\\\"balancer\\\",\\\"restful\\\",\\\"status\\\"],\\\"available_modules\\\":[],\\\"services\\\":{}},\\\"servicemap\\\":{\\\"epoch\\\":1,\\\"modified\\\":\\\"0.000000\\\",\\\"services\\\":{}}}\", \"stdout_lines\": [\"\", \"{\\\"fsid\\\":\\\"8fedf068-bd95-11e8-ba69-5254006eda59\\\",\\\"health\\\":{\\\"checks\\\":{},\\\"status\\\":\\\"HEALTH_OK\\\",\\\"summary\\\":[{\\\"severity\\\":\\\"HEALTH_WARN\\\",\\\"summary\\\":\\\"'ceph health' JSON format has changed in luminous. If you see this your monitoring system is scraping the wrong fields. 
Disable this with 'mon health preluminous compat warning = false'\\\"}],\\\"overall_status\\\":\\\"HEALTH_WARN\\\"},\\\"election_epoch\\\":3,\\\"quorum\\\":[0],\\\"quorum_names\\\":[\\\"controller-0\\\"],\\\"monmap\\\":{\\\"epoch\\\":1,\\\"fsid\\\":\\\"8fedf068-bd95-11e8-ba69-5254006eda59\\\",\\\"modified\\\":\\\"2018-09-21 12:27:11.445099\\\",\\\"created\\\":\\\"2018-09-21 12:27:11.445099\\\",\\\"features\\\":{\\\"persistent\\\":[\\\"kraken\\\",\\\"luminous\\\"],\\\"optional\\\":[]},\\\"mons\\\":[{\\\"rank\\\":0,\\\"name\\\":\\\"controller-0\\\",\\\"addr\\\":\\\"172.17.3.16:6789/0\\\",\\\"public_addr\\\":\\\"172.17.3.16:6789/0\\\"}]},\\\"osdmap\\\":{\\\"osdmap\\\":{\\\"epoch\\\":1,\\\"num_osds\\\":0,\\\"num_up_osds\\\":0,\\\"num_in_osds\\\":0,\\\"full\\\":false,\\\"nearfull\\\":false,\\\"num_remapped_pgs\\\":0}},\\\"pgmap\\\":{\\\"pgs_by_state\\\":[],\\\"num_pgs\\\":0,\\\"num_pools\\\":0,\\\"num_objects\\\":0,\\\"data_bytes\\\":0,\\\"bytes_used\\\":0,\\\"bytes_avail\\\":0,\\\"bytes_total\\\":0},\\\"fsmap\\\":{\\\"epoch\\\":1,\\\"by_rank\\\":[]},\\\"mgrmap\\\":{\\\"epoch\\\":1,\\\"active_gid\\\":0,\\\"active_name\\\":\\\"\\\",\\\"active_addr\\\":\\\"-\\\",\\\"available\\\":false,\\\"standbys\\\":[],\\\"modules\\\":[\\\"balancer\\\",\\\"restful\\\",\\\"status\\\"],\\\"available_modules\\\":[],\\\"services\\\":{}},\\\"servicemap\\\":{\\\"epoch\\\":1,\\\"modified\\\":\\\"0.000000\\\",\\\"services\\\":{}}}\"]}", "", "TASK [ceph-defaults : check if /var/lib/mistral/overcloud/ceph-ansible/fetch_dir directory exists] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:47", "Friday 21 September 2018 08:27:38 -0400 (0:00:00.579) 0:01:20.474 ****** ", "ok: [controller-0 -> localhost] => {\"changed\": false, \"stat\": {\"exists\": false}}", "", "TASK [ceph-defaults : set_fact ceph_current_status rc 1] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:57", "Friday 21 September 2018 08:27:38 -0400 (0:00:00.198) 0:01:20.673 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : create a local fetch directory if it does not exist] *****", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:64", "Friday 21 September 2018 08:27:38 -0400 (0:00:00.056) 0:01:20.729 ****** ", "ok: [controller-0 -> localhost] => {\"changed\": false, \"gid\": 42430, \"group\": \"mistral\", \"mode\": \"0755\", \"owner\": \"mistral\", \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir\", \"size\": 50, \"state\": \"directory\", \"uid\": 42430}", "", "TASK [ceph-defaults : set_fact ceph_current_status (convert to json)] **********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:74", "Friday 21 September 2018 08:27:38 -0400 (0:00:00.203) 0:01:20.933 ****** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_current_status\": {\"election_epoch\": 3, \"fsid\": \"8fedf068-bd95-11e8-ba69-5254006eda59\", \"fsmap\": {\"by_rank\": [], \"epoch\": 1}, \"health\": {\"checks\": {}, \"overall_status\": \"HEALTH_WARN\", \"status\": \"HEALTH_OK\", \"summary\": [{\"severity\": \"HEALTH_WARN\", \"summary\": \"'ceph health' JSON format has changed in luminous. If you see this your monitoring system is scraping the wrong fields. 
Disable this with 'mon health preluminous compat warning = false'\"}]}, \"mgrmap\": {\"active_addr\": \"-\", \"active_gid\": 0, \"active_name\": \"\", \"available\": false, \"available_modules\": [], \"epoch\": 1, \"modules\": [\"balancer\", \"restful\", \"status\"], \"services\": {}, \"standbys\": []}, \"monmap\": {\"created\": \"2018-09-21 12:27:11.445099\", \"epoch\": 1, \"features\": {\"optional\": [], \"persistent\": [\"kraken\", \"luminous\"]}, \"fsid\": \"8fedf068-bd95-11e8-ba69-5254006eda59\", \"modified\": \"2018-09-21 12:27:11.445099\", \"mons\": [{\"addr\": \"172.17.3.16:6789/0\", \"name\": \"controller-0\", \"public_addr\": \"172.17.3.16:6789/0\", \"rank\": 0}]}, \"osdmap\": {\"osdmap\": {\"epoch\": 1, \"full\": false, \"nearfull\": false, \"num_in_osds\": 0, \"num_osds\": 0, \"num_remapped_pgs\": 0, \"num_up_osds\": 0}}, \"pgmap\": {\"bytes_avail\": 0, \"bytes_total\": 0, \"bytes_used\": 0, \"data_bytes\": 0, \"num_objects\": 0, \"num_pgs\": 0, \"num_pools\": 0, \"pgs_by_state\": []}, \"quorum\": [0], \"quorum_names\": [\"controller-0\"], \"servicemap\": {\"epoch\": 1, \"modified\": \"0.000000\", \"services\": {}}}}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact fsid from ceph_current_status] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:81", "Friday 21 September 2018 08:27:38 -0400 (0:00:00.097) 0:01:21.030 ****** ", "ok: [controller-0] => {\"ansible_facts\": {\"fsid\": \"8fedf068-bd95-11e8-ba69-5254006eda59\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact ceph_release ceph_stable_release] ***************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:88", "Friday 21 September 2018 08:27:39 -0400 (0:00:00.093) 0:01:21.124 ****** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_release\": \"dummy\"}, \"changed\": false}", "", "TASK [ceph-defaults : generate cluster fsid] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:92", "Friday 21 September 2018 08:27:39 -0400 (0:00:00.098) 0:01:21.223 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : reuse cluster fsid when cluster is already running] ******", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:103", "Friday 21 September 2018 08:27:39 -0400 (0:00:00.057) 0:01:21.281 ****** ", "changed: [controller-0 -> localhost] => {\"changed\": true, \"cmd\": \"echo 8fedf068-bd95-11e8-ba69-5254006eda59 | tee /var/lib/mistral/overcloud/ceph-ansible/fetch_dir/ceph_cluster_uuid.conf\", \"delta\": \"0:00:00.644982\", \"end\": \"2018-09-21 08:27:39.969836\", \"rc\": 0, \"start\": \"2018-09-21 08:27:39.324854\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"8fedf068-bd95-11e8-ba69-5254006eda59\", \"stdout_lines\": [\"8fedf068-bd95-11e8-ba69-5254006eda59\"]}", "", "TASK [ceph-defaults : read cluster fsid if it already exists] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:112", "Friday 21 September 2018 08:27:40 -0400 (0:00:00.849) 0:01:22.130 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact fsid] *******************************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:124", "Friday 21 September 2018 08:27:40 -0400 (0:00:00.061) 0:01:22.192 ****** ", "skipping: [controller-0] => 
{\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact mds_name ansible_hostname] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:130", "Friday 21 September 2018 08:27:40 -0400 (0:00:00.047) 0:01:22.240 ****** ", "ok: [controller-0] => {\"ansible_facts\": {\"mds_name\": \"controller-0\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact mds_name ansible_fqdn] **************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:136", "Friday 21 September 2018 08:27:40 -0400 (0:00:00.078) 0:01:22.319 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact rbd_client_directory_owner ceph] ****************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:142", "Friday 21 September 2018 08:27:40 -0400 (0:00:00.046) 0:01:22.366 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact rbd_client_directory_group rbd_client_directory_group] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:149", "Friday 21 September 2018 08:27:40 -0400 (0:00:00.047) 0:01:22.413 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact rbd_client_directory_mode 0770] *****************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:156", "Friday 21 September 2018 08:27:40 -0400 (0:00:00.052) 0:01:22.466 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : resolve device link(s)] **********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:163", "Friday 21 September 2018 08:27:40 -0400 (0:00:00.053) 0:01:22.519 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact build devices from resolved symlinks] ***********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:173", "Friday 21 September 2018 08:27:40 -0400 (0:00:00.052) 0:01:22.572 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact build final devices list] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:182", "Friday 21 September 2018 08:27:40 -0400 (0:00:00.051) 0:01:22.623 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for debian based system - non container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:190", "Friday 21 September 2018 08:27:40 -0400 (0:00:00.051) 0:01:22.675 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for red hat based system - non container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:197", "Friday 21 September 2018 08:27:40 -0400 (0:00:00.054) 0:01:22.730 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", 
"TASK [ceph-defaults : set_fact ceph_uid for debian based system - container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:204", "Friday 21 September 2018 08:27:40 -0400 (0:00:00.069) 0:01:22.799 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for red hat based system - container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:211", "Friday 21 September 2018 08:27:40 -0400 (0:00:00.056) 0:01:22.856 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for red hat] ***************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:218", "Friday 21 September 2018 08:27:40 -0400 (0:00:00.056) 0:01:22.913 ****** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_uid\": 167}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact rgw_hostname - fqdn] ****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:225", "Friday 21 September 2018 08:27:40 -0400 (0:00:00.087) 0:01:23.000 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact rgw_hostname - no fqdn] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:235", "Friday 21 September 2018 08:27:40 -0400 (0:00:00.053) 0:01:23.054 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_directories] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:2", "Friday 21 September 2018 08:27:40 -0400 (0:00:00.058) 0:01:23.112 ****** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_directories\": [\"/etc/ceph\", \"/var/lib/ceph/\", \"/var/lib/ceph/mon\", \"/var/lib/ceph/osd\", \"/var/lib/ceph/mds\", \"/var/lib/ceph/tmp\", \"/var/lib/ceph/radosgw\", \"/var/lib/ceph/bootstrap-rgw\", \"/var/lib/ceph/bootstrap-mds\", \"/var/lib/ceph/bootstrap-osd\", \"/var/lib/ceph/bootstrap-rbd\", \"/var/run/ceph\"]}, \"changed\": false}", "", "TASK [ceph-defaults : create ceph initial directories] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:18", "Friday 21 September 2018 08:27:41 -0400 (0:00:00.185) 0:01:23.298 ****** ", "ok: [controller-0] => (item=/etc/ceph) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 117, \"state\": \"directory\", \"uid\": 167}", "ok: [controller-0] => (item=/var/lib/ceph/) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 160, \"state\": \"directory\", \"uid\": 167}", "ok: [controller-0] => (item=/var/lib/ceph/mon) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/mon\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mon\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 31, \"state\": \"directory\", \"uid\": 
167}", "ok: [controller-0] => (item=/var/lib/ceph/osd) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/osd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "ok: [controller-0] => (item=/var/lib/ceph/mds) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/mds\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 31, \"state\": \"directory\", \"uid\": 167}", "ok: [controller-0] => (item=/var/lib/ceph/tmp) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/tmp\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/tmp\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 28, \"state\": \"directory\", \"uid\": 167}", "ok: [controller-0] => (item=/var/lib/ceph/radosgw) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/radosgw\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/radosgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 35, \"state\": \"directory\", \"uid\": 167}", "ok: [controller-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 167}", "ok: [controller-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 167}", "ok: [controller-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 167}", "ok: [controller-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 167}", "ok: [controller-0] => (item=/var/run/ceph) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/run/ceph\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/run/ceph\", \"secontext\": \"unconfined_u:object_r:var_run_t:s0\", \"size\": 60, \"state\": \"directory\", \"uid\": 167}", "", "TASK [ceph-docker-common : fail if systemd is not present] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml:2", "Friday 21 September 2018 08:27:43 -0400 (0:00:02.199) 0:01:25.497 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : make sure monitor_interface, monitor_address or monitor_address_block is defined] ***", "task path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:2", "Friday 21 September 2018 08:27:43 -0400 (0:00:00.052) 0:01:25.549 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : make sure radosgw_interface, radosgw_address or radosgw_address_block is defined] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:11", "Friday 21 September 2018 08:27:43 -0400 (0:00:00.061) 0:01:25.611 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : warning deprecation for fqdn configuration] *********", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:20", "Friday 21 September 2018 08:27:43 -0400 (0:00:00.049) 0:01:25.660 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : remove ceph udev rules] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml:2", "Friday 21 September 2018 08:27:43 -0400 (0:00:00.048) 0:01:25.709 ****** ", "ok: [controller-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) => {\"changed\": false, \"item\": \"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"path\": \"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"state\": \"absent\"}", "ok: [controller-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) => {\"changed\": false, \"item\": \"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"path\": \"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"state\": \"absent\"}", "", "TASK [ceph-docker-common : set_fact monitor_name ansible_hostname] *************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:14", "Friday 21 September 2018 08:27:44 -0400 (0:00:00.526) 0:01:26.236 ****** ", "ok: [controller-0] => {\"ansible_facts\": {\"monitor_name\": \"controller-0\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact monitor_name ansible_fqdn] *****************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:20", "Friday 21 September 2018 08:27:44 -0400 (0:00:00.085) 0:01:26.321 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : get docker version] *********************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:26", "Friday 21 September 2018 08:27:44 -0400 (0:00:00.057) 0:01:26.379 ****** ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"--version\"], \"delta\": \"0:00:00.025049\", \"end\": \"2018-09-21 12:27:44.481276\", \"rc\": 0, \"start\": \"2018-09-21 12:27:44.456227\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Docker version 1.13.1, build 6e3bb8e/1.13.1\", \"stdout_lines\": [\"Docker version 1.13.1, build 6e3bb8e/1.13.1\"]}", "", "TASK [ceph-docker-common : set_fact ceph_docker_version ceph_docker_version.stdout.split] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:32", "Friday 21 September 2018 08:27:44 -0400 (0:00:00.262) 0:01:26.641 ****** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_docker_version\": \"1.13.1,\"}, \"changed\": false}", "", "TASK [ceph-docker-common : check if a cluster is already 
running] **************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:42", "Friday 21 September 2018 08:27:44 -0400 (0:00:00.087) 0:01:26.728 ****** ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mon-controller-0\"], \"delta\": \"0:00:00.026405\", \"end\": \"2018-09-21 12:27:44.827938\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-09-21 12:27:44.801533\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"509b79aaec28\", \"stdout_lines\": [\"509b79aaec28\"]}", "", "TASK [ceph-docker-common : set_fact ceph_config_keys] **************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:2", "Friday 21 September 2018 08:27:44 -0400 (0:00:00.257) 0:01:26.985 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact tmp_ceph_mgr_keys add mgr keys to config and keys paths] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:13", "Friday 21 September 2018 08:27:44 -0400 (0:00:00.056) 0:01:27.042 ****** ", "skipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mgr_keys convert mgr keys to an array] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:20", "Friday 21 September 2018 08:27:44 -0400 (0:00:00.068) 0:01:27.110 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_config_keys merge mgr keys to config and keys paths] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:25", "Friday 21 September 2018 08:27:45 -0400 (0:00:00.058) 0:01:27.168 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : stat for ceph config and keys] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:30", "Friday 21 September 2018 08:27:45 -0400 (0:00:00.073) 0:01:27.242 ****** ", "skipping: [controller-0] => (item=/etc/ceph/ceph.client.admin.keyring) => {\"changed\": false, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=/etc/ceph/ceph.mon.keyring) => {\"changed\": false, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) => {\"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) => {\"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring) => {\"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) => {\"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"skip_reason\": 
\"Conditional result was False\"}", "", "TASK [ceph-docker-common : fail if we find existing cluster files] *************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml:5", "Friday 21 September 2018 08:27:45 -0400 (0:00:00.136) 0:01:27.379 ****** ", "skipping: [controller-0] => (item=[u'/etc/ceph/ceph.client.admin.keyring', {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.client.admin.keyring', 'changed': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.client.admin.keyring'}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.client.admin.keyring\", {\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/ceph.client.admin.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"changed\": false, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"skip_reason\": \"Conditional result was False\", \"skipped\": true}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/etc/ceph/ceph.mon.keyring', {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.mon.keyring', 'changed': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.mon.keyring'}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.mon.keyring\", {\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/ceph.mon.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"changed\": false, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"skip_reason\": \"Conditional result was False\", \"skipped\": true}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-osd/ceph.keyring', {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', 'changed': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-osd/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-osd/ceph.keyring\", {\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"skip_reason\": \"Conditional result was False\", \"skipped\": true}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', 'changed': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", {\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"skip_reason\": \"Conditional result was False\", \"skipped\": true}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-mds/ceph.keyring', 
{'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-mds/ceph.keyring', 'changed': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-mds/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-mds/ceph.keyring\", {\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"skip_reason\": \"Conditional result was False\", \"skipped\": true}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', 'changed': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", {\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"skip_reason\": \"Conditional result was False\", \"skipped\": true}], \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : check ntp installation on atomic] *******************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml:2", "Friday 21 September 2018 08:27:45 -0400 (0:00:00.148) 0:01:27.528 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : start the ntp service] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml:6", "Friday 21 September 2018 08:27:45 -0400 (0:00:00.048) 0:01:27.577 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : check ntp installation on redhat or suse] ***********", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:2", "Friday 21 September 2018 08:27:45 -0400 (0:00:00.048) 0:01:27.625 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : install ntp on redhat or suse] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:13", "Friday 21 September 2018 08:27:45 -0400 (0:00:00.064) 0:01:27.689 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : start the ntp service] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml:7", "Friday 21 September 2018 08:27:45 -0400 (0:00:00.057) 0:01:27.746 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : check ntp installation on debian] *******************", "task path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:2", "Friday 21 September 2018 08:27:45 -0400 (0:00:00.055) 0:01:27.802 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : install ntp on debian] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:11", "Friday 21 September 2018 08:27:45 -0400 (0:00:00.049) 0:01:27.852 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : start the ntp service] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml:7", "Friday 21 September 2018 08:27:45 -0400 (0:00:00.050) 0:01:27.902 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph mon container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:3", "Friday 21 September 2018 08:27:45 -0400 (0:00:00.052) 0:01:27.955 ****** ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"inspect\", \"509b79aaec28\"], \"delta\": \"0:00:00.023674\", \"end\": \"2018-09-21 12:27:46.066714\", \"rc\": 0, \"start\": \"2018-09-21 12:27:46.043040\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"[\\n {\\n \\\"Id\\\": \\\"509b79aaec283c68b35bed0f1d02686fa714f6bcce88661dadbdd26eed12504d\\\",\\n \\\"Created\\\": \\\"2018-09-21T12:27:10.49665363Z\\\",\\n \\\"Path\\\": \\\"/entrypoint.sh\\\",\\n \\\"Args\\\": [],\\n \\\"State\\\": {\\n \\\"Status\\\": \\\"running\\\",\\n \\\"Running\\\": true,\\n \\\"Paused\\\": false,\\n \\\"Restarting\\\": false,\\n \\\"OOMKilled\\\": false,\\n \\\"Dead\\\": false,\\n \\\"Pid\\\": 44943,\\n \\\"ExitCode\\\": 0,\\n \\\"Error\\\": \\\"\\\",\\n \\\"StartedAt\\\": \\\"2018-09-21T12:27:10.65506497Z\\\",\\n \\\"FinishedAt\\\": \\\"0001-01-01T00:00:00Z\\\"\\n },\\n \\\"Image\\\": \\\"sha256:fa3b551f095247f90f7ace123cc14519b26abd4c042eb0270ab2452d9636a41f\\\",\\n \\\"ResolvConfPath\\\": \\\"/var/lib/docker/containers/509b79aaec283c68b35bed0f1d02686fa714f6bcce88661dadbdd26eed12504d/resolv.conf\\\",\\n \\\"HostnamePath\\\": \\\"/var/lib/docker/containers/509b79aaec283c68b35bed0f1d02686fa714f6bcce88661dadbdd26eed12504d/hostname\\\",\\n \\\"HostsPath\\\": \\\"/var/lib/docker/containers/509b79aaec283c68b35bed0f1d02686fa714f6bcce88661dadbdd26eed12504d/hosts\\\",\\n \\\"LogPath\\\": \\\"\\\",\\n \\\"Name\\\": \\\"/ceph-mon-controller-0\\\",\\n \\\"RestartCount\\\": 0,\\n \\\"Driver\\\": \\\"overlay2\\\",\\n \\\"MountLabel\\\": \\\"\\\",\\n \\\"ProcessLabel\\\": \\\"\\\",\\n \\\"AppArmorProfile\\\": \\\"\\\",\\n \\\"ExecIDs\\\": null,\\n \\\"HostConfig\\\": {\\n \\\"Binds\\\": [\\n \\\"/var/lib/ceph:/var/lib/ceph:z\\\",\\n \\\"/etc/ceph:/etc/ceph:z\\\",\\n \\\"/var/run/ceph:/var/run/ceph:z\\\",\\n \\\"/etc/localtime:/etc/localtime:ro\\\"\\n ],\\n \\\"ContainerIDFile\\\": \\\"\\\",\\n \\\"LogConfig\\\": {\\n \\\"Type\\\": \\\"journald\\\",\\n \\\"Config\\\": {}\\n },\\n \\\"NetworkMode\\\": \\\"host\\\",\\n \\\"PortBindings\\\": {},\\n \\\"RestartPolicy\\\": {\\n \\\"Name\\\": \\\"no\\\",\\n \\\"MaximumRetryCount\\\": 0\\n },\\n \\\"AutoRemove\\\": true,\\n \\\"VolumeDriver\\\": \\\"\\\",\\n \\\"VolumesFrom\\\": null,\\n \\\"CapAdd\\\": null,\\n \\\"CapDrop\\\": null,\\n 
\\\"Dns\\\": [],\\n \\\"DnsOptions\\\": [],\\n \\\"DnsSearch\\\": [],\\n \\\"ExtraHosts\\\": null,\\n \\\"GroupAdd\\\": null,\\n \\\"IpcMode\\\": \\\"\\\",\\n \\\"Cgroup\\\": \\\"\\\",\\n \\\"Links\\\": null,\\n \\\"OomScoreAdj\\\": 0,\\n \\\"PidMode\\\": \\\"\\\",\\n \\\"Privileged\\\": false,\\n \\\"PublishAllPorts\\\": false,\\n \\\"ReadonlyRootfs\\\": false,\\n \\\"SecurityOpt\\\": null,\\n \\\"UTSMode\\\": \\\"\\\",\\n \\\"UsernsMode\\\": \\\"\\\",\\n \\\"ShmSize\\\": 67108864,\\n \\\"Runtime\\\": \\\"docker-runc\\\",\\n \\\"ConsoleSize\\\": [\\n 0,\\n 0\\n ],\\n \\\"Isolation\\\": \\\"\\\",\\n \\\"CpuShares\\\": 0,\\n \\\"Memory\\\": 3221225472,\\n \\\"NanoCpus\\\": 0,\\n \\\"CgroupParent\\\": \\\"\\\",\\n \\\"BlkioWeight\\\": 0,\\n \\\"BlkioWeightDevice\\\": null,\\n \\\"BlkioDeviceReadBps\\\": null,\\n \\\"BlkioDeviceWriteBps\\\": null,\\n \\\"BlkioDeviceReadIOps\\\": null,\\n \\\"BlkioDeviceWriteIOps\\\": null,\\n \\\"CpuPeriod\\\": 0,\\n \\\"CpuQuota\\\": 100000,\\n \\\"CpuRealtimePeriod\\\": 0,\\n \\\"CpuRealtimeRuntime\\\": 0,\\n \\\"CpusetCpus\\\": \\\"\\\",\\n \\\"CpusetMems\\\": \\\"\\\",\\n \\\"Devices\\\": [],\\n \\\"DiskQuota\\\": 0,\\n \\\"KernelMemory\\\": 0,\\n \\\"MemoryReservation\\\": 0,\\n \\\"MemorySwap\\\": 6442450944,\\n \\\"MemorySwappiness\\\": -1,\\n \\\"OomKillDisable\\\": false,\\n \\\"PidsLimit\\\": 0,\\n \\\"Ulimits\\\": null,\\n \\\"CpuCount\\\": 0,\\n \\\"CpuPercent\\\": 0,\\n \\\"IOMaximumIOps\\\": 0,\\n \\\"IOMaximumBandwidth\\\": 0\\n },\\n \\\"GraphDriver\\\": {\\n \\\"Name\\\": \\\"overlay2\\\",\\n \\\"Data\\\": {\\n \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/f75ba8cf619ccb639511052485b7795aa6ccc2ee3261bfe40f3db250e2bc173a-init/diff:/var/lib/docker/overlay2/647ac35ac16d9dfe175f07dd44786615796abec3ec2955371cd57b2bc31e071d/diff:/var/lib/docker/overlay2/72c93a89fa782b05dc2585f4d22d029cb845c66e38924914071ba886d94bed8c/diff:/var/lib/docker/overlay2/1c6053643a9c6bc0506bbea8ee537d1f921ebdc802eafc5cf82c4566e0c5bbd4/diff\\\",\\n \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/f75ba8cf619ccb639511052485b7795aa6ccc2ee3261bfe40f3db250e2bc173a/merged\\\",\\n \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/f75ba8cf619ccb639511052485b7795aa6ccc2ee3261bfe40f3db250e2bc173a/diff\\\",\\n \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/f75ba8cf619ccb639511052485b7795aa6ccc2ee3261bfe40f3db250e2bc173a/work\\\"\\n }\\n },\\n \\\"Mounts\\\": [\\n {\\n \\\"Type\\\": \\\"bind\\\",\\n \\\"Source\\\": \\\"/var/lib/ceph\\\",\\n \\\"Destination\\\": \\\"/var/lib/ceph\\\",\\n \\\"Mode\\\": \\\"z\\\",\\n \\\"RW\\\": true,\\n \\\"Propagation\\\": \\\"rprivate\\\"\\n },\\n {\\n \\\"Type\\\": \\\"bind\\\",\\n \\\"Source\\\": \\\"/etc/ceph\\\",\\n \\\"Destination\\\": \\\"/etc/ceph\\\",\\n \\\"Mode\\\": \\\"z\\\",\\n \\\"RW\\\": true,\\n \\\"Propagation\\\": \\\"rprivate\\\"\\n },\\n {\\n \\\"Type\\\": \\\"bind\\\",\\n \\\"Source\\\": \\\"/var/run/ceph\\\",\\n \\\"Destination\\\": \\\"/var/run/ceph\\\",\\n \\\"Mode\\\": \\\"z\\\",\\n \\\"RW\\\": true,\\n \\\"Propagation\\\": \\\"rprivate\\\"\\n },\\n {\\n \\\"Type\\\": \\\"bind\\\",\\n \\\"Source\\\": \\\"/etc/localtime\\\",\\n \\\"Destination\\\": \\\"/etc/localtime\\\",\\n \\\"Mode\\\": \\\"ro\\\",\\n \\\"RW\\\": false,\\n \\\"Propagation\\\": \\\"rprivate\\\"\\n }\\n ],\\n \\\"Config\\\": {\\n \\\"Hostname\\\": \\\"controller-0\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": true,\\n \\\"AttachStderr\\\": true,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": 
{},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"IP_VERSION=4\\\",\\n \\\"MON_IP=172.17.3.16\\\",\\n \\\"CLUSTER=ceph\\\",\\n \\\"FSID=8fedf068-bd95-11e8-ba69-5254006eda59\\\",\\n \\\"CEPH_PUBLIC_NETWORK=172.17.3.0/24\\\",\\n \\\"CEPH_DAEMON=MON\\\",\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": null,\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"192.168.24.1:8787/rhceph:3-12\\\",\\n \\\"Volumes\\\": null,\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": null,\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"stable-3.0\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\\n \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.expose-services\\\": \\\"\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"12\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\\n \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\\n \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"NetworkSettings\\\": {\\n \\\"Bridge\\\": \\\"\\\",\\n \\\"SandboxID\\\": \\\"b6f9d3e5ebf8f9e507a2edf513fec904d52c3ce4a9dc538930cf89c905ec467c\\\",\\n \\\"HairpinMode\\\": false,\\n \\\"LinkLocalIPv6Address\\\": \\\"\\\",\\n \\\"LinkLocalIPv6PrefixLen\\\": 0,\\n \\\"Ports\\\": {},\\n \\\"SandboxKey\\\": \\\"/var/run/docker/netns/default\\\",\\n \\\"SecondaryIPAddresses\\\": null,\\n \\\"SecondaryIPv6Addresses\\\": null,\\n \\\"EndpointID\\\": \\\"\\\",\\n \\\"Gateway\\\": \\\"\\\",\\n \\\"GlobalIPv6Address\\\": \\\"\\\",\\n \\\"GlobalIPv6PrefixLen\\\": 0,\\n \\\"IPAddress\\\": \\\"\\\",\\n \\\"IPPrefixLen\\\": 0,\\n \\\"IPv6Gateway\\\": \\\"\\\",\\n \\\"MacAddress\\\": \\\"\\\",\\n \\\"Networks\\\": {\\n \\\"host\\\": {\\n \\\"IPAMConfig\\\": null,\\n \\\"Links\\\": null,\\n \\\"Aliases\\\": null,\\n \\\"NetworkID\\\": \\\"cf6351ff3e7cd6c1bb62a77ff0c2ac7bebe8ea9f0d1c52e85ce35ff40982ff70\\\",\\n \\\"EndpointID\\\": \\\"218ed1d48845d795fcd4591f871de6fec3cd8468a7e5188e3eb509498b3a2407\\\",\\n \\\"Gateway\\\": \\\"\\\",\\n \\\"IPAddress\\\": \\\"\\\",\\n \\\"IPPrefixLen\\\": 0,\\n \\\"IPv6Gateway\\\": \\\"\\\",\\n \\\"GlobalIPv6Address\\\": \\\"\\\",\\n \\\"GlobalIPv6PrefixLen\\\": 0,\\n \\\"MacAddress\\\": \\\"\\\"\\n }\\n }\\n }\\n }\\n]\", \"stdout_lines\": [\"[\", \" {\", \" \\\"Id\\\": \\\"509b79aaec283c68b35bed0f1d02686fa714f6bcce88661dadbdd26eed12504d\\\",\", \" \\\"Created\\\": \\\"2018-09-21T12:27:10.49665363Z\\\",\", \" \\\"Path\\\": \\\"/entrypoint.sh\\\",\", \" \\\"Args\\\": [],\", \" \\\"State\\\": {\", \" \\\"Status\\\": \\\"running\\\",\", \" \\\"Running\\\": true,\", \" \\\"Paused\\\": false,\", \" \\\"Restarting\\\": false,\", \" \\\"OOMKilled\\\": false,\", \" \\\"Dead\\\": false,\", \" \\\"Pid\\\": 44943,\", \" \\\"ExitCode\\\": 0,\", \" \\\"Error\\\": \\\"\\\",\", \" \\\"StartedAt\\\": \\\"2018-09-21T12:27:10.65506497Z\\\",\", \" \\\"FinishedAt\\\": \\\"0001-01-01T00:00:00Z\\\"\", \" },\", \" \\\"Image\\\": \\\"sha256:fa3b551f095247f90f7ace123cc14519b26abd4c042eb0270ab2452d9636a41f\\\",\", \" \\\"ResolvConfPath\\\": \\\"/var/lib/docker/containers/509b79aaec283c68b35bed0f1d02686fa714f6bcce88661dadbdd26eed12504d/resolv.conf\\\",\", \" \\\"HostnamePath\\\": \\\"/var/lib/docker/containers/509b79aaec283c68b35bed0f1d02686fa714f6bcce88661dadbdd26eed12504d/hostname\\\",\", \" \\\"HostsPath\\\": \\\"/var/lib/docker/containers/509b79aaec283c68b35bed0f1d02686fa714f6bcce88661dadbdd26eed12504d/hosts\\\",\", \" \\\"LogPath\\\": \\\"\\\",\", \" \\\"Name\\\": \\\"/ceph-mon-controller-0\\\",\", \" \\\"RestartCount\\\": 0,\", \" \\\"Driver\\\": \\\"overlay2\\\",\", \" \\\"MountLabel\\\": \\\"\\\",\", \" \\\"ProcessLabel\\\": \\\"\\\",\", \" \\\"AppArmorProfile\\\": \\\"\\\",\", \" \\\"ExecIDs\\\": null,\", \" \\\"HostConfig\\\": {\", \" \\\"Binds\\\": [\", \" \\\"/var/lib/ceph:/var/lib/ceph:z\\\",\", \" \\\"/etc/ceph:/etc/ceph:z\\\",\", \" \\\"/var/run/ceph:/var/run/ceph:z\\\",\", \" \\\"/etc/localtime:/etc/localtime:ro\\\"\", \" ],\", \" \\\"ContainerIDFile\\\": \\\"\\\",\", \" \\\"LogConfig\\\": {\", \" \\\"Type\\\": \\\"journald\\\",\", \" \\\"Config\\\": {}\", \" },\", \" \\\"NetworkMode\\\": \\\"host\\\",\", \" \\\"PortBindings\\\": {},\", \" \\\"RestartPolicy\\\": {\", \" \\\"Name\\\": \\\"no\\\",\", \" \\\"MaximumRetryCount\\\": 0\", \" },\", \" 
\\\"AutoRemove\\\": true,\", \" \\\"VolumeDriver\\\": \\\"\\\",\", \" \\\"VolumesFrom\\\": null,\", \" \\\"CapAdd\\\": null,\", \" \\\"CapDrop\\\": null,\", \" \\\"Dns\\\": [],\", \" \\\"DnsOptions\\\": [],\", \" \\\"DnsSearch\\\": [],\", \" \\\"ExtraHosts\\\": null,\", \" \\\"GroupAdd\\\": null,\", \" \\\"IpcMode\\\": \\\"\\\",\", \" \\\"Cgroup\\\": \\\"\\\",\", \" \\\"Links\\\": null,\", \" \\\"OomScoreAdj\\\": 0,\", \" \\\"PidMode\\\": \\\"\\\",\", \" \\\"Privileged\\\": false,\", \" \\\"PublishAllPorts\\\": false,\", \" \\\"ReadonlyRootfs\\\": false,\", \" \\\"SecurityOpt\\\": null,\", \" \\\"UTSMode\\\": \\\"\\\",\", \" \\\"UsernsMode\\\": \\\"\\\",\", \" \\\"ShmSize\\\": 67108864,\", \" \\\"Runtime\\\": \\\"docker-runc\\\",\", \" \\\"ConsoleSize\\\": [\", \" 0,\", \" 0\", \" ],\", \" \\\"Isolation\\\": \\\"\\\",\", \" \\\"CpuShares\\\": 0,\", \" \\\"Memory\\\": 3221225472,\", \" \\\"NanoCpus\\\": 0,\", \" \\\"CgroupParent\\\": \\\"\\\",\", \" \\\"BlkioWeight\\\": 0,\", \" \\\"BlkioWeightDevice\\\": null,\", \" \\\"BlkioDeviceReadBps\\\": null,\", \" \\\"BlkioDeviceWriteBps\\\": null,\", \" \\\"BlkioDeviceReadIOps\\\": null,\", \" \\\"BlkioDeviceWriteIOps\\\": null,\", \" \\\"CpuPeriod\\\": 0,\", \" \\\"CpuQuota\\\": 100000,\", \" \\\"CpuRealtimePeriod\\\": 0,\", \" \\\"CpuRealtimeRuntime\\\": 0,\", \" \\\"CpusetCpus\\\": \\\"\\\",\", \" \\\"CpusetMems\\\": \\\"\\\",\", \" \\\"Devices\\\": [],\", \" \\\"DiskQuota\\\": 0,\", \" \\\"KernelMemory\\\": 0,\", \" \\\"MemoryReservation\\\": 0,\", \" \\\"MemorySwap\\\": 6442450944,\", \" \\\"MemorySwappiness\\\": -1,\", \" \\\"OomKillDisable\\\": false,\", \" \\\"PidsLimit\\\": 0,\", \" \\\"Ulimits\\\": null,\", \" \\\"CpuCount\\\": 0,\", \" \\\"CpuPercent\\\": 0,\", \" \\\"IOMaximumIOps\\\": 0,\", \" \\\"IOMaximumBandwidth\\\": 0\", \" },\", \" \\\"GraphDriver\\\": {\", \" \\\"Name\\\": \\\"overlay2\\\",\", \" \\\"Data\\\": {\", \" \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/f75ba8cf619ccb639511052485b7795aa6ccc2ee3261bfe40f3db250e2bc173a-init/diff:/var/lib/docker/overlay2/647ac35ac16d9dfe175f07dd44786615796abec3ec2955371cd57b2bc31e071d/diff:/var/lib/docker/overlay2/72c93a89fa782b05dc2585f4d22d029cb845c66e38924914071ba886d94bed8c/diff:/var/lib/docker/overlay2/1c6053643a9c6bc0506bbea8ee537d1f921ebdc802eafc5cf82c4566e0c5bbd4/diff\\\",\", \" \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/f75ba8cf619ccb639511052485b7795aa6ccc2ee3261bfe40f3db250e2bc173a/merged\\\",\", \" \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/f75ba8cf619ccb639511052485b7795aa6ccc2ee3261bfe40f3db250e2bc173a/diff\\\",\", \" \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/f75ba8cf619ccb639511052485b7795aa6ccc2ee3261bfe40f3db250e2bc173a/work\\\"\", \" }\", \" },\", \" \\\"Mounts\\\": [\", \" {\", \" \\\"Type\\\": \\\"bind\\\",\", \" \\\"Source\\\": \\\"/var/lib/ceph\\\",\", \" \\\"Destination\\\": \\\"/var/lib/ceph\\\",\", \" \\\"Mode\\\": \\\"z\\\",\", \" \\\"RW\\\": true,\", \" \\\"Propagation\\\": \\\"rprivate\\\"\", \" },\", \" {\", \" \\\"Type\\\": \\\"bind\\\",\", \" \\\"Source\\\": \\\"/etc/ceph\\\",\", \" \\\"Destination\\\": \\\"/etc/ceph\\\",\", \" \\\"Mode\\\": \\\"z\\\",\", \" \\\"RW\\\": true,\", \" \\\"Propagation\\\": \\\"rprivate\\\"\", \" },\", \" {\", \" \\\"Type\\\": \\\"bind\\\",\", \" \\\"Source\\\": \\\"/var/run/ceph\\\",\", \" \\\"Destination\\\": \\\"/var/run/ceph\\\",\", \" \\\"Mode\\\": \\\"z\\\",\", \" \\\"RW\\\": true,\", \" \\\"Propagation\\\": \\\"rprivate\\\"\", \" },\", \" {\", \" \\\"Type\\\": \\\"bind\\\",\", \" \\\"Source\\\": 
\\\"/etc/localtime\\\",\", \" \\\"Destination\\\": \\\"/etc/localtime\\\",\", \" \\\"Mode\\\": \\\"ro\\\",\", \" \\\"RW\\\": false,\", \" \\\"Propagation\\\": \\\"rprivate\\\"\", \" }\", \" ],\", \" \\\"Config\\\": {\", \" \\\"Hostname\\\": \\\"controller-0\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": true,\", \" \\\"AttachStderr\\\": true,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"IP_VERSION=4\\\",\", \" \\\"MON_IP=172.17.3.16\\\",\", \" \\\"CLUSTER=ceph\\\",\", \" \\\"FSID=8fedf068-bd95-11e8-ba69-5254006eda59\\\",\", \" \\\"CEPH_PUBLIC_NETWORK=172.17.3.0/24\\\",\", \" \\\"CEPH_DAEMON=MON\\\",\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": null,\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"192.168.24.1:8787/rhceph:3-12\\\",\", \" \\\"Volumes\\\": null,\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": null,\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"stable-3.0\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\", \" \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.expose-services\\\": \\\"\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"12\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\", \" \\\"usage\\\": \\\"This image is very 
generic and does not serve a single use case. Use it as a base to build your own images.\\\",\", \" \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"NetworkSettings\\\": {\", \" \\\"Bridge\\\": \\\"\\\",\", \" \\\"SandboxID\\\": \\\"b6f9d3e5ebf8f9e507a2edf513fec904d52c3ce4a9dc538930cf89c905ec467c\\\",\", \" \\\"HairpinMode\\\": false,\", \" \\\"LinkLocalIPv6Address\\\": \\\"\\\",\", \" \\\"LinkLocalIPv6PrefixLen\\\": 0,\", \" \\\"Ports\\\": {},\", \" \\\"SandboxKey\\\": \\\"/var/run/docker/netns/default\\\",\", \" \\\"SecondaryIPAddresses\\\": null,\", \" \\\"SecondaryIPv6Addresses\\\": null,\", \" \\\"EndpointID\\\": \\\"\\\",\", \" \\\"Gateway\\\": \\\"\\\",\", \" \\\"GlobalIPv6Address\\\": \\\"\\\",\", \" \\\"GlobalIPv6PrefixLen\\\": 0,\", \" \\\"IPAddress\\\": \\\"\\\",\", \" \\\"IPPrefixLen\\\": 0,\", \" \\\"IPv6Gateway\\\": \\\"\\\",\", \" \\\"MacAddress\\\": \\\"\\\",\", \" \\\"Networks\\\": {\", \" \\\"host\\\": {\", \" \\\"IPAMConfig\\\": null,\", \" \\\"Links\\\": null,\", \" \\\"Aliases\\\": null,\", \" \\\"NetworkID\\\": \\\"cf6351ff3e7cd6c1bb62a77ff0c2ac7bebe8ea9f0d1c52e85ce35ff40982ff70\\\",\", \" \\\"EndpointID\\\": \\\"218ed1d48845d795fcd4591f871de6fec3cd8468a7e5188e3eb509498b3a2407\\\",\", \" \\\"Gateway\\\": \\\"\\\",\", \" \\\"IPAddress\\\": \\\"\\\",\", \" \\\"IPPrefixLen\\\": 0,\", \" \\\"IPv6Gateway\\\": \\\"\\\",\", \" \\\"GlobalIPv6Address\\\": \\\"\\\",\", \" \\\"GlobalIPv6PrefixLen\\\": 0,\", \" \\\"MacAddress\\\": \\\"\\\"\", \" }\", \" }\", \" }\", \" }\", \"]\"]}", "", "TASK [ceph-docker-common : inspect ceph osd container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:12", "Friday 21 September 2018 08:27:46 -0400 (0:00:00.287) 0:01:28.243 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph mds container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:21", "Friday 21 September 2018 08:27:46 -0400 (0:00:00.050) 0:01:28.293 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph rgw container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:30", "Friday 21 September 2018 08:27:46 -0400 (0:00:00.051) 0:01:28.344 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph mgr container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:39", "Friday 21 September 2018 08:27:46 -0400 (0:00:00.048) 0:01:28.393 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph rbd mirror container] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:48", "Friday 21 September 2018 08:27:46 -0400 (0:00:00.069) 0:01:28.463 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph nfs container] *************************", "task 
path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:57", "Friday 21 September 2018 08:27:46 -0400 (0:00:00.052) 0:01:28.515 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph mon container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:67", "Friday 21 September 2018 08:27:46 -0400 (0:00:00.049) 0:01:28.565 ****** ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"inspect\", \"sha256:fa3b551f095247f90f7ace123cc14519b26abd4c042eb0270ab2452d9636a41f\"], \"delta\": \"0:00:00.026086\", \"end\": \"2018-09-21 12:27:46.683263\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-09-21 12:27:46.657177\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"[\\n {\\n \\\"Id\\\": \\\"sha256:fa3b551f095247f90f7ace123cc14519b26abd4c042eb0270ab2452d9636a41f\\\",\\n \\\"RepoTags\\\": [\\n \\\"192.168.24.1:8787/rhceph:3-12\\\"\\n ],\\n \\\"RepoDigests\\\": [\\n \\\"192.168.24.1:8787/rhceph@sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\\\"\\n ],\\n \\\"Parent\\\": \\\"\\\",\\n \\\"Comment\\\": \\\"\\\",\\n \\\"Created\\\": \\\"2018-08-06T22:30:33.81313Z\\\",\\n \\\"Container\\\": \\\"\\\",\\n \\\"ContainerConfig\\\": {\\n \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": [\\n \\\"/bin/sh\\\",\\n \\\"-c\\\",\\n \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z5-b1e8f.repo'\\\"\\n ],\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"sha256:379a20daa6033d04119c4ca45fffe3e50f0cfd517d8712a222b53bea11ee4493\\\",\\n \\\"Volumes\\\": null,\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"stable-3.0\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\\n \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n 
\\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.expose-services\\\": \\\"\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"12\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\\n \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. Use it as a base to build your own images.\\\",\\n \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"DockerVersion\\\": \\\"1.12.6\\\",\\n \\\"Author\\\": \\\"\\\",\\n \\\"Config\\\": {\\n \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": null,\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"9e41cab2948f6e02bef2d4df2d2d21f082f2e9f5b5aadcb04d70201596959834\\\",\\n \\\"Volumes\\\": null,\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"stable-3.0\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\\n \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.expose-services\\\": \\\"\\\",\\n \\\"io.openshift.tags\\\": 
\\\"rhceph ceph\\\",\\n \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"12\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\\n \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. Use it as a base to build your own images.\\\",\\n \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"Architecture\\\": \\\"amd64\\\",\\n \\\"Os\\\": \\\"linux\\\",\\n \\\"Size\\\": 592066185,\\n \\\"VirtualSize\\\": 592066185,\\n \\\"GraphDriver\\\": {\\n \\\"Name\\\": \\\"overlay2\\\",\\n \\\"Data\\\": {\\n \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/72c93a89fa782b05dc2585f4d22d029cb845c66e38924914071ba886d94bed8c/diff:/var/lib/docker/overlay2/1c6053643a9c6bc0506bbea8ee537d1f921ebdc802eafc5cf82c4566e0c5bbd4/diff\\\",\\n \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/647ac35ac16d9dfe175f07dd44786615796abec3ec2955371cd57b2bc31e071d/merged\\\",\\n \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/647ac35ac16d9dfe175f07dd44786615796abec3ec2955371cd57b2bc31e071d/diff\\\",\\n \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/647ac35ac16d9dfe175f07dd44786615796abec3ec2955371cd57b2bc31e071d/work\\\"\\n }\\n },\\n \\\"RootFS\\\": {\\n \\\"Type\\\": \\\"layers\\\",\\n \\\"Layers\\\": [\\n \\\"sha256:db195156f4cd9e83cf2a76f1319d5f839cf2552ea1d23c0317931786b1f594cf\\\",\\n \\\"sha256:6e8ca199394f13d2b08b652f8281d3c2f8ad22333737e8ad2ff554f881bcd8a1\\\",\\n \\\"sha256:984d7131485eaebe7b45bb3052fae34a956316f94faf95681b8480a904179cfa\\\"\\n ]\\n }\\n }\\n]\", \"stdout_lines\": [\"[\", \" {\", \" \\\"Id\\\": \\\"sha256:fa3b551f095247f90f7ace123cc14519b26abd4c042eb0270ab2452d9636a41f\\\",\", \" \\\"RepoTags\\\": [\", \" \\\"192.168.24.1:8787/rhceph:3-12\\\"\", \" ],\", \" \\\"RepoDigests\\\": [\", \" \\\"192.168.24.1:8787/rhceph@sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\\\"\", \" ],\", \" \\\"Parent\\\": \\\"\\\",\", \" \\\"Comment\\\": \\\"\\\",\", \" \\\"Created\\\": \\\"2018-08-06T22:30:33.81313Z\\\",\", \" \\\"Container\\\": \\\"\\\",\", \" \\\"ContainerConfig\\\": {\", \" \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": [\", \" \\\"/bin/sh\\\",\", \" \\\"-c\\\",\", \" 
\\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z5-b1e8f.repo'\\\"\", \" ],\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"sha256:379a20daa6033d04119c4ca45fffe3e50f0cfd517d8712a222b53bea11ee4493\\\",\", \" \\\"Volumes\\\": null,\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"stable-3.0\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\", \" \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.expose-services\\\": \\\"\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"12\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\", \" \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\", \" \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"DockerVersion\\\": \\\"1.12.6\\\",\", \" \\\"Author\\\": \\\"\\\",\", \" \\\"Config\\\": {\", \" \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": null,\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"9e41cab2948f6e02bef2d4df2d2d21f082f2e9f5b5aadcb04d70201596959834\\\",\", \" \\\"Volumes\\\": null,\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"stable-3.0\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\", \" \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.expose-services\\\": \\\"\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"12\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\", \" \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\", \" \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"Architecture\\\": \\\"amd64\\\",\", \" \\\"Os\\\": \\\"linux\\\",\", \" \\\"Size\\\": 592066185,\", \" \\\"VirtualSize\\\": 592066185,\", \" \\\"GraphDriver\\\": {\", \" \\\"Name\\\": \\\"overlay2\\\",\", \" \\\"Data\\\": {\", \" \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/72c93a89fa782b05dc2585f4d22d029cb845c66e38924914071ba886d94bed8c/diff:/var/lib/docker/overlay2/1c6053643a9c6bc0506bbea8ee537d1f921ebdc802eafc5cf82c4566e0c5bbd4/diff\\\",\", \" \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/647ac35ac16d9dfe175f07dd44786615796abec3ec2955371cd57b2bc31e071d/merged\\\",\", \" \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/647ac35ac16d9dfe175f07dd44786615796abec3ec2955371cd57b2bc31e071d/diff\\\",\", \" \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/647ac35ac16d9dfe175f07dd44786615796abec3ec2955371cd57b2bc31e071d/work\\\"\", \" }\", \" },\", \" \\\"RootFS\\\": {\", \" \\\"Type\\\": \\\"layers\\\",\", \" \\\"Layers\\\": [\", \" \\\"sha256:db195156f4cd9e83cf2a76f1319d5f839cf2552ea1d23c0317931786b1f594cf\\\",\", \" \\\"sha256:6e8ca199394f13d2b08b652f8281d3c2f8ad22333737e8ad2ff554f881bcd8a1\\\",\", \" \\\"sha256:984d7131485eaebe7b45bb3052fae34a956316f94faf95681b8480a904179cfa\\\"\", \" ]\", \" }\", \" }\", \"]\"]}", "", "TASK [ceph-docker-common : inspecting ceph osd container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:76", "Friday 21 September 2018 08:27:46 -0400 (0:00:00.291) 0:01:28.857 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph rgw container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:85", "Friday 21 September 2018 08:27:46 -0400 (0:00:00.048) 0:01:28.905 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph mds container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:94", "Friday 21 September 2018 08:27:46 -0400 (0:00:00.057) 0:01:28.962 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph mgr container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:103", "Friday 21 September 2018 08:27:46 -0400 (0:00:00.053) 0:01:29.016 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph rbd mirror container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:112", "Friday 21 September 2018 08:27:46 -0400 (0:00:00.059) 0:01:29.076 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph nfs container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:121", "Friday 21 September 2018 08:27:47 -0400 (0:00:00.057) 0:01:29.133 ****** ", 
"skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mon_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:130", "Friday 21 September 2018 08:27:47 -0400 (0:00:00.053) 0:01:29.186 ****** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_mon_image_repodigest_before_pulling\": \"sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_osd_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:137", "Friday 21 September 2018 08:27:47 -0400 (0:00:00.097) 0:01:29.284 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mds_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:144", "Friday 21 September 2018 08:27:47 -0400 (0:00:00.052) 0:01:29.337 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rgw_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:151", "Friday 21 September 2018 08:27:47 -0400 (0:00:00.057) 0:01:29.394 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mgr_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:158", "Friday 21 September 2018 08:27:47 -0400 (0:00:00.056) 0:01:29.451 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:165", "Friday 21 September 2018 08:27:47 -0400 (0:00:00.062) 0:01:29.513 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_nfs_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:172", "Friday 21 September 2018 08:27:47 -0400 (0:00:00.056) 0:01:29.569 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : pulling 192.168.24.1:8787/rhceph:3-12 image] ********", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:179", "Friday 21 September 2018 08:27:47 -0400 (0:00:00.060) 0:01:29.630 ****** ", "ok: [controller-0] => {\"attempts\": 1, \"changed\": false, \"cmd\": [\"timeout\", \"300s\", \"docker\", \"pull\", \"192.168.24.1:8787/rhceph:3-12\"], \"delta\": \"0:00:00.033331\", \"end\": \"2018-09-21 12:27:47.757630\", \"rc\": 0, \"start\": \"2018-09-21 12:27:47.724299\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Trying to pull repository 192.168.24.1:8787/rhceph ... 
\\n3-12: Pulling from 192.168.24.1:8787/rhceph\\nDigest: sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\\nStatus: Image is up to date for 192.168.24.1:8787/rhceph:3-12\", \"stdout_lines\": [\"Trying to pull repository 192.168.24.1:8787/rhceph ... \", \"3-12: Pulling from 192.168.24.1:8787/rhceph\", \"Digest: sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\", \"Status: Image is up to date for 192.168.24.1:8787/rhceph:3-12\"]}", "", "TASK [ceph-docker-common : inspecting 192.168.24.1:8787/rhceph:3-12 image after pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:189", "Friday 21 September 2018 08:27:47 -0400 (0:00:00.295) 0:01:29.925 ****** ", "changed: [controller-0] => {\"changed\": true, \"cmd\": [\"docker\", \"inspect\", \"192.168.24.1:8787/rhceph:3-12\"], \"delta\": \"0:00:00.026204\", \"end\": \"2018-09-21 12:27:48.027683\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-09-21 12:27:48.001479\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"[\\n {\\n \\\"Id\\\": \\\"sha256:fa3b551f095247f90f7ace123cc14519b26abd4c042eb0270ab2452d9636a41f\\\",\\n \\\"RepoTags\\\": [\\n \\\"192.168.24.1:8787/rhceph:3-12\\\"\\n ],\\n \\\"RepoDigests\\\": [\\n \\\"192.168.24.1:8787/rhceph@sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\\\"\\n ],\\n \\\"Parent\\\": \\\"\\\",\\n \\\"Comment\\\": \\\"\\\",\\n \\\"Created\\\": \\\"2018-08-06T22:30:33.81313Z\\\",\\n \\\"Container\\\": \\\"\\\",\\n \\\"ContainerConfig\\\": {\\n \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": [\\n \\\"/bin/sh\\\",\\n \\\"-c\\\",\\n \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z5-b1e8f.repo'\\\"\\n ],\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"sha256:379a20daa6033d04119c4ca45fffe3e50f0cfd517d8712a222b53bea11ee4493\\\",\\n \\\"Volumes\\\": null,\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"stable-3.0\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\\n \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e 
CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.expose-services\\\": \\\"\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"12\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\\n \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. Use it as a base to build your own images.\\\",\\n \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"DockerVersion\\\": \\\"1.12.6\\\",\\n \\\"Author\\\": \\\"\\\",\\n \\\"Config\\\": {\\n \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": null,\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"9e41cab2948f6e02bef2d4df2d2d21f082f2e9f5b5aadcb04d70201596959834\\\",\\n \\\"Volumes\\\": null,\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"stable-3.0\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\\n \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n 
\\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.expose-services\\\": \\\"\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"12\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\\n \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. Use it as a base to build your own images.\\\",\\n \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"Architecture\\\": \\\"amd64\\\",\\n \\\"Os\\\": \\\"linux\\\",\\n \\\"Size\\\": 592066185,\\n \\\"VirtualSize\\\": 592066185,\\n \\\"GraphDriver\\\": {\\n \\\"Name\\\": \\\"overlay2\\\",\\n \\\"Data\\\": {\\n \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/72c93a89fa782b05dc2585f4d22d029cb845c66e38924914071ba886d94bed8c/diff:/var/lib/docker/overlay2/1c6053643a9c6bc0506bbea8ee537d1f921ebdc802eafc5cf82c4566e0c5bbd4/diff\\\",\\n \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/647ac35ac16d9dfe175f07dd44786615796abec3ec2955371cd57b2bc31e071d/merged\\\",\\n \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/647ac35ac16d9dfe175f07dd44786615796abec3ec2955371cd57b2bc31e071d/diff\\\",\\n \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/647ac35ac16d9dfe175f07dd44786615796abec3ec2955371cd57b2bc31e071d/work\\\"\\n }\\n },\\n \\\"RootFS\\\": {\\n \\\"Type\\\": \\\"layers\\\",\\n \\\"Layers\\\": [\\n \\\"sha256:db195156f4cd9e83cf2a76f1319d5f839cf2552ea1d23c0317931786b1f594cf\\\",\\n \\\"sha256:6e8ca199394f13d2b08b652f8281d3c2f8ad22333737e8ad2ff554f881bcd8a1\\\",\\n \\\"sha256:984d7131485eaebe7b45bb3052fae34a956316f94faf95681b8480a904179cfa\\\"\\n ]\\n }\\n }\\n]\", \"stdout_lines\": [\"[\", \" {\", \" \\\"Id\\\": \\\"sha256:fa3b551f095247f90f7ace123cc14519b26abd4c042eb0270ab2452d9636a41f\\\",\", \" \\\"RepoTags\\\": [\", \" \\\"192.168.24.1:8787/rhceph:3-12\\\"\", \" ],\", \" \\\"RepoDigests\\\": [\", \" \\\"192.168.24.1:8787/rhceph@sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\\\"\", \" ],\", \" \\\"Parent\\\": \\\"\\\",\", \" \\\"Comment\\\": \\\"\\\",\", \" \\\"Created\\\": \\\"2018-08-06T22:30:33.81313Z\\\",\", \" \\\"Container\\\": \\\"\\\",\", \" \\\"ContainerConfig\\\": {\", \" \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" 
\\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": [\", \" \\\"/bin/sh\\\",\", \" \\\"-c\\\",\", \" \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z5-b1e8f.repo'\\\"\", \" ],\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"sha256:379a20daa6033d04119c4ca45fffe3e50f0cfd517d8712a222b53bea11ee4493\\\",\", \" \\\"Volumes\\\": null,\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"stable-3.0\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\", \" \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.expose-services\\\": \\\"\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"12\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\", \" \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\", \" \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"DockerVersion\\\": \\\"1.12.6\\\",\", \" \\\"Author\\\": \\\"\\\",\", \" \\\"Config\\\": {\", \" \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": null,\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"9e41cab2948f6e02bef2d4df2d2d21f082f2e9f5b5aadcb04d70201596959834\\\",\", \" \\\"Volumes\\\": null,\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"stable-3.0\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\", \" \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.expose-services\\\": \\\"\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"12\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\", \" \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\", \" \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"Architecture\\\": \\\"amd64\\\",\", \" \\\"Os\\\": \\\"linux\\\",\", \" \\\"Size\\\": 592066185,\", \" \\\"VirtualSize\\\": 592066185,\", \" \\\"GraphDriver\\\": {\", \" \\\"Name\\\": \\\"overlay2\\\",\", \" \\\"Data\\\": {\", \" \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/72c93a89fa782b05dc2585f4d22d029cb845c66e38924914071ba886d94bed8c/diff:/var/lib/docker/overlay2/1c6053643a9c6bc0506bbea8ee537d1f921ebdc802eafc5cf82c4566e0c5bbd4/diff\\\",\", \" \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/647ac35ac16d9dfe175f07dd44786615796abec3ec2955371cd57b2bc31e071d/merged\\\",\", \" \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/647ac35ac16d9dfe175f07dd44786615796abec3ec2955371cd57b2bc31e071d/diff\\\",\", \" \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/647ac35ac16d9dfe175f07dd44786615796abec3ec2955371cd57b2bc31e071d/work\\\"\", \" }\", \" },\", \" \\\"RootFS\\\": {\", \" \\\"Type\\\": \\\"layers\\\",\", \" \\\"Layers\\\": [\", \" \\\"sha256:db195156f4cd9e83cf2a76f1319d5f839cf2552ea1d23c0317931786b1f594cf\\\",\", \" \\\"sha256:6e8ca199394f13d2b08b652f8281d3c2f8ad22333737e8ad2ff554f881bcd8a1\\\",\", \" \\\"sha256:984d7131485eaebe7b45bb3052fae34a956316f94faf95681b8480a904179cfa\\\"\", \" ]\", \" }\", \" }\", \"]\"]}", "", "TASK [ceph-docker-common : set_fact image_repodigest_after_pulling] ************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:194", "Friday 21 September 2018 08:27:48 -0400 (0:00:00.277) 0:01:30.203 ****** ", "ok: [controller-0] => {\"ansible_facts\": {\"image_repodigest_after_pulling\": \"sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_mon_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:200", "Friday 21 September 2018 08:27:48 -0400 (0:00:00.084) 0:01:30.288 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_osd_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:211", "Friday 21 September 2018 08:27:48 -0400 (0:00:00.061) 0:01:30.349 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mds_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:222", "Friday 21 September 2018 08:27:48 -0400 (0:00:00.050) 0:01:30.400 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rgw_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:233", "Friday 21 September 2018 08:27:48 -0400 (0:00:00.049) 0:01:30.449 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mgr_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:244", "Friday 21 September 
2018 08:27:48 -0400 (0:00:00.053) 0:01:30.503 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_updated] *************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:255", "Friday 21 September 2018 08:27:48 -0400 (0:00:00.057) 0:01:30.561 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_nfs_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:266", "Friday 21 September 2018 08:27:48 -0400 (0:00:00.052) 0:01:30.613 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : export local ceph dev image] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:277", "Friday 21 September 2018 08:27:48 -0400 (0:00:00.069) 0:01:30.683 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : copy ceph dev image file] ***************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:285", "Friday 21 September 2018 08:27:48 -0400 (0:00:00.053) 0:01:30.736 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : load ceph dev image] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:292", "Friday 21 September 2018 08:27:48 -0400 (0:00:00.050) 0:01:30.787 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : remove tmp ceph dev image file] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:297", "Friday 21 September 2018 08:27:48 -0400 (0:00:00.050) 0:01:30.837 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : get ceph version] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:84", "Friday 21 September 2018 08:27:48 -0400 (0:00:00.050) 0:01:30.888 ****** ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"run\", \"--rm\", \"--entrypoint\", \"/usr/bin/ceph\", \"192.168.24.1:8787/rhceph:3-12\", \"--version\"], \"delta\": \"0:00:00.458297\", \"end\": \"2018-09-21 12:27:49.529405\", \"rc\": 0, \"start\": \"2018-09-21 12:27:49.071108\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"ceph version 12.2.4-42.el7cp (f73642baacccbf2a3c254d1fb5f0317b933b28cf) luminous (stable)\", \"stdout_lines\": [\"ceph version 12.2.4-42.el7cp (f73642baacccbf2a3c254d1fb5f0317b933b28cf) luminous (stable)\"]}", "", "TASK [ceph-docker-common : set_fact ceph_version ceph_version.stdout.split] ****", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:90", "Friday 21 September 2018 08:27:49 -0400 (0:00:00.798) 0:01:31.686 ****** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_version\": \"12.2.4-42.el7cp\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_release jewel] ************************", 
"task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:2", "Friday 21 September 2018 08:27:49 -0400 (0:00:00.191) 0:01:31.878 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_release kraken] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:8", "Friday 21 September 2018 08:27:49 -0400 (0:00:00.059) 0:01:31.938 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_release luminous] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:14", "Friday 21 September 2018 08:27:49 -0400 (0:00:00.052) 0:01:31.991 ****** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_release\": \"luminous\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_release mimic] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:20", "Friday 21 September 2018 08:27:50 -0400 (0:00:00.257) 0:01:32.248 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_release nautilus] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:26", "Friday 21 September 2018 08:27:50 -0400 (0:00:00.050) 0:01:32.299 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : create bootstrap directories] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml:2", "Friday 21 September 2018 08:27:50 -0400 (0:00:00.047) 0:01:32.346 ****** ", "changed: [controller-0] => (item=/etc/ceph) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 117, \"state\": \"directory\", \"uid\": 64045}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 64045}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 64045}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 64045}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": 
\"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 64045}", "", "TASK [ceph-config : create ceph conf directory] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:4", "Friday 21 September 2018 08:27:51 -0400 (0:00:00.921) 0:01:33.268 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : generate ceph configuration file: ceph.conf] ***************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:12", "Friday 21 September 2018 08:27:51 -0400 (0:00:00.062) 0:01:33.331 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : create a local fetch directory if it does not exist] *******", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:38", "Friday 21 September 2018 08:27:51 -0400 (0:00:00.057) 0:01:33.388 ****** ", "ok: [controller-0 -> localhost] => {\"changed\": false, \"gid\": 42430, \"group\": \"mistral\", \"mode\": \"0755\", \"owner\": \"mistral\", \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir\", \"size\": 80, \"state\": \"directory\", \"uid\": 42430}", "", "TASK [ceph-config : generate cluster uuid] *************************************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:54", "Friday 21 September 2018 08:27:51 -0400 (0:00:00.213) 0:01:33.602 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : read cluster uuid if it already exists] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:64", "Friday 21 September 2018 08:27:51 -0400 (0:00:00.059) 0:01:33.662 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : ensure /etc/ceph exists] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:76", "Friday 21 September 2018 08:27:51 -0400 (0:00:00.056) 0:01:33.718 ****** ", "changed: [controller-0] => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 117, \"state\": \"directory\", \"uid\": 167}", "", "TASK [ceph-config : generate ceph.conf configuration file] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:84", "Friday 21 September 2018 08:27:51 -0400 (0:00:00.251) 0:01:33.970 ****** ", "ok: [controller-0] => {\"changed\": false, \"checksum\": \"57e5c5d755a630f2e4e9c6766a186478cc210a6a\", \"dest\": \"/etc/ceph/ceph.conf\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"3d1c4a58fc488cca7c5fd19c6454272e\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 1103, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1537532871.92-238172610611786/source\", \"state\": \"file\", \"uid\": 0}", "", "TASK [ceph-config : set fsid fact when generate_fsid = true] *******************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:102", "Friday 21 September 2018 08:27:52 -0400 (0:00:00.593) 0:01:34.564 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK 
[ceph-mgr : set_fact docker_exec_cmd] *************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/main.yml:2", "Friday 21 September 2018 08:27:52 -0400 (0:00:00.056) 0:01:34.620 ****** ", "ok: [controller-0] => {\"ansible_facts\": {\"docker_exec_cmd_mgr\": \"docker exec ceph-mon-controller-0\"}, \"changed\": false}", "", "TASK [ceph-mgr : create mgr directory] *****************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/common.yml:2", "Friday 21 September 2018 08:27:52 -0400 (0:00:00.129) 0:01:34.750 ****** ", "ok: [controller-0] => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mgr/ceph-controller-0\", \"secontext\": \"system_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "", "TASK [ceph-mgr : copy ceph keyring(s) if needed] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/common.yml:10", "Friday 21 September 2018 08:27:52 -0400 (0:00:00.254) 0:01:35.004 ****** ", "changed: [controller-0] => (item={u'dest': u'/var/lib/ceph/mgr/ceph-controller-0/keyring', u'name': u'/etc/ceph/ceph.mgr.controller-0.keyring', u'copy_key': True}) => {\"changed\": true, \"checksum\": \"f02fcb991c5a53a3bf474c15b6a514c8356b9c69\", \"dest\": \"/var/lib/ceph/mgr/ceph-controller-0/keyring\", \"gid\": 167, \"group\": \"167\", \"item\": {\"copy_key\": true, \"dest\": \"/var/lib/ceph/mgr/ceph-controller-0/keyring\", \"name\": \"/etc/ceph/ceph.mgr.controller-0.keyring\"}, \"md5sum\": \"d7ba913d6ab2c770a0269d55efc01b88\", \"mode\": \"0600\", \"owner\": \"167\", \"secontext\": \"system_u:object_r:var_lib_t:s0\", \"size\": 67, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1537532872.95-74123356958813/source\", \"state\": \"file\", \"uid\": 167}", "skipping: [controller-0] => (item={u'dest': u'/etc/ceph/ceph.client.admin.keyring', u'name': u'/etc/ceph/ceph.client.admin.keyring', u'copy_key': False}) => {\"changed\": false, \"item\": {\"copy_key\": false, \"dest\": \"/etc/ceph/ceph.client.admin.keyring\", \"name\": \"/etc/ceph/ceph.client.admin.keyring\"}, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mgr : set mgr key permissions] **************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/common.yml:24", "Friday 21 September 2018 08:27:53 -0400 (0:00:00.560) 0:01:35.565 ****** ", "ok: [controller-0] => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"mode\": \"0600\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mgr/ceph-controller-0/keyring\", \"secontext\": \"system_u:object_r:var_lib_t:s0\", \"size\": 67, \"state\": \"file\", \"uid\": 167}", "", "TASK [ceph-mgr : install ceph-mgr package on RedHat or SUSE] *******************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml:2", "Friday 21 September 2018 08:27:53 -0400 (0:00:00.256) 0:01:35.822 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mgr : install ceph mgr for debian] **********************************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml:9", "Friday 21 September 2018 08:27:53 -0400 (0:00:00.061) 0:01:35.884 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mgr : ensure systemd service override directory exists] *************", "task path: 
/usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml:17", "Friday 21 September 2018 08:27:53 -0400 (0:00:00.058) 0:01:35.942 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mgr : add ceph-mgr systemd service overrides] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml:25", "Friday 21 September 2018 08:27:53 -0400 (0:00:00.055) 0:01:35.998 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mgr : start and add that the mgr service to the init sequence] ******", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml:35", "Friday 21 September 2018 08:27:53 -0400 (0:00:00.052) 0:01:36.051 ****** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mgr : generate systemd unit file] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/docker/start_docker_mgr.yml:2", "Friday 21 September 2018 08:27:53 -0400 (0:00:00.057) 0:01:36.108 ****** ", "NOTIFIED HANDLER ceph-defaults : set _mgr_handler_called before restart for controller-0", "NOTIFIED HANDLER ceph-defaults : copy mgr restart script for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - non container for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - container for controller-0", "NOTIFIED HANDLER ceph-defaults : set _mgr_handler_called after restart for controller-0", "changed: [controller-0] => {\"changed\": true, \"checksum\": \"168504b73edc17939666d0ef559eaab44f0382c8\", \"dest\": \"/etc/systemd/system/ceph-mgr@.service\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"35d5093713655bbf808450ce1bb2b512\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:systemd_unit_file_t:s0\", \"size\": 734, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1537532874.06-126295042630263/source\", \"state\": \"file\", \"uid\": 0}", "", "TASK [ceph-mgr : systemd start mgr container] **********************************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/docker/start_docker_mgr.yml:13", "Friday 21 September 2018 08:27:54 -0400 (0:00:00.873) 0:01:36.982 ****** ", "changed: [controller-0] => {\"changed\": true, \"enabled\": true, \"name\": \"ceph-mgr@controller-0\", \"state\": \"started\", \"status\": {\"ActiveEnterTimestampMonotonic\": \"0\", \"ActiveExitTimestampMonotonic\": \"0\", \"ActiveState\": \"inactive\", \"After\": \"docker.service systemd-journald.socket basic.target system-ceph\\\\x5cx2dmgr.slice\", \"AllowIsolate\": \"no\", \"AmbientCapabilities\": \"0\", \"AssertResult\": \"no\", \"AssertTimestampMonotonic\": \"0\", \"Before\": \"shutdown.target\", \"BlockIOAccounting\": \"no\", \"BlockIOWeight\": \"18446744073709551615\", \"CPUAccounting\": \"no\", \"CPUQuotaPerSecUSec\": \"infinity\", \"CPUSchedulingPolicy\": \"0\", \"CPUSchedulingPriority\": \"0\", \"CPUSchedulingResetOnFork\": \"no\", \"CPUShares\": \"18446744073709551615\", \"CanIsolate\": \"no\", \"CanReload\": \"no\", \"CanStart\": \"yes\", \"CanStop\": \"yes\", \"CapabilityBoundingSet\": \"18446744073709551615\", \"ConditionResult\": \"no\", \"ConditionTimestampMonotonic\": \"0\", \"Conflicts\": \"shutdown.target\", \"ControlPID\": \"0\", \"DefaultDependencies\": \"yes\", \"Delegate\": \"no\", \"Description\": \"Ceph Manager\", 
\"DevicePolicy\": \"auto\", \"EnvironmentFile\": \"/etc/environment (ignore_errors=yes)\", \"ExecMainCode\": \"0\", \"ExecMainExitTimestampMonotonic\": \"0\", \"ExecMainPID\": \"0\", \"ExecMainStartTimestampMonotonic\": \"0\", \"ExecMainStatus\": \"0\", \"ExecStart\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker run --rm --net=host --memory=1g --cpu-quota=100000 -v /var/lib/ceph:/var/lib/ceph:z -v /etc/ceph:/etc/ceph:z -v /var/run/ceph:/var/run/ceph:z -v /etc/localtime:/etc/localtime:ro -e CLUSTER=ceph -e CEPH_DAEMON=MGR -e MGR_DASHBOARD=0 --name=ceph-mgr-controller-0 192.168.24.1:8787/rhceph:3-12 ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStartPre\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker rm ceph-mgr-controller-0 ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStopPost\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker stop ceph-mgr-controller-0 ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"FailureAction\": \"none\", \"FileDescriptorStoreMax\": \"0\", \"FragmentPath\": \"/etc/systemd/system/ceph-mgr@.service\", \"GuessMainPID\": \"yes\", \"IOScheduling\": \"0\", \"Id\": \"ceph-mgr@controller-0.service\", \"IgnoreOnIsolate\": \"no\", \"IgnoreOnSnapshot\": \"no\", \"IgnoreSIGPIPE\": \"yes\", \"InactiveEnterTimestampMonotonic\": \"0\", \"InactiveExitTimestampMonotonic\": \"0\", \"JobTimeoutAction\": \"none\", \"JobTimeoutUSec\": \"0\", \"KillMode\": \"control-group\", \"KillSignal\": \"15\", \"LimitAS\": \"18446744073709551615\", \"LimitCORE\": \"18446744073709551615\", \"LimitCPU\": \"18446744073709551615\", \"LimitDATA\": \"18446744073709551615\", \"LimitFSIZE\": \"18446744073709551615\", \"LimitLOCKS\": \"18446744073709551615\", \"LimitMEMLOCK\": \"65536\", \"LimitMSGQUEUE\": \"819200\", \"LimitNICE\": \"0\", \"LimitNOFILE\": \"4096\", \"LimitNPROC\": \"127798\", \"LimitRSS\": \"18446744073709551615\", \"LimitRTPRIO\": \"0\", \"LimitRTTIME\": \"18446744073709551615\", \"LimitSIGPENDING\": \"127798\", \"LimitSTACK\": \"18446744073709551615\", \"LoadState\": \"loaded\", \"MainPID\": \"0\", \"MemoryAccounting\": \"no\", \"MemoryCurrent\": \"18446744073709551615\", \"MemoryLimit\": \"18446744073709551615\", \"MountFlags\": \"0\", \"Names\": \"ceph-mgr@controller-0.service\", \"NeedDaemonReload\": \"no\", \"Nice\": \"0\", \"NoNewPrivileges\": \"no\", \"NonBlocking\": \"no\", \"NotifyAccess\": \"none\", \"OOMScoreAdjust\": \"0\", \"OnFailureJobMode\": \"replace\", \"PermissionsStartOnly\": \"no\", \"PrivateDevices\": \"no\", \"PrivateNetwork\": \"no\", \"PrivateTmp\": \"no\", \"ProtectHome\": \"no\", \"ProtectSystem\": \"no\", \"RefuseManualStart\": \"no\", \"RefuseManualStop\": \"no\", \"RemainAfterExit\": \"no\", \"Requires\": \"basic.target\", \"Restart\": \"always\", \"RestartUSec\": \"10s\", \"Result\": \"success\", \"RootDirectoryStartOnly\": \"no\", \"RuntimeDirectoryMode\": \"0755\", \"SameProcessGroup\": \"no\", \"SecureBits\": \"0\", \"SendSIGHUP\": \"no\", \"SendSIGKILL\": \"yes\", \"Slice\": \"system-ceph\\\\x5cx2dmgr.slice\", \"StandardError\": \"inherit\", \"StandardInput\": \"null\", \"StandardOutput\": \"journal\", \"StartLimitAction\": \"none\", \"StartLimitBurst\": \"5\", \"StartLimitInterval\": \"10000000\", \"StartupBlockIOWeight\": \"18446744073709551615\", \"StartupCPUShares\": \"18446744073709551615\", \"StatusErrno\": \"0\", \"StopWhenUnneeded\": \"no\", \"SubState\": \"dead\", 
\"SyslogLevelPrefix\": \"yes\", \"SyslogPriority\": \"30\", \"SystemCallErrorNumber\": \"0\", \"TTYReset\": \"no\", \"TTYVHangup\": \"no\", \"TTYVTDisallocate\": \"no\", \"TasksAccounting\": \"no\", \"TasksCurrent\": \"18446744073709551615\", \"TasksMax\": \"18446744073709551615\", \"TimeoutStartUSec\": \"2min\", \"TimeoutStopUSec\": \"15s\", \"TimerSlackNSec\": \"50000\", \"Transient\": \"no\", \"Type\": \"simple\", \"UMask\": \"0022\", \"UnitFilePreset\": \"disabled\", \"UnitFileState\": \"disabled\", \"Wants\": \"system-ceph\\\\x5cx2dmgr.slice\", \"WatchdogTimestampMonotonic\": \"0\", \"WatchdogUSec\": \"0\"}}", "", "TASK [ceph-mgr : get enabled modules from ceph-mgr] ****************************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/main.yml:19", "Friday 21 September 2018 08:27:55 -0400 (0:00:00.539) 0:01:37.521 ****** ", "changed: [controller-0 -> 192.168.24.18] => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"--format\", \"json\", \"mgr\", \"module\", \"ls\"], \"delta\": \"0:00:00.401929\", \"end\": \"2018-09-21 12:27:56.023830\", \"rc\": 0, \"start\": \"2018-09-21 12:27:55.621901\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\\n{\\\"enabled_modules\\\":[\\\"balancer\\\",\\\"restful\\\",\\\"status\\\"],\\\"disabled_modules\\\":[\\\"dashboard\\\",\\\"influx\\\",\\\"localpool\\\",\\\"prometheus\\\",\\\"selftest\\\",\\\"zabbix\\\"]}\", \"stdout_lines\": [\"\", \"{\\\"enabled_modules\\\":[\\\"balancer\\\",\\\"restful\\\",\\\"status\\\"],\\\"disabled_modules\\\":[\\\"dashboard\\\",\\\"influx\\\",\\\"localpool\\\",\\\"prometheus\\\",\\\"selftest\\\",\\\"zabbix\\\"]}\"]}", "", "TASK [ceph-mgr : set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/main.yml:26", "Friday 21 September 2018 08:27:56 -0400 (0:00:00.666) 0:01:38.188 ****** ", "ok: [controller-0] => {\"ansible_facts\": {\"_ceph_mgr_modules\": {\"disabled_modules\": [\"dashboard\", \"influx\", \"localpool\", \"prometheus\", \"selftest\", \"zabbix\"], \"enabled_modules\": [\"balancer\", \"restful\", \"status\"]}}, \"changed\": false}", "", "TASK [ceph-mgr : set _disabled_ceph_mgr_modules fact] **************************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/main.yml:32", "Friday 21 September 2018 08:27:56 -0400 (0:00:00.087) 0:01:38.275 ****** ", "ok: [controller-0] => {\"ansible_facts\": {\"_disabled_ceph_mgr_modules\": [\"dashboard\", \"influx\", \"localpool\", \"prometheus\", \"selftest\", \"zabbix\"]}, \"changed\": false}", "", "TASK [ceph-mgr : disable ceph mgr enabled modules] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/main.yml:38", "Friday 21 September 2018 08:27:56 -0400 (0:00:00.127) 0:01:38.402 ****** ", "changed: [controller-0 -> 192.168.24.18] => (item=balancer) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"mgr\", \"module\", \"disable\", \"balancer\"], \"delta\": \"0:00:01.287601\", \"end\": \"2018-09-21 12:27:57.788535\", \"item\": \"balancer\", \"rc\": 0, \"start\": \"2018-09-21 12:27:56.500934\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "changed: [controller-0 -> 192.168.24.18] => (item=restful) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"mgr\", \"module\", \"disable\", \"restful\"], 
\"delta\": \"0:00:00.816448\", \"end\": \"2018-09-21 12:27:58.784407\", \"item\": \"restful\", \"rc\": 0, \"start\": \"2018-09-21 12:27:57.967959\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "skipping: [controller-0] => (item=status) => {\"changed\": false, \"item\": \"status\", \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mgr : add modules to ceph-mgr] **************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/main.yml:49", "Friday 21 September 2018 08:27:58 -0400 (0:00:02.587) 0:01:40.990 ****** ", "skipping: [controller-0] => (item=status) => {\"changed\": false, \"item\": \"status\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _mgr_handler_called before restart] *******", "Friday 21 September 2018 08:27:58 -0400 (0:00:00.032) 0:01:41.022 ****** ", "ok: [controller-0] => {\"ansible_facts\": {\"_mgr_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy mgr restart script] **********************", "Friday 21 September 2018 08:27:59 -0400 (0:00:00.176) 0:01:41.199 ****** ", "ok: [controller-0] => {\"changed\": false, \"checksum\": \"3b92c07facdbaa789b36f850d92d7444e2bb6a27\", \"dest\": \"/tmp/restart_mgr_daemon.sh\", \"gid\": 0, \"group\": \"root\", \"mode\": \"0750\", \"owner\": \"root\", \"path\": \"/tmp/restart_mgr_daemon.sh\", \"secontext\": \"unconfined_u:object_r:user_tmp_t:s0\", \"size\": 843, \"state\": \"file\", \"uid\": 0}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - non container] ***", "Friday 21 September 2018 08:27:59 -0400 (0:00:00.593) 0:01:41.793 ****** ", "skipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - container] *******", "Friday 21 September 2018 08:27:59 -0400 (0:00:00.095) 0:01:41.888 ****** ", "skipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _mgr_handler_called after restart] ********", "Friday 21 September 2018 08:27:59 -0400 (0:00:00.139) 0:01:42.027 ****** ", "ok: [controller-0] => {\"ansible_facts\": {\"_mgr_handler_called\": false}, \"changed\": false}", "META: ran handlers", "", "TASK [set ceph manager install 'Complete'] *************************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:130", "Friday 21 September 2018 08:28:00 -0400 (0:00:00.295) 0:01:42.322 ****** ", "ok: [controller-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_mgr\": {\"end\": \"20180921082800Z\", \"status\": \"Complete\"}}, \"per_host\": false}, \"changed\": false}", "META: ran handlers", "", "PLAY [osds] ********************************************************************", "", "TASK [set ceph osd install 'In Progress'] **************************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:142", "Friday 21 September 2018 08:28:00 -0400 (0:00:00.160) 0:01:42.483 ****** ", "ok: [ceph-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_osd\": {\"start\": \"20180921082800Z\", \"status\": \"In Progress\"}}, \"per_host\": false}, \"changed\": false}", "META: ran handlers", "", "TASK [ceph-defaults : check for a mon container] 
*******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:2", "Friday 21 September 2018 08:28:00 -0400 (0:00:00.082) 0:01:42.565 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for an osd container] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:11", "Friday 21 September 2018 08:28:00 -0400 (0:00:00.044) 0:01:42.610 ****** ", "ok: [ceph-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-osd-ceph-0\"], \"delta\": \"0:00:00.030174\", \"end\": \"2018-09-21 12:28:00.725215\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-09-21 12:28:00.695041\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-defaults : check for a mds container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:20", "Friday 21 September 2018 08:28:00 -0400 (0:00:00.270) 0:01:42.881 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a rgw container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:29", "Friday 21 September 2018 08:28:00 -0400 (0:00:00.046) 0:01:42.927 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a mgr container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:38", "Friday 21 September 2018 08:28:00 -0400 (0:00:00.046) 0:01:42.973 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a rbd mirror container] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:47", "Friday 21 September 2018 08:28:00 -0400 (0:00:00.050) 0:01:43.024 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a nfs container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:56", "Friday 21 September 2018 08:28:00 -0400 (0:00:00.053) 0:01:43.078 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph mon socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:2", "Friday 21 September 2018 08:28:01 -0400 (0:00:00.053) 0:01:43.131 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph mon socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:11", "Friday 21 September 2018 08:28:01 -0400 (0:00:00.050) 0:01:43.181 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph mon socket if exists and not used by a process] ***", "task path: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:21", "Friday 21 September 2018 08:28:01 -0400 (0:00:00.048) 0:01:43.229 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph osd socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:30", "Friday 21 September 2018 08:28:01 -0400 (0:00:00.047) 0:01:43.277 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph osd socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:40", "Friday 21 September 2018 08:28:01 -0400 (0:00:00.049) 0:01:43.327 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph osd socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:50", "Friday 21 September 2018 08:28:01 -0400 (0:00:00.049) 0:01:43.376 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph mds socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:59", "Friday 21 September 2018 08:28:01 -0400 (0:00:00.052) 0:01:43.428 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph mds socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:69", "Friday 21 September 2018 08:28:01 -0400 (0:00:00.046) 0:01:43.475 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph mds socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:79", "Friday 21 September 2018 08:28:01 -0400 (0:00:00.045) 0:01:43.521 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph rgw socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:88", "Friday 21 September 2018 08:28:01 -0400 (0:00:00.044) 0:01:43.565 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph rgw socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:98", "Friday 21 September 2018 08:28:01 -0400 (0:00:00.048) 0:01:43.614 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph rgw socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:108", "Friday 21 September 2018 08:28:01 -0400 (0:00:00.045) 0:01:43.660 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a 
ceph mgr socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:117", "Friday 21 September 2018 08:28:01 -0400 (0:00:00.045) 0:01:43.705 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph mgr socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:127", "Friday 21 September 2018 08:28:01 -0400 (0:00:00.046) 0:01:43.752 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph mgr socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:137", "Friday 21 September 2018 08:28:01 -0400 (0:00:00.046) 0:01:43.798 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph rbd mirror socket] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:146", "Friday 21 September 2018 08:28:01 -0400 (0:00:00.045) 0:01:43.844 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph rbd mirror socket is in-use] ***********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:156", "Friday 21 September 2018 08:28:01 -0400 (0:00:00.044) 0:01:43.888 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph rbd mirror socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:166", "Friday 21 September 2018 08:28:01 -0400 (0:00:00.049) 0:01:43.938 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph nfs ganesha socket] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:175", "Friday 21 September 2018 08:28:01 -0400 (0:00:00.045) 0:01:43.983 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph nfs ganesha socket is in-use] **********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:184", "Friday 21 September 2018 08:28:01 -0400 (0:00:00.043) 0:01:44.027 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph nfs ganesha socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:194", "Friday 21 September 2018 08:28:01 -0400 (0:00:00.044) 0:01:44.072 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if it is atomic host] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:2", "Friday 21 September 2018 08:28:02 -0400 (0:00:00.047) 0:01:44.119 ****** ", "ok: [ceph-0] => {\"changed\": false, \"stat\": {\"exists\": false}}", 
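
[Note: every check_socket_non_container.yml task above is skipped on ceph-0 ("Conditional result was False") because this deployment is containerized; ceph-ansible detects running daemons through the docker container probes that open the play instead. A minimal Ansible sketch of that probe, reconstructed from the logged command "docker ps -q --filter=name=ceph-osd-ceph-0" — the role's actual task file is not part of this log, so the task layout and the register name are assumptions:

    # Sketch of the containerized daemon check, as inferred from the logged
    # "check for an osd container" task. Empty stdout (as seen above) just
    # means no such container is running on the host yet.
    - name: check for an osd container
      command: "docker ps -q --filter=name=ceph-osd-{{ ansible_hostname }}"
      register: ceph_osd_container_stat    # register name is an assumption
      changed_when: false                  # matches the logged "changed": false
      failed_when: false                   # matches "failed_when_result": false

Later conditionals in the role can then key off whether the registered stdout is empty, which is consistent with the skip/ok pattern recorded in this log.]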
"", "TASK [ceph-defaults : set_fact is_atomic] **************************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:7", "Friday 21 September 2018 08:28:02 -0400 (0:00:00.236) 0:01:44.355 ****** ", "ok: [ceph-0] => {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact monitor_name ansible_hostname] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:11", "Friday 21 September 2018 08:28:02 -0400 (0:00:00.079) 0:01:44.435 ****** ", "ok: [ceph-0] => {\"ansible_facts\": {\"monitor_name\": \"ceph-0\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact monitor_name ansible_fqdn] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:17", "Friday 21 September 2018 08:28:02 -0400 (0:00:00.082) 0:01:44.518 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact docker_exec_cmd] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:23", "Friday 21 September 2018 08:28:02 -0400 (0:00:00.079) 0:01:44.598 ****** ", "ok: [ceph-0 -> 192.168.24.18] => {\"ansible_facts\": {\"docker_exec_cmd\": \"docker exec ceph-mon-controller-0\"}, \"changed\": false}", "", "TASK [ceph-defaults : is ceph running already?] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:34", "Friday 21 September 2018 08:28:02 -0400 (0:00:00.164) 0:01:44.762 ****** ", "ok: [ceph-0 -> 192.168.24.18] => {\"changed\": false, \"cmd\": [\"timeout\", \"5\", \"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"-s\", \"-f\", \"json\"], \"delta\": \"0:00:00.340932\", \"end\": \"2018-09-21 12:28:03.194717\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-09-21 12:28:02.853785\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\\n{\\\"fsid\\\":\\\"8fedf068-bd95-11e8-ba69-5254006eda59\\\",\\\"health\\\":{\\\"checks\\\":{},\\\"status\\\":\\\"HEALTH_OK\\\",\\\"summary\\\":[{\\\"severity\\\":\\\"HEALTH_WARN\\\",\\\"summary\\\":\\\"'ceph health' JSON format has changed in luminous. If you see this your monitoring system is scraping the wrong fields. 
Disable this with 'mon health preluminous compat warning = false'\\\"}],\\\"overall_status\\\":\\\"HEALTH_WARN\\\"},\\\"election_epoch\\\":3,\\\"quorum\\\":[0],\\\"quorum_names\\\":[\\\"controller-0\\\"],\\\"monmap\\\":{\\\"epoch\\\":1,\\\"fsid\\\":\\\"8fedf068-bd95-11e8-ba69-5254006eda59\\\",\\\"modified\\\":\\\"2018-09-21 12:27:11.445099\\\",\\\"created\\\":\\\"2018-09-21 12:27:11.445099\\\",\\\"features\\\":{\\\"persistent\\\":[\\\"kraken\\\",\\\"luminous\\\"],\\\"optional\\\":[]},\\\"mons\\\":[{\\\"rank\\\":0,\\\"name\\\":\\\"controller-0\\\",\\\"addr\\\":\\\"172.17.3.16:6789/0\\\",\\\"public_addr\\\":\\\"172.17.3.16:6789/0\\\"}]},\\\"osdmap\\\":{\\\"osdmap\\\":{\\\"epoch\\\":1,\\\"num_osds\\\":0,\\\"num_up_osds\\\":0,\\\"num_in_osds\\\":0,\\\"full\\\":false,\\\"nearfull\\\":false,\\\"num_remapped_pgs\\\":0}},\\\"pgmap\\\":{\\\"pgs_by_state\\\":[],\\\"num_pgs\\\":0,\\\"num_pools\\\":0,\\\"num_objects\\\":0,\\\"data_bytes\\\":0,\\\"bytes_used\\\":0,\\\"bytes_avail\\\":0,\\\"bytes_total\\\":0},\\\"fsmap\\\":{\\\"epoch\\\":1,\\\"by_rank\\\":[]},\\\"mgrmap\\\":{\\\"epoch\\\":7,\\\"active_gid\\\":4104,\\\"active_name\\\":\\\"controller-0\\\",\\\"active_addr\\\":\\\"172.17.3.16:6800/79\\\",\\\"available\\\":true,\\\"standbys\\\":[],\\\"modules\\\":[\\\"status\\\"],\\\"available_modules\\\":[\\\"balancer\\\",\\\"dashboard\\\",\\\"influx\\\",\\\"localpool\\\",\\\"prometheus\\\",\\\"restful\\\",\\\"selftest\\\",\\\"status\\\",\\\"zabbix\\\"],\\\"services\\\":{}},\\\"servicemap\\\":{\\\"epoch\\\":1,\\\"modified\\\":\\\"0.000000\\\",\\\"services\\\":{}}}\", \"stdout_lines\": [\"\", \"{\\\"fsid\\\":\\\"8fedf068-bd95-11e8-ba69-5254006eda59\\\",\\\"health\\\":{\\\"checks\\\":{},\\\"status\\\":\\\"HEALTH_OK\\\",\\\"summary\\\":[{\\\"severity\\\":\\\"HEALTH_WARN\\\",\\\"summary\\\":\\\"'ceph health' JSON format has changed in luminous. If you see this your monitoring system is scraping the wrong fields. 
Disable this with 'mon health preluminous compat warning = false'\\\"}],\\\"overall_status\\\":\\\"HEALTH_WARN\\\"},\\\"election_epoch\\\":3,\\\"quorum\\\":[0],\\\"quorum_names\\\":[\\\"controller-0\\\"],\\\"monmap\\\":{\\\"epoch\\\":1,\\\"fsid\\\":\\\"8fedf068-bd95-11e8-ba69-5254006eda59\\\",\\\"modified\\\":\\\"2018-09-21 12:27:11.445099\\\",\\\"created\\\":\\\"2018-09-21 12:27:11.445099\\\",\\\"features\\\":{\\\"persistent\\\":[\\\"kraken\\\",\\\"luminous\\\"],\\\"optional\\\":[]},\\\"mons\\\":[{\\\"rank\\\":0,\\\"name\\\":\\\"controller-0\\\",\\\"addr\\\":\\\"172.17.3.16:6789/0\\\",\\\"public_addr\\\":\\\"172.17.3.16:6789/0\\\"}]},\\\"osdmap\\\":{\\\"osdmap\\\":{\\\"epoch\\\":1,\\\"num_osds\\\":0,\\\"num_up_osds\\\":0,\\\"num_in_osds\\\":0,\\\"full\\\":false,\\\"nearfull\\\":false,\\\"num_remapped_pgs\\\":0}},\\\"pgmap\\\":{\\\"pgs_by_state\\\":[],\\\"num_pgs\\\":0,\\\"num_pools\\\":0,\\\"num_objects\\\":0,\\\"data_bytes\\\":0,\\\"bytes_used\\\":0,\\\"bytes_avail\\\":0,\\\"bytes_total\\\":0},\\\"fsmap\\\":{\\\"epoch\\\":1,\\\"by_rank\\\":[]},\\\"mgrmap\\\":{\\\"epoch\\\":7,\\\"active_gid\\\":4104,\\\"active_name\\\":\\\"controller-0\\\",\\\"active_addr\\\":\\\"172.17.3.16:6800/79\\\",\\\"available\\\":true,\\\"standbys\\\":[],\\\"modules\\\":[\\\"status\\\"],\\\"available_modules\\\":[\\\"balancer\\\",\\\"dashboard\\\",\\\"influx\\\",\\\"localpool\\\",\\\"prometheus\\\",\\\"restful\\\",\\\"selftest\\\",\\\"status\\\",\\\"zabbix\\\"],\\\"services\\\":{}},\\\"servicemap\\\":{\\\"epoch\\\":1,\\\"modified\\\":\\\"0.000000\\\",\\\"services\\\":{}}}\"]}", "", "TASK [ceph-defaults : check if /var/lib/mistral/overcloud/ceph-ansible/fetch_dir directory exists] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:47", "Friday 21 September 2018 08:28:03 -0400 (0:00:00.597) 0:01:45.360 ****** ", "ok: [ceph-0 -> localhost] => {\"changed\": false, \"stat\": {\"exists\": false}}", "", "TASK [ceph-defaults : set_fact ceph_current_status rc 1] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:57", "Friday 21 September 2018 08:28:03 -0400 (0:00:00.197) 0:01:45.557 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : create a local fetch directory if it does not exist] *****", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:64", "Friday 21 September 2018 08:28:03 -0400 (0:00:00.053) 0:01:45.611 ****** ", "ok: [ceph-0 -> localhost] => {\"changed\": false, \"gid\": 42430, \"group\": \"mistral\", \"mode\": \"0755\", \"owner\": \"mistral\", \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir\", \"size\": 80, \"state\": \"directory\", \"uid\": 42430}", "", "TASK [ceph-defaults : set_fact ceph_current_status (convert to json)] **********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:74", "Friday 21 September 2018 08:28:03 -0400 (0:00:00.214) 0:01:45.825 ****** ", "ok: [ceph-0] => {\"ansible_facts\": {\"ceph_current_status\": {\"election_epoch\": 3, \"fsid\": \"8fedf068-bd95-11e8-ba69-5254006eda59\", \"fsmap\": {\"by_rank\": [], \"epoch\": 1}, \"health\": {\"checks\": {}, \"overall_status\": \"HEALTH_WARN\", \"status\": \"HEALTH_OK\", \"summary\": [{\"severity\": \"HEALTH_WARN\", \"summary\": \"'ceph health' JSON format has changed in luminous. If you see this your monitoring system is scraping the wrong fields. 
Disable this with 'mon health preluminous compat warning = false'\"}]}, \"mgrmap\": {\"active_addr\": \"172.17.3.16:6800/79\", \"active_gid\": 4104, \"active_name\": \"controller-0\", \"available\": true, \"available_modules\": [\"balancer\", \"dashboard\", \"influx\", \"localpool\", \"prometheus\", \"restful\", \"selftest\", \"status\", \"zabbix\"], \"epoch\": 7, \"modules\": [\"status\"], \"services\": {}, \"standbys\": []}, \"monmap\": {\"created\": \"2018-09-21 12:27:11.445099\", \"epoch\": 1, \"features\": {\"optional\": [], \"persistent\": [\"kraken\", \"luminous\"]}, \"fsid\": \"8fedf068-bd95-11e8-ba69-5254006eda59\", \"modified\": \"2018-09-21 12:27:11.445099\", \"mons\": [{\"addr\": \"172.17.3.16:6789/0\", \"name\": \"controller-0\", \"public_addr\": \"172.17.3.16:6789/0\", \"rank\": 0}]}, \"osdmap\": {\"osdmap\": {\"epoch\": 1, \"full\": false, \"nearfull\": false, \"num_in_osds\": 0, \"num_osds\": 0, \"num_remapped_pgs\": 0, \"num_up_osds\": 0}}, \"pgmap\": {\"bytes_avail\": 0, \"bytes_total\": 0, \"bytes_used\": 0, \"data_bytes\": 0, \"num_objects\": 0, \"num_pgs\": 0, \"num_pools\": 0, \"pgs_by_state\": []}, \"quorum\": [0], \"quorum_names\": [\"controller-0\"], \"servicemap\": {\"epoch\": 1, \"modified\": \"0.000000\", \"services\": {}}}}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact fsid from ceph_current_status] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:81", "Friday 21 September 2018 08:28:03 -0400 (0:00:00.083) 0:01:45.909 ****** ", "ok: [ceph-0] => {\"ansible_facts\": {\"fsid\": \"8fedf068-bd95-11e8-ba69-5254006eda59\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact ceph_release ceph_stable_release] ***************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:88", "Friday 21 September 2018 08:28:03 -0400 (0:00:00.073) 0:01:45.983 ****** ", "ok: [ceph-0] => {\"ansible_facts\": {\"ceph_release\": \"dummy\"}, \"changed\": false}", "", "TASK [ceph-defaults : generate cluster fsid] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:92", "Friday 21 September 2018 08:28:03 -0400 (0:00:00.077) 0:01:46.061 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : reuse cluster fsid when cluster is already running] ******", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:103", "Friday 21 September 2018 08:28:03 -0400 (0:00:00.046) 0:01:46.107 ****** ", "ok: [ceph-0 -> localhost] => {\"changed\": false, \"cmd\": \"echo 8fedf068-bd95-11e8-ba69-5254006eda59 | tee /var/lib/mistral/overcloud/ceph-ansible/fetch_dir/ceph_cluster_uuid.conf\", \"rc\": 0, \"stdout\": \"skipped, since /var/lib/mistral/overcloud/ceph-ansible/fetch_dir/ceph_cluster_uuid.conf exists\", \"stdout_lines\": [\"skipped, since /var/lib/mistral/overcloud/ceph-ansible/fetch_dir/ceph_cluster_uuid.conf exists\"]}", "", "TASK [ceph-defaults : read cluster fsid if it already exists] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:112", "Friday 21 September 2018 08:28:04 -0400 (0:00:00.198) 0:01:46.305 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact fsid] *******************************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:124", "Friday 21 September 2018 08:28:04 -0400 
(0:00:00.042) 0:01:46.347 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact mds_name ansible_hostname] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:130", "Friday 21 September 2018 08:28:04 -0400 (0:00:00.041) 0:01:46.389 ****** ", "ok: [ceph-0] => {\"ansible_facts\": {\"mds_name\": \"ceph-0\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact mds_name ansible_fqdn] **************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:136", "Friday 21 September 2018 08:28:04 -0400 (0:00:00.073) 0:01:46.462 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact rbd_client_directory_owner ceph] ****************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:142", "Friday 21 September 2018 08:28:04 -0400 (0:00:00.041) 0:01:46.504 ****** ", "ok: [ceph-0] => {\"ansible_facts\": {\"rbd_client_directory_owner\": \"ceph\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact rbd_client_directory_group rbd_client_directory_group] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:149", "Friday 21 September 2018 08:28:04 -0400 (0:00:00.073) 0:01:46.578 ****** ", "ok: [ceph-0] => {\"ansible_facts\": {\"rbd_client_directory_group\": \"ceph\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact rbd_client_directory_mode 0770] *****************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:156", "Friday 21 September 2018 08:28:04 -0400 (0:00:00.198) 0:01:46.777 ****** ", "ok: [ceph-0] => {\"ansible_facts\": {\"rbd_client_directory_mode\": \"0770\"}, \"changed\": false}", "", "TASK [ceph-defaults : resolve device link(s)] **********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:163", "Friday 21 September 2018 08:28:04 -0400 (0:00:00.198) 0:01:46.975 ****** ", "ok: [ceph-0] => (item=/dev/vdb) => {\"changed\": false, \"cmd\": [\"readlink\", \"-f\", \"/dev/vdb\"], \"delta\": \"0:00:00.002909\", \"end\": \"2018-09-21 12:28:05.183223\", \"item\": \"/dev/vdb\", \"rc\": 0, \"start\": \"2018-09-21 12:28:05.180314\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"/dev/vdb\", \"stdout_lines\": [\"/dev/vdb\"]}", "ok: [ceph-0] => (item=/dev/vdc) => {\"changed\": false, \"cmd\": [\"readlink\", \"-f\", \"/dev/vdc\"], \"delta\": \"0:00:00.003139\", \"end\": \"2018-09-21 12:28:05.337201\", \"item\": \"/dev/vdc\", \"rc\": 0, \"start\": \"2018-09-21 12:28:05.334062\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"/dev/vdc\", \"stdout_lines\": [\"/dev/vdc\"]}", "ok: [ceph-0] => (item=/dev/vdd) => {\"changed\": false, \"cmd\": [\"readlink\", \"-f\", \"/dev/vdd\"], \"delta\": \"0:00:00.003170\", \"end\": \"2018-09-21 12:28:05.485220\", \"item\": \"/dev/vdd\", \"rc\": 0, \"start\": \"2018-09-21 12:28:05.482050\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"/dev/vdd\", \"stdout_lines\": [\"/dev/vdd\"]}", "ok: [ceph-0] => (item=/dev/vde) => {\"changed\": false, \"cmd\": [\"readlink\", \"-f\", \"/dev/vde\"], \"delta\": \"0:00:00.003465\", \"end\": \"2018-09-21 12:28:05.658588\", \"item\": \"/dev/vde\", \"rc\": 0, \"start\": \"2018-09-21 12:28:05.655123\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"/dev/vde\", \"stdout_lines\": [\"/dev/vde\"]}", "ok: [ceph-0] => (item=/dev/vdf) => 
{\"changed\": false, \"cmd\": [\"readlink\", \"-f\", \"/dev/vdf\"], \"delta\": \"0:00:00.003025\", \"end\": \"2018-09-21 12:28:05.811910\", \"item\": \"/dev/vdf\", \"rc\": 0, \"start\": \"2018-09-21 12:28:05.808885\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"/dev/vdf\", \"stdout_lines\": [\"/dev/vdf\"]}", "", "TASK [ceph-defaults : set_fact build devices from resolved symlinks] ***********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:173", "Friday 21 September 2018 08:28:05 -0400 (0:00:00.989) 0:01:47.965 ****** ", "ok: [ceph-0] => (item={'_ansible_parsed': True, 'stderr_lines': [], '_ansible_item_result': True, u'end': u'2018-09-21 12:28:05.183223', '_ansible_no_log': False, u'stdout': u'/dev/vdb', u'cmd': [u'readlink', u'-f', u'/dev/vdb'], u'rc': 0, 'item': u'/dev/vdb', u'delta': u'0:00:00.002909', '_ansible_item_label': u'/dev/vdb', u'stderr': u'', u'changed': False, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': False, u'_raw_params': u'readlink -f /dev/vdb', u'removes': None, u'creates': None, u'chdir': None, u'stdin': None}}, 'stdout_lines': [u'/dev/vdb'], u'start': u'2018-09-21 12:28:05.180314', '_ansible_ignore_errors': None, 'failed': False}) => {\"ansible_facts\": {\"devices\": [\"/dev/vdb\", \"/dev/vdc\", \"/dev/vdd\", \"/dev/vde\", \"/dev/vdf\", \"/dev/vdb\"]}, \"changed\": false, \"item\": {\"changed\": false, \"cmd\": [\"readlink\", \"-f\", \"/dev/vdb\"], \"delta\": \"0:00:00.002909\", \"end\": \"2018-09-21 12:28:05.183223\", \"failed\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"readlink -f /dev/vdb\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": \"/dev/vdb\", \"rc\": 0, \"start\": \"2018-09-21 12:28:05.180314\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"/dev/vdb\", \"stdout_lines\": [\"/dev/vdb\"]}}", "ok: [ceph-0] => (item={'_ansible_parsed': True, 'stderr_lines': [], '_ansible_item_result': True, u'end': u'2018-09-21 12:28:05.337201', '_ansible_no_log': False, u'stdout': u'/dev/vdc', u'cmd': [u'readlink', u'-f', u'/dev/vdc'], u'rc': 0, 'item': u'/dev/vdc', u'delta': u'0:00:00.003139', '_ansible_item_label': u'/dev/vdc', u'stderr': u'', u'changed': False, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': False, u'_raw_params': u'readlink -f /dev/vdc', u'removes': None, u'creates': None, u'chdir': None, u'stdin': None}}, 'stdout_lines': [u'/dev/vdc'], u'start': u'2018-09-21 12:28:05.334062', '_ansible_ignore_errors': None, 'failed': False}) => {\"ansible_facts\": {\"devices\": [\"/dev/vdb\", \"/dev/vdc\", \"/dev/vdd\", \"/dev/vde\", \"/dev/vdf\", \"/dev/vdb\", \"/dev/vdc\"]}, \"changed\": false, \"item\": {\"changed\": false, \"cmd\": [\"readlink\", \"-f\", \"/dev/vdc\"], \"delta\": \"0:00:00.003139\", \"end\": \"2018-09-21 12:28:05.337201\", \"failed\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"readlink -f /dev/vdc\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": \"/dev/vdc\", \"rc\": 0, \"start\": \"2018-09-21 12:28:05.334062\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"/dev/vdc\", \"stdout_lines\": [\"/dev/vdc\"]}}", "ok: [ceph-0] => (item={'_ansible_parsed': True, 'stderr_lines': [], '_ansible_item_result': True, u'end': u'2018-09-21 12:28:05.485220', '_ansible_no_log': False, u'stdout': u'/dev/vdd', 
u'cmd': [u'readlink', u'-f', u'/dev/vdd'], u'rc': 0, 'item': u'/dev/vdd', u'delta': u'0:00:00.003170', '_ansible_item_label': u'/dev/vdd', u'stderr': u'', u'changed': False, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': False, u'_raw_params': u'readlink -f /dev/vdd', u'removes': None, u'creates': None, u'chdir': None, u'stdin': None}}, 'stdout_lines': [u'/dev/vdd'], u'start': u'2018-09-21 12:28:05.482050', '_ansible_ignore_errors': None, 'failed': False}) => {\"ansible_facts\": {\"devices\": [\"/dev/vdb\", \"/dev/vdc\", \"/dev/vdd\", \"/dev/vde\", \"/dev/vdf\", \"/dev/vdb\", \"/dev/vdc\", \"/dev/vdd\"]}, \"changed\": false, \"item\": {\"changed\": false, \"cmd\": [\"readlink\", \"-f\", \"/dev/vdd\"], \"delta\": \"0:00:00.003170\", \"end\": \"2018-09-21 12:28:05.485220\", \"failed\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"readlink -f /dev/vdd\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": \"/dev/vdd\", \"rc\": 0, \"start\": \"2018-09-21 12:28:05.482050\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"/dev/vdd\", \"stdout_lines\": [\"/dev/vdd\"]}}", "ok: [ceph-0] => (item={'_ansible_parsed': True, 'stderr_lines': [], '_ansible_item_result': True, u'end': u'2018-09-21 12:28:05.658588', '_ansible_no_log': False, u'stdout': u'/dev/vde', u'cmd': [u'readlink', u'-f', u'/dev/vde'], u'rc': 0, 'item': u'/dev/vde', u'delta': u'0:00:00.003465', '_ansible_item_label': u'/dev/vde', u'stderr': u'', u'changed': False, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': False, u'_raw_params': u'readlink -f /dev/vde', u'removes': None, u'creates': None, u'chdir': None, u'stdin': None}}, 'stdout_lines': [u'/dev/vde'], u'start': u'2018-09-21 12:28:05.655123', '_ansible_ignore_errors': None, 'failed': False}) => {\"ansible_facts\": {\"devices\": [\"/dev/vdb\", \"/dev/vdc\", \"/dev/vdd\", \"/dev/vde\", \"/dev/vdf\", \"/dev/vdb\", \"/dev/vdc\", \"/dev/vdd\", \"/dev/vde\"]}, \"changed\": false, \"item\": {\"changed\": false, \"cmd\": [\"readlink\", \"-f\", \"/dev/vde\"], \"delta\": \"0:00:00.003465\", \"end\": \"2018-09-21 12:28:05.658588\", \"failed\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"readlink -f /dev/vde\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": \"/dev/vde\", \"rc\": 0, \"start\": \"2018-09-21 12:28:05.655123\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"/dev/vde\", \"stdout_lines\": [\"/dev/vde\"]}}", "ok: [ceph-0] => (item={'_ansible_parsed': True, 'stderr_lines': [], '_ansible_item_result': True, u'end': u'2018-09-21 12:28:05.811910', '_ansible_no_log': False, u'stdout': u'/dev/vdf', u'cmd': [u'readlink', u'-f', u'/dev/vdf'], u'rc': 0, 'item': u'/dev/vdf', u'delta': u'0:00:00.003025', '_ansible_item_label': u'/dev/vdf', u'stderr': u'', u'changed': False, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': False, u'_raw_params': u'readlink -f /dev/vdf', u'removes': None, u'creates': None, u'chdir': None, u'stdin': None}}, 'stdout_lines': [u'/dev/vdf'], u'start': u'2018-09-21 12:28:05.808885', '_ansible_ignore_errors': None, 'failed': False}) => {\"ansible_facts\": {\"devices\": [\"/dev/vdb\", \"/dev/vdc\", \"/dev/vdd\", \"/dev/vde\", \"/dev/vdf\", \"/dev/vdb\", \"/dev/vdc\", \"/dev/vdd\", \"/dev/vde\", \"/dev/vdf\"]}, \"changed\": 
false, \"item\": {\"changed\": false, \"cmd\": [\"readlink\", \"-f\", \"/dev/vdf\"], \"delta\": \"0:00:00.003025\", \"end\": \"2018-09-21 12:28:05.811910\", \"failed\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"readlink -f /dev/vdf\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": \"/dev/vdf\", \"rc\": 0, \"start\": \"2018-09-21 12:28:05.808885\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"/dev/vdf\", \"stdout_lines\": [\"/dev/vdf\"]}}", "", "TASK [ceph-defaults : set_fact build final devices list] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:182", "Friday 21 September 2018 08:28:06 -0400 (0:00:00.292) 0:01:48.257 ****** ", "ok: [ceph-0] => {\"ansible_facts\": {\"devices\": [\"/dev/vdb\", \"/dev/vdc\", \"/dev/vdd\", \"/dev/vde\", \"/dev/vdf\"]}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact ceph_uid for debian based system - non container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:190", "Friday 21 September 2018 08:28:06 -0400 (0:00:00.201) 0:01:48.459 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for red hat based system - non container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:197", "Friday 21 September 2018 08:28:06 -0400 (0:00:00.046) 0:01:48.505 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for debian based system - container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:204", "Friday 21 September 2018 08:28:06 -0400 (0:00:00.044) 0:01:48.550 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for red hat based system - container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:211", "Friday 21 September 2018 08:28:06 -0400 (0:00:00.051) 0:01:48.602 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for red hat] ***************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:218", "Friday 21 September 2018 08:28:06 -0400 (0:00:00.054) 0:01:48.657 ****** ", "ok: [ceph-0] => {\"ansible_facts\": {\"ceph_uid\": 167}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact rgw_hostname - fqdn] ****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:225", "Friday 21 September 2018 08:28:06 -0400 (0:00:00.217) 0:01:48.874 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact rgw_hostname - no fqdn] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:235", "Friday 21 September 2018 08:28:06 -0400 (0:00:00.047) 0:01:48.922 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_directories] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:2", "Friday 21 September 2018 08:28:06 
-0400 (0:00:00.046) 0:01:48.969 ****** ", "ok: [ceph-0] => {\"ansible_facts\": {\"ceph_directories\": [\"/etc/ceph\", \"/var/lib/ceph/\", \"/var/lib/ceph/mon\", \"/var/lib/ceph/osd\", \"/var/lib/ceph/mds\", \"/var/lib/ceph/tmp\", \"/var/lib/ceph/radosgw\", \"/var/lib/ceph/bootstrap-rgw\", \"/var/lib/ceph/bootstrap-mds\", \"/var/lib/ceph/bootstrap-osd\", \"/var/lib/ceph/bootstrap-rbd\", \"/var/run/ceph\"]}, \"changed\": false}", "", "TASK [ceph-defaults : create ceph initial directories] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:18", "Friday 21 September 2018 08:28:06 -0400 (0:00:00.073) 0:01:49.042 ****** ", "changed: [ceph-0] => (item=/etc/ceph) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [ceph-0] => (item=/var/lib/ceph/) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [ceph-0] => (item=/var/lib/ceph/mon) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/mon\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mon\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [ceph-0] => (item=/var/lib/ceph/osd) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/osd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [ceph-0] => (item=/var/lib/ceph/mds) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/mds\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [ceph-0] => (item=/var/lib/ceph/tmp) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/tmp\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/tmp\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [ceph-0] => (item=/var/lib/ceph/radosgw) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/radosgw\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/radosgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [ceph-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [ceph-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", 
\"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [ceph-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [ceph-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [ceph-0] => (item=/var/run/ceph) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/run/ceph\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/run/ceph\", \"secontext\": \"unconfined_u:object_r:var_run_t:s0\", \"size\": 40, \"state\": \"directory\", \"uid\": 167}", "", "TASK [ceph-docker-common : fail if systemd is not present] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml:2", "Friday 21 September 2018 08:28:09 -0400 (0:00:02.156) 0:01:51.199 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : make sure monitor_interface, monitor_address or monitor_address_block is defined] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:2", "Friday 21 September 2018 08:28:09 -0400 (0:00:00.049) 0:01:51.249 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : make sure radosgw_interface, radosgw_address or radosgw_address_block is defined] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:11", "Friday 21 September 2018 08:28:09 -0400 (0:00:00.047) 0:01:51.297 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : warning deprecation for fqdn configuration] *********", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:20", "Friday 21 September 2018 08:28:09 -0400 (0:00:00.047) 0:01:51.344 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : remove ceph udev rules] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml:2", "Friday 21 September 2018 08:28:09 -0400 (0:00:00.048) 0:01:51.393 ****** ", "ok: [ceph-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) => {\"changed\": false, \"item\": \"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"path\": \"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"state\": \"absent\"}", "ok: [ceph-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) => {\"changed\": false, \"item\": \"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"path\": \"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"state\": \"absent\"}", "", "TASK [ceph-docker-common : set_fact monitor_name ansible_hostname] *************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:14", "Friday 21 September 2018 08:28:09 -0400 (0:00:00.431) 0:01:51.824 
****** ", "ok: [ceph-0] => {\"ansible_facts\": {\"monitor_name\": \"ceph-0\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact monitor_name ansible_fqdn] *****************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:20", "Friday 21 September 2018 08:28:09 -0400 (0:00:00.078) 0:01:51.902 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : get docker version] *********************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:26", "Friday 21 September 2018 08:28:09 -0400 (0:00:00.041) 0:01:51.943 ****** ", "ok: [ceph-0] => {\"changed\": false, \"cmd\": [\"docker\", \"--version\"], \"delta\": \"0:00:00.021711\", \"end\": \"2018-09-21 12:28:10.039613\", \"rc\": 0, \"start\": \"2018-09-21 12:28:10.017902\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Docker version 1.13.1, build 6e3bb8e/1.13.1\", \"stdout_lines\": [\"Docker version 1.13.1, build 6e3bb8e/1.13.1\"]}", "", "TASK [ceph-docker-common : set_fact ceph_docker_version ceph_docker_version.stdout.split] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:32", "Friday 21 September 2018 08:28:10 -0400 (0:00:00.249) 0:01:52.193 ****** ", "ok: [ceph-0] => {\"ansible_facts\": {\"ceph_docker_version\": \"1.13.1,\"}, \"changed\": false}", "", "TASK [ceph-docker-common : check if a cluster is already running] **************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:42", "Friday 21 September 2018 08:28:10 -0400 (0:00:00.074) 0:01:52.267 ****** ", "ok: [ceph-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mon-ceph-0\"], \"delta\": \"0:00:00.025814\", \"end\": \"2018-09-21 12:28:10.369419\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-09-21 12:28:10.343605\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-docker-common : set_fact ceph_config_keys] **************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:2", "Friday 21 September 2018 08:28:10 -0400 (0:00:00.256) 0:01:52.524 ****** ", "ok: [ceph-0] => {\"ansible_facts\": {\"ceph_config_keys\": [\"/etc/ceph/ceph.client.admin.keyring\", \"/etc/ceph/monmap-ceph\", \"/etc/ceph/ceph.mon.keyring\", \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\"]}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact tmp_ceph_mgr_keys add mgr keys to config and keys paths] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:13", "Friday 21 September 2018 08:28:10 -0400 (0:00:00.093) 0:01:52.617 ****** ", "ok: [ceph-0] => (item=controller-0) => {\"ansible_facts\": {\"tmp_ceph_mgr_keys\": \"/etc/ceph/ceph.mgr.controller-0.keyring\"}, \"changed\": false, \"item\": \"controller-0\"}", "", "TASK [ceph-docker-common : set_fact ceph_mgr_keys convert mgr keys to an array] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:20", "Friday 21 September 2018 08:28:10 -0400 (0:00:00.154) 0:01:52.772 ****** ", "ok: [ceph-0] => {\"ansible_facts\": {\"ceph_mgr_keys\": [\"/etc/ceph/ceph.mgr.controller-0.keyring\"]}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_config_keys merge mgr 
keys to config and keys paths] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:25", "Friday 21 September 2018 08:28:10 -0400 (0:00:00.100) 0:01:52.873 ****** ", "ok: [ceph-0] => {\"ansible_facts\": {\"ceph_config_keys\": [\"/etc/ceph/ceph.client.admin.keyring\", \"/etc/ceph/monmap-ceph\", \"/etc/ceph/ceph.mon.keyring\", \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"/etc/ceph/ceph.mgr.controller-0.keyring\"]}, \"changed\": false}", "", "TASK [ceph-docker-common : stat for ceph config and keys] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:30", "Friday 21 September 2018 08:28:10 -0400 (0:00:00.097) 0:01:52.970 ****** ", "ok: [ceph-0 -> localhost] => (item=/etc/ceph/ceph.client.admin.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"stat\": {\"atime\": 1537532848.0440793, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"9e373fe5b7239c71b2c20b1e9dda563cef508b10\", \"ctime\": 1537532848.0440793, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 30664835, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1537532848.0440793, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//etc/ceph/ceph.client.admin.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 159, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}", "ok: [ceph-0 -> localhost] => (item=/etc/ceph/monmap-ceph) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/monmap-ceph\", \"stat\": {\"exists\": false}}", "ok: [ceph-0 -> localhost] => (item=/etc/ceph/ceph.mon.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"atime\": 1537532848.2170777, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"71985a44f030d17c775335c42962737bc688e6a0\", \"ctime\": 1537532848.2160778, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 30664837, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1537532848.2160778, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//etc/ceph/ceph.mon.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 688, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}", "ok: [ceph-0 -> localhost] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) => 
{\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"atime\": 1537532848.3970761, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"64333848b27ab8d9f98e1749b646f53ce8491e92\", \"ctime\": 1537532848.3970761, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 46865184, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1537532848.3970761, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-osd/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}", "ok: [ceph-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"atime\": 1537532848.5800743, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"ad253570a945c870140d7f94eccef76f44861e59\", \"ctime\": 1537532848.5800743, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 51894543, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1537532848.5800743, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}", "ok: [ceph-0 -> localhost] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"atime\": 1537532848.7600725, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"40b83591ce4be64f55769e0a0d8aca12db95c281\", \"ctime\": 1537532848.7600725, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 55762959, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1537532848.7600725, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-mds/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, 
\"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}", "ok: [ceph-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"atime\": 1537532848.9380708, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"cf7920e30e8d8566b8b9f935a5f741908c23465e\", \"ctime\": 1537532848.9380708, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 60028473, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1537532848.9380708, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}", "ok: [ceph-0 -> localhost] => (item=/etc/ceph/ceph.mgr.controller-0.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"stat\": {\"atime\": 1537532872.9868395, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"f02fcb991c5a53a3bf474c15b6a514c8356b9c69\", \"ctime\": 1537532850.6510544, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 30664838, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1537532850.6510544, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//etc/ceph/ceph.mgr.controller-0.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 67, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}", "", "TASK [ceph-docker-common : fail if we find existing cluster files] *************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml:5", "Friday 21 September 2018 08:28:12 -0400 (0:00:01.379) 0:01:54.349 ****** ", "skipping: [ceph-0] => (item=[u'/etc/ceph/ceph.client.admin.keyring', {'_ansible_parsed': True, u'stat': {u'charset': u'unknown', u'uid': 42430, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1537532848.0440793, u'block_size': 4096, u'inode': 30664835, u'isgid': False, u'size': 159, u'executable': False, u'roth': True, u'isuid': False, u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 42430, u'ischr': False, u'wusr': True, u'writeable': True, u'isdir': False, u'blocks': 8, u'xoth': False, u'rusr': True, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': 
u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//etc/ceph/ceph.client.admin.keyring', u'xusr': False, u'atime': 1537532848.0440793, u'mimetype': u'unknown', u'ctime': 1537532848.0440793, u'isblk': False, u'checksum': u'9e373fe5b7239c71b2c20b1e9dda563cef508b10', u'dev': 64769, u'wgrp': False, u'isfifo': False, u'mode': u'0644', u'xgrp': False, u'islnk': False, u'attributes': []}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/ceph.client.admin.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//etc/ceph/ceph.client.admin.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.client.admin.keyring'}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.client.admin.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/ceph.client.admin.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//etc/ceph/ceph.client.admin.keyring\"}}, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"stat\": {\"atime\": 1537532848.0440793, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"9e373fe5b7239c71b2c20b1e9dda563cef508b10\", \"ctime\": 1537532848.0440793, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 30664835, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1537532848.0440793, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//etc/ceph/ceph.client.admin.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 159, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item=[u'/etc/ceph/monmap-ceph', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/monmap-ceph', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': 
u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//etc/ceph/monmap-ceph', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/monmap-ceph'}]) => {\"changed\": false, \"item\": [\"/etc/ceph/monmap-ceph\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/monmap-ceph\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//etc/ceph/monmap-ceph\"}}, \"item\": \"/etc/ceph/monmap-ceph\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item=[u'/etc/ceph/ceph.mon.keyring', {'_ansible_parsed': True, u'stat': {u'charset': u'unknown', u'uid': 42430, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1537532848.2160778, u'block_size': 4096, u'inode': 30664837, u'isgid': False, u'size': 688, u'executable': False, u'roth': True, u'isuid': False, u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 42430, u'ischr': False, u'wusr': True, u'writeable': True, u'isdir': False, u'blocks': 8, u'xoth': False, u'rusr': True, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//etc/ceph/ceph.mon.keyring', u'xusr': False, u'atime': 1537532848.2170777, u'mimetype': u'unknown', u'ctime': 1537532848.2160778, u'isblk': False, u'checksum': u'71985a44f030d17c775335c42962737bc688e6a0', u'dev': 64769, u'wgrp': False, u'isfifo': False, u'mode': u'0644', u'xgrp': False, u'islnk': False, u'attributes': []}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/ceph.mon.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//etc/ceph/ceph.mon.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.mon.keyring'}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.mon.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/ceph.mon.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": 
\"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//etc/ceph/ceph.mon.keyring\"}}, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"atime\": 1537532848.2170777, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"71985a44f030d17c775335c42962737bc688e6a0\", \"ctime\": 1537532848.2160778, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 30664837, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1537532848.2160778, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//etc/ceph/ceph.mon.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 688, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item=[u'/var/lib/ceph/bootstrap-osd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'charset': u'unknown', u'uid': 42430, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1537532848.3970761, u'block_size': 4096, u'inode': 46865184, u'isgid': False, u'size': 113, u'executable': False, u'roth': True, u'isuid': False, u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 42430, u'ischr': False, u'wusr': True, u'writeable': True, u'isdir': False, u'blocks': 8, u'xoth': False, u'rusr': True, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-osd/ceph.keyring', u'xusr': False, u'atime': 1537532848.3970761, u'mimetype': u'unknown', u'ctime': 1537532848.3970761, u'isblk': False, u'checksum': u'64333848b27ab8d9f98e1749b646f53ce8491e92', u'dev': 64769, u'wgrp': False, u'isfifo': False, u'mode': u'0644', u'xgrp': False, u'islnk': False, u'attributes': []}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-osd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-osd/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-osd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", 
\"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-osd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"atime\": 1537532848.3970761, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"64333848b27ab8d9f98e1749b646f53ce8491e92\", \"ctime\": 1537532848.3970761, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 46865184, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1537532848.3970761, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-osd/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item=[u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'charset': u'unknown', u'uid': 42430, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1537532848.5800743, u'block_size': 4096, u'inode': 51894543, u'isgid': False, u'size': 113, u'executable': False, u'roth': True, u'isuid': False, u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 42430, u'ischr': False, u'wusr': True, u'writeable': True, u'isdir': False, u'blocks': 8, u'xoth': False, u'rusr': True, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-rgw/ceph.keyring', u'xusr': False, u'atime': 1537532848.5800743, u'mimetype': u'unknown', u'ctime': 1537532848.5800743, u'isblk': False, u'checksum': u'ad253570a945c870140d7f94eccef76f44861e59', u'dev': 64769, u'wgrp': False, u'isfifo': False, u'mode': u'0644', u'xgrp': False, u'islnk': False, u'attributes': []}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-rgw/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, 
\"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-rgw/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"atime\": 1537532848.5800743, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"ad253570a945c870140d7f94eccef76f44861e59\", \"ctime\": 1537532848.5800743, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 51894543, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1537532848.5800743, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item=[u'/var/lib/ceph/bootstrap-mds/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'charset': u'unknown', u'uid': 42430, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1537532848.7600725, u'block_size': 4096, u'inode': 55762959, u'isgid': False, u'size': 113, u'executable': False, u'roth': True, u'isuid': False, u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 42430, u'ischr': False, u'wusr': True, u'writeable': True, u'isdir': False, u'blocks': 8, u'xoth': False, u'rusr': True, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-mds/ceph.keyring', u'xusr': False, u'atime': 1537532848.7600725, u'mimetype': u'unknown', u'ctime': 1537532848.7600725, u'isblk': False, u'checksum': u'40b83591ce4be64f55769e0a0d8aca12db95c281', u'dev': 64769, u'wgrp': False, u'isfifo': False, u'mode': u'0644', u'xgrp': False, u'islnk': False, u'attributes': []}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-mds/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-mds/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-mds/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-mds/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, 
\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-mds/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"atime\": 1537532848.7600725, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"40b83591ce4be64f55769e0a0d8aca12db95c281\", \"ctime\": 1537532848.7600725, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 55762959, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1537532848.7600725, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-mds/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item=[u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'charset': u'unknown', u'uid': 42430, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1537532848.9380708, u'block_size': 4096, u'inode': 60028473, u'isgid': False, u'size': 113, u'executable': False, u'roth': True, u'isuid': False, u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 42430, u'ischr': False, u'wusr': True, u'writeable': True, u'isdir': False, u'blocks': 8, u'xoth': False, u'rusr': True, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-rbd/ceph.keyring', u'xusr': False, u'atime': 1537532848.9380708, u'mimetype': u'unknown', u'ctime': 1537532848.9380708, u'isblk': False, u'checksum': u'cf7920e30e8d8566b8b9f935a5f741908c23465e', u'dev': 64769, u'wgrp': False, u'isfifo': False, u'mode': u'0644', u'xgrp': False, u'islnk': False, u'attributes': []}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-rbd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring'}]) => {\"changed\": false, \"item\": 
[\"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-rbd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"atime\": 1537532848.9380708, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"cf7920e30e8d8566b8b9f935a5f741908c23465e\", \"ctime\": 1537532848.9380708, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 60028473, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1537532848.9380708, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item=[u'/etc/ceph/ceph.mgr.controller-0.keyring', {'_ansible_parsed': True, u'stat': {u'charset': u'unknown', u'uid': 42430, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1537532850.6510544, u'block_size': 4096, u'inode': 30664838, u'isgid': False, u'size': 67, u'executable': False, u'roth': True, u'isuid': False, u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 42430, u'ischr': False, u'wusr': True, u'writeable': True, u'isdir': False, u'blocks': 8, u'xoth': False, u'rusr': True, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//etc/ceph/ceph.mgr.controller-0.keyring', u'xusr': False, u'atime': 1537532872.9868395, u'mimetype': u'unknown', u'ctime': 1537532850.6510544, u'isblk': False, u'checksum': u'f02fcb991c5a53a3bf474c15b6a514c8356b9c69', u'dev': 64769, u'wgrp': False, u'isfifo': False, u'mode': u'0644', u'xgrp': False, u'islnk': False, u'attributes': []}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/ceph.mgr.controller-0.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//etc/ceph/ceph.mgr.controller-0.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 
'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.mgr.controller-0.keyring'}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.mgr.controller-0.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//etc/ceph/ceph.mgr.controller-0.keyring\"}}, \"item\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"stat\": {\"atime\": 1537532872.9868395, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"f02fcb991c5a53a3bf474c15b6a514c8356b9c69\", \"ctime\": 1537532850.6510544, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 30664838, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1537532850.6510544, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59//etc/ceph/ceph.mgr.controller-0.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 67, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}], \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : check ntp installation on atomic] *******************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml:2", "Friday 21 September 2018 08:28:12 -0400 (0:00:00.398) 0:01:54.748 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : start the ntp service] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml:6", "Friday 21 September 2018 08:28:12 -0400 (0:00:00.055) 0:01:54.803 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : check ntp installation on redhat or suse] ***********", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:2", "Friday 21 September 2018 08:28:12 -0400 (0:00:00.050) 0:01:54.854 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : install ntp on redhat or suse] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:13", "Friday 21 September 2018 08:28:12 -0400 (0:00:00.053) 0:01:54.907 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : start the ntp service] 
******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml:7", "Friday 21 September 2018 08:28:12 -0400 (0:00:00.057) 0:01:54.965 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : check ntp installation on debian] *******************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:2", "Friday 21 September 2018 08:28:12 -0400 (0:00:00.053) 0:01:55.018 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : install ntp on debian] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:11", "Friday 21 September 2018 08:28:12 -0400 (0:00:00.050) 0:01:55.068 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : start the ntp service] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml:7", "Friday 21 September 2018 08:28:13 -0400 (0:00:00.052) 0:01:55.121 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph mon container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:3", "Friday 21 September 2018 08:28:13 -0400 (0:00:00.051) 0:01:55.173 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph osd container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:12", "Friday 21 September 2018 08:28:13 -0400 (0:00:00.050) 0:01:55.224 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph mds container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:21", "Friday 21 September 2018 08:28:13 -0400 (0:00:00.065) 0:01:55.289 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph rgw container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:30", "Friday 21 September 2018 08:28:13 -0400 (0:00:00.048) 0:01:55.337 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph mgr container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:39", "Friday 21 September 2018 08:28:13 -0400 (0:00:00.052) 0:01:55.389 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph rbd mirror container] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:48", "Friday 21 September 2018 08:28:13 -0400 (0:00:00.054) 0:01:55.444 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph nfs container] 
*************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:57", "Friday 21 September 2018 08:28:13 -0400 (0:00:00.047) 0:01:55.491 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph mon container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:67", "Friday 21 September 2018 08:28:13 -0400 (0:00:00.051) 0:01:55.543 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph osd container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:76", "Friday 21 September 2018 08:28:13 -0400 (0:00:00.068) 0:01:55.612 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph rgw container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:85", "Friday 21 September 2018 08:28:13 -0400 (0:00:00.058) 0:01:55.670 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph mds container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:94", "Friday 21 September 2018 08:28:13 -0400 (0:00:00.053) 0:01:55.723 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph mgr container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:103", "Friday 21 September 2018 08:28:13 -0400 (0:00:00.049) 0:01:55.773 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph rbd mirror container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:112", "Friday 21 September 2018 08:28:13 -0400 (0:00:00.053) 0:01:55.827 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph nfs container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:121", "Friday 21 September 2018 08:28:13 -0400 (0:00:00.050) 0:01:55.877 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mon_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:130", "Friday 21 September 2018 08:28:13 -0400 (0:00:00.049) 0:01:55.926 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_osd_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:137", "Friday 21 September 2018 08:28:13 -0400 (0:00:00.045) 0:01:55.972 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mds_image_repodigest_before_pulling] 
***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:144", "Friday 21 September 2018 08:28:13 -0400 (0:00:00.047) 0:01:56.020 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rgw_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:151", "Friday 21 September 2018 08:28:13 -0400 (0:00:00.044) 0:01:56.065 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mgr_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:158", "Friday 21 September 2018 08:28:13 -0400 (0:00:00.045) 0:01:56.110 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:165", "Friday 21 September 2018 08:28:14 -0400 (0:00:00.044) 0:01:56.154 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_nfs_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:172", "Friday 21 September 2018 08:28:14 -0400 (0:00:00.046) 0:01:56.201 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : pulling 192.168.24.1:8787/rhceph:3-12 image] ********", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:179", "Friday 21 September 2018 08:28:14 -0400 (0:00:00.050) 0:01:56.251 ****** ", "ok: [ceph-0] => {\"attempts\": 1, \"changed\": false, \"cmd\": [\"timeout\", \"300s\", \"docker\", \"pull\", \"192.168.24.1:8787/rhceph:3-12\"], \"delta\": \"0:00:13.428064\", \"end\": \"2018-09-21 12:28:27.762592\", \"rc\": 0, \"start\": \"2018-09-21 12:28:14.334528\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Trying to pull repository 192.168.24.1:8787/rhceph ... \\n3-12: Pulling from 192.168.24.1:8787/rhceph\\n428a9ca37f0e: Pulling fs layer\\n8115a58d83bd: Pulling fs layer\\n5e409f26eefe: Pulling fs layer\\n8115a58d83bd: Verifying Checksum\\n8115a58d83bd: Download complete\\n428a9ca37f0e: Verifying Checksum\\n428a9ca37f0e: Download complete\\n5e409f26eefe: Verifying Checksum\\n5e409f26eefe: Download complete\\n428a9ca37f0e: Pull complete\\n8115a58d83bd: Pull complete\\n5e409f26eefe: Pull complete\\nDigest: sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\\nStatus: Downloaded newer image for 192.168.24.1:8787/rhceph:3-12\", \"stdout_lines\": [\"Trying to pull repository 192.168.24.1:8787/rhceph ... 
\", \"3-12: Pulling from 192.168.24.1:8787/rhceph\", \"428a9ca37f0e: Pulling fs layer\", \"8115a58d83bd: Pulling fs layer\", \"5e409f26eefe: Pulling fs layer\", \"8115a58d83bd: Verifying Checksum\", \"8115a58d83bd: Download complete\", \"428a9ca37f0e: Verifying Checksum\", \"428a9ca37f0e: Download complete\", \"5e409f26eefe: Verifying Checksum\", \"5e409f26eefe: Download complete\", \"428a9ca37f0e: Pull complete\", \"8115a58d83bd: Pull complete\", \"5e409f26eefe: Pull complete\", \"Digest: sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\", \"Status: Downloaded newer image for 192.168.24.1:8787/rhceph:3-12\"]}", "", "TASK [ceph-docker-common : inspecting 192.168.24.1:8787/rhceph:3-12 image after pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:189", "Friday 21 September 2018 08:28:27 -0400 (0:00:13.670) 0:02:09.921 ****** ", "changed: [ceph-0] => {\"changed\": true, \"cmd\": [\"docker\", \"inspect\", \"192.168.24.1:8787/rhceph:3-12\"], \"delta\": \"0:00:00.027124\", \"end\": \"2018-09-21 12:28:28.018959\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-09-21 12:28:27.991835\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"[\\n {\\n \\\"Id\\\": \\\"sha256:fa3b551f095247f90f7ace123cc14519b26abd4c042eb0270ab2452d9636a41f\\\",\\n \\\"RepoTags\\\": [\\n \\\"192.168.24.1:8787/rhceph:3-12\\\"\\n ],\\n \\\"RepoDigests\\\": [\\n \\\"192.168.24.1:8787/rhceph@sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\\\"\\n ],\\n \\\"Parent\\\": \\\"\\\",\\n \\\"Comment\\\": \\\"\\\",\\n \\\"Created\\\": \\\"2018-08-06T22:30:33.81313Z\\\",\\n \\\"Container\\\": \\\"\\\",\\n \\\"ContainerConfig\\\": {\\n \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": [\\n \\\"/bin/sh\\\",\\n \\\"-c\\\",\\n \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z5-b1e8f.repo'\\\"\\n ],\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"sha256:379a20daa6033d04119c4ca45fffe3e50f0cfd517d8712a222b53bea11ee4493\\\",\\n \\\"Volumes\\\": null,\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"stable-3.0\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\\n \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 
3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.expose-services\\\": \\\"\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"12\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\\n \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. Use it as a base to build your own images.\\\",\\n \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"DockerVersion\\\": \\\"1.12.6\\\",\\n \\\"Author\\\": \\\"\\\",\\n \\\"Config\\\": {\\n \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": null,\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"9e41cab2948f6e02bef2d4df2d2d21f082f2e9f5b5aadcb04d70201596959834\\\",\\n \\\"Volumes\\\": null,\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"stable-3.0\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\\n \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} 
-e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.expose-services\\\": \\\"\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"12\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\\n \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. Use it as a base to build your own images.\\\",\\n \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"Architecture\\\": \\\"amd64\\\",\\n \\\"Os\\\": \\\"linux\\\",\\n \\\"Size\\\": 592066185,\\n \\\"VirtualSize\\\": 592066185,\\n \\\"GraphDriver\\\": {\\n \\\"Name\\\": \\\"overlay2\\\",\\n \\\"Data\\\": {\\n \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/c1c30ce1dcf2b7db29c713c8a41824356ead2dbe1c9dfd97aa3ee642074fcf4b/diff:/var/lib/docker/overlay2/45f63713c0446d74ff6d3c6aa0b1aa2ab1c61cb75d4ebd421a02603488f56496/diff\\\",\\n \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/45fea4bde2d2f33c81b8fe348902856f4ce88b498bac5ed1649ee15ef4a1574d/merged\\\",\\n \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/45fea4bde2d2f33c81b8fe348902856f4ce88b498bac5ed1649ee15ef4a1574d/diff\\\",\\n \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/45fea4bde2d2f33c81b8fe348902856f4ce88b498bac5ed1649ee15ef4a1574d/work\\\"\\n }\\n },\\n \\\"RootFS\\\": {\\n \\\"Type\\\": \\\"layers\\\",\\n \\\"Layers\\\": [\\n \\\"sha256:db195156f4cd9e83cf2a76f1319d5f839cf2552ea1d23c0317931786b1f594cf\\\",\\n \\\"sha256:6e8ca199394f13d2b08b652f8281d3c2f8ad22333737e8ad2ff554f881bcd8a1\\\",\\n \\\"sha256:984d7131485eaebe7b45bb3052fae34a956316f94faf95681b8480a904179cfa\\\"\\n ]\\n }\\n }\\n]\", \"stdout_lines\": [\"[\", \" {\", \" \\\"Id\\\": \\\"sha256:fa3b551f095247f90f7ace123cc14519b26abd4c042eb0270ab2452d9636a41f\\\",\", \" \\\"RepoTags\\\": [\", \" \\\"192.168.24.1:8787/rhceph:3-12\\\"\", \" ],\", \" \\\"RepoDigests\\\": [\", \" \\\"192.168.24.1:8787/rhceph@sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\\\"\", \" ],\", \" \\\"Parent\\\": \\\"\\\",\", \" \\\"Comment\\\": \\\"\\\",\", \" \\\"Created\\\": \\\"2018-08-06T22:30:33.81313Z\\\",\", \" \\\"Container\\\": \\\"\\\",\", \" \\\"ContainerConfig\\\": {\", \" \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", 
\" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": [\", \" \\\"/bin/sh\\\",\", \" \\\"-c\\\",\", \" \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z5-b1e8f.repo'\\\"\", \" ],\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"sha256:379a20daa6033d04119c4ca45fffe3e50f0cfd517d8712a222b53bea11ee4493\\\",\", \" \\\"Volumes\\\": null,\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"stable-3.0\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\", \" \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.expose-services\\\": \\\"\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"12\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\", \" \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\", \" \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"DockerVersion\\\": \\\"1.12.6\\\",\", \" \\\"Author\\\": \\\"\\\",\", \" \\\"Config\\\": {\", \" \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": null,\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"9e41cab2948f6e02bef2d4df2d2d21f082f2e9f5b5aadcb04d70201596959834\\\",\", \" \\\"Volumes\\\": null,\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"stable-3.0\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\", \" \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.expose-services\\\": \\\"\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"12\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\", \" \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\", \" \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"Architecture\\\": \\\"amd64\\\",\", \" \\\"Os\\\": \\\"linux\\\",\", \" \\\"Size\\\": 592066185,\", \" \\\"VirtualSize\\\": 592066185,\", \" \\\"GraphDriver\\\": {\", \" \\\"Name\\\": \\\"overlay2\\\",\", \" \\\"Data\\\": {\", \" \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/c1c30ce1dcf2b7db29c713c8a41824356ead2dbe1c9dfd97aa3ee642074fcf4b/diff:/var/lib/docker/overlay2/45f63713c0446d74ff6d3c6aa0b1aa2ab1c61cb75d4ebd421a02603488f56496/diff\\\",\", \" \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/45fea4bde2d2f33c81b8fe348902856f4ce88b498bac5ed1649ee15ef4a1574d/merged\\\",\", \" \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/45fea4bde2d2f33c81b8fe348902856f4ce88b498bac5ed1649ee15ef4a1574d/diff\\\",\", \" \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/45fea4bde2d2f33c81b8fe348902856f4ce88b498bac5ed1649ee15ef4a1574d/work\\\"\", \" }\", \" },\", \" \\\"RootFS\\\": {\", \" \\\"Type\\\": \\\"layers\\\",\", \" \\\"Layers\\\": [\", \" \\\"sha256:db195156f4cd9e83cf2a76f1319d5f839cf2552ea1d23c0317931786b1f594cf\\\",\", \" \\\"sha256:6e8ca199394f13d2b08b652f8281d3c2f8ad22333737e8ad2ff554f881bcd8a1\\\",\", \" \\\"sha256:984d7131485eaebe7b45bb3052fae34a956316f94faf95681b8480a904179cfa\\\"\", \" ]\", \" }\", \" }\", \"]\"]}", "", "TASK [ceph-docker-common : set_fact image_repodigest_after_pulling] ************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:194", "Friday 21 September 2018 08:28:28 -0400 (0:00:00.373) 0:02:10.295 ****** ", "ok: [ceph-0] => {\"ansible_facts\": {\"image_repodigest_after_pulling\": \"sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_mon_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:200", "Friday 21 September 2018 08:28:28 -0400 (0:00:00.186) 0:02:10.481 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_osd_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:211", "Friday 21 September 2018 08:28:28 -0400 (0:00:00.048) 0:02:10.529 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mds_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:222", "Friday 21 September 2018 08:28:28 -0400 (0:00:00.054) 0:02:10.584 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rgw_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:233", "Friday 21 September 2018 08:28:28 -0400 (0:00:00.048) 0:02:10.633 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mgr_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:244", "Friday 21 September 2018 08:28:28 -0400 
(0:00:00.050) 0:02:10.683 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_updated] *************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:255", "Friday 21 September 2018 08:28:28 -0400 (0:00:00.053) 0:02:10.736 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_nfs_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:266", "Friday 21 September 2018 08:28:28 -0400 (0:00:00.050) 0:02:10.787 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : export local ceph dev image] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:277", "Friday 21 September 2018 08:28:28 -0400 (0:00:00.051) 0:02:10.838 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : copy ceph dev image file] ***************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:285", "Friday 21 September 2018 08:28:28 -0400 (0:00:00.046) 0:02:10.885 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : load ceph dev image] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:292", "Friday 21 September 2018 08:28:28 -0400 (0:00:00.045) 0:02:10.931 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : remove tmp ceph dev image file] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:297", "Friday 21 September 2018 08:28:28 -0400 (0:00:00.046) 0:02:10.977 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : get ceph version] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:84", "Friday 21 September 2018 08:28:28 -0400 (0:00:00.054) 0:02:11.032 ****** ", "ok: [ceph-0] => {\"changed\": false, \"cmd\": [\"docker\", \"run\", \"--rm\", \"--entrypoint\", \"/usr/bin/ceph\", \"192.168.24.1:8787/rhceph:3-12\", \"--version\"], \"delta\": \"0:00:00.427005\", \"end\": \"2018-09-21 12:28:29.637584\", \"rc\": 0, \"start\": \"2018-09-21 12:28:29.210579\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"ceph version 12.2.4-42.el7cp (f73642baacccbf2a3c254d1fb5f0317b933b28cf) luminous (stable)\", \"stdout_lines\": [\"ceph version 12.2.4-42.el7cp (f73642baacccbf2a3c254d1fb5f0317b933b28cf) luminous (stable)\"]}", "", "TASK [ceph-docker-common : set_fact ceph_version ceph_version.stdout.split] ****", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:90", "Friday 21 September 2018 08:28:29 -0400 (0:00:00.761) 0:02:11.793 ****** ", "ok: [ceph-0] => {\"ansible_facts\": {\"ceph_version\": \"12.2.4-42.el7cp\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_release jewel] ************************", "task path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:2", "Friday 21 September 2018 08:28:29 -0400 (0:00:00.080) 0:02:11.873 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_release kraken] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:8", "Friday 21 September 2018 08:28:29 -0400 (0:00:00.048) 0:02:11.922 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_release luminous] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:14", "Friday 21 September 2018 08:28:29 -0400 (0:00:00.053) 0:02:11.976 ****** ", "ok: [ceph-0] => {\"ansible_facts\": {\"ceph_release\": \"luminous\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_release mimic] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:20", "Friday 21 September 2018 08:28:30 -0400 (0:00:00.192) 0:02:12.168 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_release nautilus] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:26", "Friday 21 September 2018 08:28:30 -0400 (0:00:00.047) 0:02:12.216 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : create bootstrap directories] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml:2", "Friday 21 September 2018 08:28:30 -0400 (0:00:00.047) 0:02:12.264 ****** ", "changed: [ceph-0] => (item=/etc/ceph) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "changed: [ceph-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "changed: [ceph-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "changed: [ceph-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "changed: [ceph-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": 
\"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "", "TASK [ceph-config : create ceph conf directory] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:4", "Friday 21 September 2018 08:28:31 -0400 (0:00:00.945) 0:02:13.209 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : generate ceph configuration file: ceph.conf] ***************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:12", "Friday 21 September 2018 08:28:31 -0400 (0:00:00.051) 0:02:13.260 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : create a local fetch directory if it does not exist] *******", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:38", "Friday 21 September 2018 08:28:31 -0400 (0:00:00.052) 0:02:13.312 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : generate cluster uuid] *************************************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:54", "Friday 21 September 2018 08:28:31 -0400 (0:00:00.059) 0:02:13.372 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : read cluster uuid if it already exists] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:64", "Friday 21 September 2018 08:28:31 -0400 (0:00:00.053) 0:02:13.425 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : ensure /etc/ceph exists] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:76", "Friday 21 September 2018 08:28:31 -0400 (0:00:00.054) 0:02:13.480 ****** ", "changed: [ceph-0] => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "", "TASK [ceph-config : generate ceph.conf configuration file] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:84", "Friday 21 September 2018 08:28:31 -0400 (0:00:00.327) 0:02:13.808 ****** ", "NOTIFIED HANDLER ceph-defaults : set _mon_handler_called before restart for ceph-0", "NOTIFIED HANDLER ceph-defaults : copy mon restart script for ceph-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mon daemon(s) - non container for ceph-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mon daemon(s) - container for ceph-0", "NOTIFIED HANDLER ceph-defaults : set _mon_handler_called after restart for ceph-0", "NOTIFIED HANDLER ceph-defaults : set _osd_handler_called before restart for ceph-0", "NOTIFIED HANDLER ceph-defaults : copy osd restart script for ceph-0", "NOTIFIED HANDLER ceph-defaults : restart ceph osds daemon(s) - non container for ceph-0", "NOTIFIED HANDLER ceph-defaults : restart ceph osds daemon(s) - container for ceph-0", "NOTIFIED HANDLER ceph-defaults : set _osd_handler_called after restart for ceph-0", "NOTIFIED HANDLER ceph-defaults : set _mds_handler_called before restart for ceph-0", "NOTIFIED HANDLER ceph-defaults : copy mds restart script for ceph-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mds 
daemon(s) - non container for ceph-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mds daemon(s) - container for ceph-0", "NOTIFIED HANDLER ceph-defaults : set _mds_handler_called after restart for ceph-0", "NOTIFIED HANDLER ceph-defaults : set _rgw_handler_called before restart for ceph-0", "NOTIFIED HANDLER ceph-defaults : copy rgw restart script for ceph-0", "NOTIFIED HANDLER ceph-defaults : restart ceph rgw daemon(s) - non container for ceph-0", "NOTIFIED HANDLER ceph-defaults : restart ceph rgw daemon(s) - container for ceph-0", "NOTIFIED HANDLER ceph-defaults : set _rgw_handler_called after restart for ceph-0", "NOTIFIED HANDLER ceph-defaults : set _mgr_handler_called before restart for ceph-0", "NOTIFIED HANDLER ceph-defaults : copy mgr restart script for ceph-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - non container for ceph-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - container for ceph-0", "NOTIFIED HANDLER ceph-defaults : set _mgr_handler_called after restart for ceph-0", "NOTIFIED HANDLER ceph-defaults : set _rbdmirror_handler_called before restart for ceph-0", "NOTIFIED HANDLER ceph-defaults : copy rbd mirror restart script for ceph-0", "NOTIFIED HANDLER ceph-defaults : restart ceph rbd mirror daemon(s) - non container for ceph-0", "NOTIFIED HANDLER ceph-defaults : restart ceph rbd mirror daemon(s) - container for ceph-0", "NOTIFIED HANDLER ceph-defaults : set _rbdmirror_handler_called after restart for ceph-0", "changed: [ceph-0] => {\"changed\": true, \"checksum\": \"405e62fe566533b00313a76f366c912348a265e6\", \"dest\": \"/etc/ceph/ceph.conf\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"f7a4e6d34b91a8adf314d24533355d85\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 1213, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1537532911.75-113649126647026/source\", \"state\": \"file\", \"uid\": 0}", "", "TASK [ceph-config : set fsid fact when generate_fsid = true] *******************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:102", "Friday 21 September 2018 08:28:33 -0400 (0:00:02.165) 0:02:15.974 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : make sure public_network configured] **************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:2", "Friday 21 September 2018 08:28:33 -0400 (0:00:00.050) 0:02:16.024 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : make sure cluster_network configured] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:8", "Friday 21 September 2018 08:28:33 -0400 (0:00:00.048) 0:02:16.073 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : make sure journal_size configured] ****************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:15", "Friday 21 September 2018 08:28:34 -0400 (0:00:00.049) 0:02:16.123 ****** ", "ok: [ceph-0] => {", " \"msg\": \"WARNING: journal_size is configured to 512, which is less than 5GB. 
This is not recommended and can lead to severe issues.\"", "}", "", "TASK [ceph-osd : make sure an osd scenario was chosen] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:23", "Friday 21 September 2018 08:28:34 -0400 (0:00:00.095) 0:02:16.218 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : make sure a valid osd scenario was chosen] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:31", "Friday 21 September 2018 08:28:34 -0400 (0:00:00.052) 0:02:16.271 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : verify devices have been provided] ****************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:39", "Friday 21 September 2018 08:28:34 -0400 (0:00:00.054) 0:02:16.326 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : check if osd_scenario lvm is supported by the selected ceph version] ***", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:49", "Friday 21 September 2018 08:28:34 -0400 (0:00:00.058) 0:02:16.385 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : verify lvm_volumes have been provided] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:59", "Friday 21 September 2018 08:28:34 -0400 (0:00:00.053) 0:02:16.439 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : make sure the lvm_volumes variable is a list] *****************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:69", "Friday 21 September 2018 08:28:34 -0400 (0:00:00.051) 0:02:16.490 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : make sure the devices variable is a list] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:79", "Friday 21 September 2018 08:28:34 -0400 (0:00:00.068) 0:02:16.559 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : verify dedicated devices have been provided] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:88", "Friday 21 September 2018 08:28:34 -0400 (0:00:00.058) 0:02:16.617 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : make sure the dedicated_devices variable is a list] ***********", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:98", "Friday 21 September 2018 08:28:34 -0400 (0:00:00.056) 0:02:16.673 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : check if bluestore is supported by the selected ceph version] ***", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:109", "Friday 21 September 2018 08:28:34 -0400 (0:00:00.054) 0:02:16.728 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was 
False\"}", "", "TASK [ceph-osd : include system_tuning.yml] ************************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:5", "Friday 21 September 2018 08:28:34 -0400 (0:00:00.059) 0:02:16.787 ****** ", "included: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml for ceph-0", "", "TASK [ceph-osd : disable osd directory parsing by updatedb] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:2", "Friday 21 September 2018 08:28:34 -0400 (0:00:00.093) 0:02:16.880 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : disable osd directory path in updatedb.conf] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:11", "Friday 21 September 2018 08:28:34 -0400 (0:00:00.052) 0:02:16.933 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : create tmpfiles.d directory] **********************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:22", "Friday 21 September 2018 08:28:34 -0400 (0:00:00.052) 0:02:16.985 ****** ", "ok: [ceph-0] => {\"changed\": false, \"gid\": 0, \"group\": \"root\", \"mode\": \"0755\", \"owner\": \"root\", \"path\": \"/etc/tmpfiles.d\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 0}", "", "TASK [ceph-osd : disable transparent hugepage] *********************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:33", "Friday 21 September 2018 08:28:35 -0400 (0:00:00.233) 0:02:17.218 ****** ", "changed: [ceph-0] => {\"changed\": true, \"checksum\": \"e000059a4cfd8ce350b13f14305a46eaf99849ba\", \"dest\": \"/etc/tmpfiles.d/ceph_transparent_hugepage.conf\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"21ac872f3aa1fb44b01d4f7ab00a35fc\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 158, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1537532915.28-40963815083215/source\", \"state\": \"file\", \"uid\": 0}", "", "TASK [ceph-osd : get default vm.min_free_kbytes] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:45", "Friday 21 September 2018 08:28:35 -0400 (0:00:00.651) 0:02:17.870 ****** ", "ok: [ceph-0] => {\"changed\": false, \"cmd\": [\"sysctl\", \"-b\", \"vm.min_free_kbytes\"], \"delta\": \"0:00:00.004872\", \"end\": \"2018-09-21 12:28:36.065392\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-09-21 12:28:36.060520\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"67584\", \"stdout_lines\": [\"67584\"]}", "", "TASK [ceph-osd : set_fact vm_min_free_kbytes] **********************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:52", "Friday 21 September 2018 08:28:36 -0400 (0:00:00.351) 0:02:18.221 ****** ", "ok: [ceph-0] => {\"ansible_facts\": {\"vm_min_free_kbytes\": \"67584\"}, \"changed\": false}", "", "TASK [ceph-osd : apply operating system tuning] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:56", "Friday 21 September 2018 08:28:36 -0400 (0:00:00.198) 0:02:18.420 ****** ", "changed: [ceph-0] => (item={u'enable': u\"(osd_objectstore == 'bluestore')\", u'name': u'fs.aio-max-nr', u'value': u'1048576'}) => {\"changed\": true, 
\"item\": {\"enable\": \"(osd_objectstore == 'bluestore')\", \"name\": \"fs.aio-max-nr\", \"value\": \"1048576\"}}", "changed: [ceph-0] => (item={u'name': u'fs.file-max', u'value': 26234859}) => {\"changed\": true, \"item\": {\"name\": \"fs.file-max\", \"value\": 26234859}}", "changed: [ceph-0] => (item={u'name': u'vm.zone_reclaim_mode', u'value': 0}) => {\"changed\": true, \"item\": {\"name\": \"vm.zone_reclaim_mode\", \"value\": 0}}", "changed: [ceph-0] => (item={u'name': u'vm.swappiness', u'value': 10}) => {\"changed\": true, \"item\": {\"name\": \"vm.swappiness\", \"value\": 10}}", "changed: [ceph-0] => (item={u'name': u'vm.min_free_kbytes', u'value': u'67584'}) => {\"changed\": true, \"item\": {\"name\": \"vm.min_free_kbytes\", \"value\": \"67584\"}}", "", "TASK [ceph-osd : install dependencies] *****************************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:10", "Friday 21 September 2018 08:28:37 -0400 (0:00:01.168) 0:02:19.588 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : include common.yml] *******************************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:18", "Friday 21 September 2018 08:28:37 -0400 (0:00:00.050) 0:02:19.639 ****** ", "included: /usr/share/ceph-ansible/roles/ceph-osd/tasks/common.yml for ceph-0", "", "TASK [ceph-osd : create bootstrap-osd and osd directories] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/common.yml:2", "Friday 21 September 2018 08:28:37 -0400 (0:00:00.171) 0:02:19.810 ****** ", "changed: [ceph-0] => (item=/var/lib/ceph/bootstrap-osd/) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-osd/\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-osd/\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "ok: [ceph-0] => (item=/var/lib/ceph/osd/) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/osd/\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/osd/\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "", "TASK [ceph-osd : copy ceph key(s) if needed] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/common.yml:15", "Friday 21 September 2018 08:28:38 -0400 (0:00:00.408) 0:02:20.219 ****** ", "changed: [ceph-0] => (item={u'name': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', u'copy_key': True}) => {\"changed\": true, \"checksum\": \"64333848b27ab8d9f98e1749b646f53ce8491e92\", \"dest\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"gid\": 167, \"group\": \"167\", \"item\": {\"copy_key\": true, \"name\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\"}, \"md5sum\": \"d0dcfd5572ae39eb0ce251488182ec1b\", \"mode\": \"0600\", \"owner\": \"167\", \"secontext\": \"system_u:object_r:var_lib_t:s0\", \"size\": 113, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1537532918.16-87817898207575/source\", \"state\": \"file\", \"uid\": 167}", "skipping: [ceph-0] => (item={u'name': u'/etc/ceph/ceph.client.admin.keyring', u'copy_key': False}) => {\"changed\": false, \"item\": {\"copy_key\": false, \"name\": \"/etc/ceph/ceph.client.admin.keyring\"}, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : set_fact ceph_disk_cli_options '--cluster ceph --bluestore'] ***", 
"task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:2", "Friday 21 September 2018 08:28:38 -0400 (0:00:00.534) 0:02:20.753 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : set_fact ceph_disk_cli_options 'ceph_disk_cli_options'] *******", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:11", "Friday 21 September 2018 08:28:38 -0400 (0:00:00.048) 0:02:20.802 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : set_fact ceph_disk_cli_options '--cluster ceph'] **************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:20", "Friday 21 September 2018 08:28:38 -0400 (0:00:00.069) 0:02:20.871 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : set_fact ceph_disk_cli_options '--cluster ceph --bluestore --dmcrypt'] ***", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:29", "Friday 21 September 2018 08:28:38 -0400 (0:00:00.059) 0:02:20.930 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : set_fact ceph_disk_cli_options '--cluster ceph --filestore --dmcrypt'] ***", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:38", "Friday 21 September 2018 08:28:38 -0400 (0:00:00.051) 0:02:20.981 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : set_fact ceph_disk_cli_options '--cluster ceph --dmcrypt'] ****", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:47", "Friday 21 September 2018 08:28:38 -0400 (0:00:00.052) 0:02:21.034 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : set_fact docker_env_args '-e KV_TYPE=etcd -e KV_IP=127.0.0.1 -e KV_PORT=2379'] ***", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:56", "Friday 21 September 2018 08:28:38 -0400 (0:00:00.054) 0:02:21.089 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : set_fact docker_env_args '-e OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=0'] ***", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:62", "Friday 21 September 2018 08:28:39 -0400 (0:00:00.048) 0:02:21.137 ****** ", "ok: [ceph-0] => {\"ansible_facts\": {\"docker_env_args\": \"-e OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=0\"}, \"changed\": false}", "", "TASK [ceph-osd : set_fact docker_env_args '-e OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=1'] ***", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:70", "Friday 21 September 2018 08:28:39 -0400 (0:00:00.085) 0:02:21.222 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : set_fact docker_env_args '-e OSD_BLUESTORE=1 -e OSD_FILESTORE=0 -e OSD_DMCRYPT=0'] ***", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:78", "Friday 21 September 2018 08:28:39 -0400 (0:00:00.053) 0:02:21.275 ****** ", "skipping: [ceph-0] => 
{\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : set_fact docker_env_args '-e OSD_BLUESTORE=1 -e OSD_FILESTORE=0 -e OSD_DMCRYPT=1'] ***", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:86", "Friday 21 September 2018 08:28:39 -0400 (0:00:00.046) 0:02:21.322 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : set_fact devices generate device list when osd_auto_discovery] ***", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/build_devices.yml:2", "Friday 21 September 2018 08:28:39 -0400 (0:00:00.048) 0:02:21.370 ****** ", "skipping: [ceph-0] => (item={'value': {u'scheduler_mode': u'mq-deadline', u'rotational': u'1', u'vendor': u'0x1af4', u'sectors': u'20971520', u'links': {u'masters': [], u'labels': [], u'ids': [], u'uuids': []}, u'sas_device_handle': None, u'sas_address': None, u'virtual': 1, u'host': u'SCSI storage controller: Red Hat, Inc. Virtio block device', u'sectorsize': u'512', u'removable': u'0', u'support_discard': u'0', u'model': None, u'partitions': {u'vda1': {u'sectorsize': 512, u'uuid': u'2018-09-21-08-09-59-00', u'links': {u'masters': [], u'labels': [u'config-2'], u'ids': [], u'uuids': [u'2018-09-21-08-09-59-00']}, u'sectors': u'2048', u'start': u'2048', u'holders': [], u'size': u'1.00 MB'}, u'vda2': {u'sectorsize': 512, u'uuid': u'db072aa5-689e-4872-9a7a-742ec4624465', u'links': {u'masters': [], u'labels': [u'img-rootfs'], u'ids': [], u'uuids': [u'db072aa5-689e-4872-9a7a-742ec4624465']}, u'sectors': u'20967391', u'start': u'4096', u'holders': [], u'size': u'10.00 GB'}}, u'holders': [], u'size': u'10.00 GB'}, 'key': u'vda'}) => {\"changed\": false, \"item\": {\"key\": \"vda\", \"value\": {\"holders\": [], \"host\": \"SCSI storage controller: Red Hat, Inc. Virtio block device\", \"links\": {\"ids\": [], \"labels\": [], \"masters\": [], \"uuids\": []}, \"model\": null, \"partitions\": {\"vda1\": {\"holders\": [], \"links\": {\"ids\": [], \"labels\": [\"config-2\"], \"masters\": [], \"uuids\": [\"2018-09-21-08-09-59-00\"]}, \"sectors\": \"2048\", \"sectorsize\": 512, \"size\": \"1.00 MB\", \"start\": \"2048\", \"uuid\": \"2018-09-21-08-09-59-00\"}, \"vda2\": {\"holders\": [], \"links\": {\"ids\": [], \"labels\": [\"img-rootfs\"], \"masters\": [], \"uuids\": [\"db072aa5-689e-4872-9a7a-742ec4624465\"]}, \"sectors\": \"20967391\", \"sectorsize\": 512, \"size\": \"10.00 GB\", \"start\": \"4096\", \"uuid\": \"db072aa5-689e-4872-9a7a-742ec4624465\"}}, \"removable\": \"0\", \"rotational\": \"1\", \"sas_address\": null, \"sas_device_handle\": null, \"scheduler_mode\": \"mq-deadline\", \"sectors\": \"20971520\", \"sectorsize\": \"512\", \"size\": \"10.00 GB\", \"support_discard\": \"0\", \"vendor\": \"0x1af4\", \"virtual\": 1}}, \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item={'value': {u'scheduler_mode': u'mq-deadline', u'rotational': u'1', u'vendor': u'0x1af4', u'sectors': u'23068672', u'links': {u'masters': [], u'labels': [], u'ids': [], u'uuids': []}, u'sas_device_handle': None, u'sas_address': None, u'virtual': 1, u'host': u'SCSI storage controller: Red Hat, Inc. Virtio block device', u'sectorsize': u'512', u'removable': u'0', u'support_discard': u'0', u'model': None, u'partitions': {}, u'holders': [], u'size': u'11.00 GB'}, 'key': u'vdc'}) => {\"changed\": false, \"item\": {\"key\": \"vdc\", \"value\": {\"holders\": [], \"host\": \"SCSI storage controller: Red Hat, Inc. 
Virtio block device\", \"links\": {\"ids\": [], \"labels\": [], \"masters\": [], \"uuids\": []}, \"model\": null, \"partitions\": {}, \"removable\": \"0\", \"rotational\": \"1\", \"sas_address\": null, \"sas_device_handle\": null, \"scheduler_mode\": \"mq-deadline\", \"sectors\": \"23068672\", \"sectorsize\": \"512\", \"size\": \"11.00 GB\", \"support_discard\": \"0\", \"vendor\": \"0x1af4\", \"virtual\": 1}}, \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item={'value': {u'scheduler_mode': u'mq-deadline', u'rotational': u'1', u'vendor': u'0x1af4', u'sectors': u'23068672', u'links': {u'masters': [], u'labels': [], u'ids': [], u'uuids': []}, u'sas_device_handle': None, u'sas_address': None, u'virtual': 1, u'host': u'SCSI storage controller: Red Hat, Inc. Virtio block device', u'sectorsize': u'512', u'removable': u'0', u'support_discard': u'0', u'model': None, u'partitions': {}, u'holders': [], u'size': u'11.00 GB'}, 'key': u'vdb'}) => {\"changed\": false, \"item\": {\"key\": \"vdb\", \"value\": {\"holders\": [], \"host\": \"SCSI storage controller: Red Hat, Inc. Virtio block device\", \"links\": {\"ids\": [], \"labels\": [], \"masters\": [], \"uuids\": []}, \"model\": null, \"partitions\": {}, \"removable\": \"0\", \"rotational\": \"1\", \"sas_address\": null, \"sas_device_handle\": null, \"scheduler_mode\": \"mq-deadline\", \"sectors\": \"23068672\", \"sectorsize\": \"512\", \"size\": \"11.00 GB\", \"support_discard\": \"0\", \"vendor\": \"0x1af4\", \"virtual\": 1}}, \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item={'value': {u'scheduler_mode': u'mq-deadline', u'rotational': u'1', u'vendor': u'0x1af4', u'sectors': u'23068672', u'links': {u'masters': [], u'labels': [], u'ids': [], u'uuids': []}, u'sas_device_handle': None, u'sas_address': None, u'virtual': 1, u'host': u'SCSI storage controller: Red Hat, Inc. Virtio block device', u'sectorsize': u'512', u'removable': u'0', u'support_discard': u'0', u'model': None, u'partitions': {}, u'holders': [], u'size': u'11.00 GB'}, 'key': u'vde'}) => {\"changed\": false, \"item\": {\"key\": \"vde\", \"value\": {\"holders\": [], \"host\": \"SCSI storage controller: Red Hat, Inc. Virtio block device\", \"links\": {\"ids\": [], \"labels\": [], \"masters\": [], \"uuids\": []}, \"model\": null, \"partitions\": {}, \"removable\": \"0\", \"rotational\": \"1\", \"sas_address\": null, \"sas_device_handle\": null, \"scheduler_mode\": \"mq-deadline\", \"sectors\": \"23068672\", \"sectorsize\": \"512\", \"size\": \"11.00 GB\", \"support_discard\": \"0\", \"vendor\": \"0x1af4\", \"virtual\": 1}}, \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item={'value': {u'scheduler_mode': u'mq-deadline', u'rotational': u'1', u'vendor': u'0x1af4', u'sectors': u'23068672', u'links': {u'masters': [], u'labels': [], u'ids': [], u'uuids': []}, u'sas_device_handle': None, u'sas_address': None, u'virtual': 1, u'host': u'SCSI storage controller: Red Hat, Inc. Virtio block device', u'sectorsize': u'512', u'removable': u'0', u'support_discard': u'0', u'model': None, u'partitions': {}, u'holders': [], u'size': u'11.00 GB'}, 'key': u'vdd'}) => {\"changed\": false, \"item\": {\"key\": \"vdd\", \"value\": {\"holders\": [], \"host\": \"SCSI storage controller: Red Hat, Inc. 
Virtio block device\", \"links\": {\"ids\": [], \"labels\": [], \"masters\": [], \"uuids\": []}, \"model\": null, \"partitions\": {}, \"removable\": \"0\", \"rotational\": \"1\", \"sas_address\": null, \"sas_device_handle\": null, \"scheduler_mode\": \"mq-deadline\", \"sectors\": \"23068672\", \"sectorsize\": \"512\", \"size\": \"11.00 GB\", \"support_discard\": \"0\", \"vendor\": \"0x1af4\", \"virtual\": 1}}, \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item={'value': {u'scheduler_mode': u'mq-deadline', u'rotational': u'1', u'vendor': u'0x1af4', u'sectors': u'23068672', u'links': {u'masters': [], u'labels': [], u'ids': [], u'uuids': []}, u'sas_device_handle': None, u'sas_address': None, u'virtual': 1, u'host': u'SCSI storage controller: Red Hat, Inc. Virtio block device', u'sectorsize': u'512', u'removable': u'0', u'support_discard': u'0', u'model': None, u'partitions': {}, u'holders': [], u'size': u'11.00 GB'}, 'key': u'vdf'}) => {\"changed\": false, \"item\": {\"key\": \"vdf\", \"value\": {\"holders\": [], \"host\": \"SCSI storage controller: Red Hat, Inc. Virtio block device\", \"links\": {\"ids\": [], \"labels\": [], \"masters\": [], \"uuids\": []}, \"model\": null, \"partitions\": {}, \"removable\": \"0\", \"rotational\": \"1\", \"sas_address\": null, \"sas_device_handle\": null, \"scheduler_mode\": \"mq-deadline\", \"sectors\": \"23068672\", \"sectorsize\": \"512\", \"size\": \"11.00 GB\", \"support_discard\": \"0\", \"vendor\": \"0x1af4\", \"virtual\": 1}}, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : resolve dedicated device link(s)] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/build_devices.yml:15", "Friday 21 September 2018 08:28:39 -0400 (0:00:00.103) 0:02:21.473 ****** ", "", "TASK [ceph-osd : set_fact build dedicated_devices from resolved symlinks] ******", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/build_devices.yml:24", "Friday 21 September 2018 08:28:39 -0400 (0:00:00.046) 0:02:21.520 ****** ", "", "TASK [ceph-osd : set_fact build final dedicated_devices list] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/build_devices.yml:32", "Friday 21 September 2018 08:28:39 -0400 (0:00:00.051) 0:02:21.571 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : read information about the devices] ***************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:29", "Friday 21 September 2018 08:28:39 -0400 (0:00:00.048) 0:02:21.619 ****** ", "ok: [ceph-0] => (item=/dev/vdb) => {\"changed\": false, \"disk\": {\"dev\": \"/dev/vdb\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"item\": \"/dev/vdb\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}", "ok: [ceph-0] => (item=/dev/vdc) => {\"changed\": false, \"disk\": {\"dev\": \"/dev/vdc\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"item\": \"/dev/vdc\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}", "ok: [ceph-0] => (item=/dev/vdd) => {\"changed\": false, \"disk\": {\"dev\": \"/dev/vdd\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"item\": \"/dev/vdd\", 
\"partitions\": [], \"script\": \"unit 'MiB' print\"}", "ok: [ceph-0] => (item=/dev/vde) => {\"changed\": false, \"disk\": {\"dev\": \"/dev/vde\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"item\": \"/dev/vde\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}", "ok: [ceph-0] => (item=/dev/vdf) => {\"changed\": false, \"disk\": {\"dev\": \"/dev/vdf\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"item\": \"/dev/vdf\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}", "", "TASK [ceph-osd : check the partition status of the osd disks] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_gpt.yml:2", "Friday 21 September 2018 08:28:40 -0400 (0:00:01.149) 0:02:22.769 ****** ", "ok: [ceph-0] => (item=/dev/vdb) => {\"changed\": false, \"cmd\": [\"blkid\", \"-t\", \"PTTYPE=gpt\", \"/dev/vdb\"], \"delta\": \"0:00:00.007086\", \"end\": \"2018-09-21 12:28:40.858573\", \"failed_when_result\": false, \"item\": \"/dev/vdb\", \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-09-21 12:28:40.851487\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "ok: [ceph-0] => (item=/dev/vdc) => {\"changed\": false, \"cmd\": [\"blkid\", \"-t\", \"PTTYPE=gpt\", \"/dev/vdc\"], \"delta\": \"0:00:00.007443\", \"end\": \"2018-09-21 12:28:41.031081\", \"failed_when_result\": false, \"item\": \"/dev/vdc\", \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-09-21 12:28:41.023638\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "ok: [ceph-0] => (item=/dev/vdd) => {\"changed\": false, \"cmd\": [\"blkid\", \"-t\", \"PTTYPE=gpt\", \"/dev/vdd\"], \"delta\": \"0:00:00.006857\", \"end\": \"2018-09-21 12:28:41.197648\", \"failed_when_result\": false, \"item\": \"/dev/vdd\", \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-09-21 12:28:41.190791\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "ok: [ceph-0] => (item=/dev/vde) => {\"changed\": false, \"cmd\": [\"blkid\", \"-t\", \"PTTYPE=gpt\", \"/dev/vde\"], \"delta\": \"0:00:00.006872\", \"end\": \"2018-09-21 12:28:41.357694\", \"failed_when_result\": false, \"item\": \"/dev/vde\", \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-09-21 12:28:41.350822\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "ok: [ceph-0] => (item=/dev/vdf) => {\"changed\": false, \"cmd\": [\"blkid\", \"-t\", \"PTTYPE=gpt\", \"/dev/vdf\"], \"delta\": \"0:00:00.007263\", \"end\": \"2018-09-21 12:28:41.521074\", \"failed_when_result\": false, \"item\": \"/dev/vdf\", \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-09-21 12:28:41.513811\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-osd : create gpt disk label] ****************************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_gpt.yml:11", "Friday 21 September 2018 08:28:41 -0400 (0:00:00.902) 0:02:23.672 ****** ", "ok: [ceph-0] => (item=[{'_ansible_parsed': True, 'stderr_lines': [], u'cmd': [u'blkid', u'-t', u'PTTYPE=gpt', u'/dev/vdb'], u'end': u'2018-09-21 12:28:40.858573', '_ansible_no_log': False, u'stdout': u'', '_ansible_item_result': True, u'changed': False, u'invocation': {u'module_args': {u'creates': 
None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'blkid -t PTTYPE=\"gpt\" /dev/vdb', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, 'item': u'/dev/vdb', u'delta': u'0:00:00.007086', '_ansible_item_label': u'/dev/vdb', u'stderr': u'', u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'start': u'2018-09-21 12:28:40.851487', '_ansible_ignore_errors': None, u'failed': False}, u'/dev/vdb']) => {\"changed\": false, \"cmd\": [\"parted\", \"-s\", \"/dev/vdb\", \"mklabel\", \"gpt\"], \"delta\": \"0:00:00.008661\", \"end\": \"2018-09-21 12:28:41.771110\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdb\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"cmd\": [\"blkid\", \"-t\", \"PTTYPE=gpt\", \"/dev/vdb\"], \"delta\": \"0:00:00.007086\", \"end\": \"2018-09-21 12:28:40.858573\", \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"blkid -t PTTYPE=\\\"gpt\\\" /dev/vdb\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": \"/dev/vdb\", \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-09-21 12:28:40.851487\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}, \"/dev/vdb\"], \"rc\": 0, \"start\": \"2018-09-21 12:28:41.762449\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "ok: [ceph-0] => (item=[{'_ansible_parsed': True, 'stderr_lines': [], u'cmd': [u'blkid', u'-t', u'PTTYPE=gpt', u'/dev/vdc'], u'end': u'2018-09-21 12:28:41.031081', '_ansible_no_log': False, u'stdout': u'', '_ansible_item_result': True, u'changed': False, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'blkid -t PTTYPE=\"gpt\" /dev/vdc', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, 'item': u'/dev/vdc', u'delta': u'0:00:00.007443', '_ansible_item_label': u'/dev/vdc', u'stderr': u'', u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'start': u'2018-09-21 12:28:41.023638', '_ansible_ignore_errors': None, u'failed': False}, u'/dev/vdc']) => {\"changed\": false, \"cmd\": [\"parted\", \"-s\", \"/dev/vdc\", \"mklabel\", \"gpt\"], \"delta\": \"0:00:00.008585\", \"end\": \"2018-09-21 12:28:41.951415\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdc\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"cmd\": [\"blkid\", \"-t\", \"PTTYPE=gpt\", \"/dev/vdc\"], \"delta\": \"0:00:00.007443\", \"end\": \"2018-09-21 12:28:41.031081\", \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"blkid -t PTTYPE=\\\"gpt\\\" /dev/vdc\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": \"/dev/vdc\", \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-09-21 12:28:41.023638\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}, \"/dev/vdc\"], \"rc\": 0, \"start\": \"2018-09-21 12:28:41.942830\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "ok: [ceph-0] => (item=[{'_ansible_parsed': True, 'stderr_lines': [], u'cmd': [u'blkid', 
u'-t', u'PTTYPE=gpt', u'/dev/vdd'], u'end': u'2018-09-21 12:28:41.197648', '_ansible_no_log': False, u'stdout': u'', '_ansible_item_result': True, u'changed': False, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'blkid -t PTTYPE=\"gpt\" /dev/vdd', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, 'item': u'/dev/vdd', u'delta': u'0:00:00.006857', '_ansible_item_label': u'/dev/vdd', u'stderr': u'', u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'start': u'2018-09-21 12:28:41.190791', '_ansible_ignore_errors': None, u'failed': False}, u'/dev/vdd']) => {\"changed\": false, \"cmd\": [\"parted\", \"-s\", \"/dev/vdd\", \"mklabel\", \"gpt\"], \"delta\": \"0:00:00.008970\", \"end\": \"2018-09-21 12:28:42.130162\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdd\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"cmd\": [\"blkid\", \"-t\", \"PTTYPE=gpt\", \"/dev/vdd\"], \"delta\": \"0:00:00.006857\", \"end\": \"2018-09-21 12:28:41.197648\", \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"blkid -t PTTYPE=\\\"gpt\\\" /dev/vdd\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": \"/dev/vdd\", \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-09-21 12:28:41.190791\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}, \"/dev/vdd\"], \"rc\": 0, \"start\": \"2018-09-21 12:28:42.121192\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "ok: [ceph-0] => (item=[{'_ansible_parsed': True, 'stderr_lines': [], u'cmd': [u'blkid', u'-t', u'PTTYPE=gpt', u'/dev/vde'], u'end': u'2018-09-21 12:28:41.357694', '_ansible_no_log': False, u'stdout': u'', '_ansible_item_result': True, u'changed': False, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'blkid -t PTTYPE=\"gpt\" /dev/vde', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, 'item': u'/dev/vde', u'delta': u'0:00:00.006872', '_ansible_item_label': u'/dev/vde', u'stderr': u'', u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'start': u'2018-09-21 12:28:41.350822', '_ansible_ignore_errors': None, u'failed': False}, u'/dev/vde']) => {\"changed\": false, \"cmd\": [\"parted\", \"-s\", \"/dev/vde\", \"mklabel\", \"gpt\"], \"delta\": \"0:00:00.008350\", \"end\": \"2018-09-21 12:28:42.313674\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vde\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"cmd\": [\"blkid\", \"-t\", \"PTTYPE=gpt\", \"/dev/vde\"], \"delta\": \"0:00:00.006872\", \"end\": \"2018-09-21 12:28:41.357694\", \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"blkid -t PTTYPE=\\\"gpt\\\" /dev/vde\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": \"/dev/vde\", \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-09-21 12:28:41.350822\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}, \"/dev/vde\"], \"rc\": 0, 
\"start\": \"2018-09-21 12:28:42.305324\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "ok: [ceph-0] => (item=[{'_ansible_parsed': True, 'stderr_lines': [], u'cmd': [u'blkid', u'-t', u'PTTYPE=gpt', u'/dev/vdf'], u'end': u'2018-09-21 12:28:41.521074', '_ansible_no_log': False, u'stdout': u'', '_ansible_item_result': True, u'changed': False, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'blkid -t PTTYPE=\"gpt\" /dev/vdf', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, 'item': u'/dev/vdf', u'delta': u'0:00:00.007263', '_ansible_item_label': u'/dev/vdf', u'stderr': u'', u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'start': u'2018-09-21 12:28:41.513811', '_ansible_ignore_errors': None, u'failed': False}, u'/dev/vdf']) => {\"changed\": false, \"cmd\": [\"parted\", \"-s\", \"/dev/vdf\", \"mklabel\", \"gpt\"], \"delta\": \"0:00:00.008257\", \"end\": \"2018-09-21 12:28:42.491823\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdf\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"cmd\": [\"blkid\", \"-t\", \"PTTYPE=gpt\", \"/dev/vdf\"], \"delta\": \"0:00:00.007263\", \"end\": \"2018-09-21 12:28:41.521074\", \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"blkid -t PTTYPE=\\\"gpt\\\" /dev/vdf\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": \"/dev/vdf\", \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-09-21 12:28:41.513811\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}, \"/dev/vdf\"], \"rc\": 0, \"start\": \"2018-09-21 12:28:42.483566\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-osd : include scenarios/collocated.yml] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:41", "Friday 21 September 2018 08:28:42 -0400 (0:00:00.985) 0:02:24.657 ****** ", "included: /usr/share/ceph-ansible/roles/ceph-osd/tasks/scenarios/collocated.yml for ceph-0", "", "TASK [ceph-osd : prepare ceph containerized osd disk collocated] ***************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/scenarios/collocated.yml:5", "Friday 21 September 2018 08:28:42 -0400 (0:00:00.082) 0:02:24.740 ****** ", "changed: [ceph-0] => (item=[{'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_item_label': u'/dev/vdb', u'script': u\"unit 'MiB' print\", '_ansible_no_log': False, u'changed': False, 'failed': False, 'item': u'/dev/vdb', u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vdb', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vdb', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 11264.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vdb']) => {\"changed\": true, \"cmd\": \"docker run --net=host --pid=host --privileged=true --name=ceph-osd-prepare-ceph-0-vdb -v /etc/ceph:/etc/ceph:z -v /var/lib/ceph/:/var/lib/ceph/:z -v /dev:/dev -v /etc/localtime:/etc/localtime:ro -e 
DEBUG=verbose -e CLUSTER=ceph -e CEPH_DAEMON=OSD_CEPH_DISK_PREPARE -e OSD_DEVICE=/dev/vdb -e OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=0 -e OSD_JOURNAL_SIZE=512 192.168.24.1:8787/rhceph:3-12\", \"delta\": \"0:00:06.957095\", \"end\": \"2018-09-21 12:28:49.786537\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdb\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vdb\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vdb\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", \"part_type\": \"primary\", \"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vdb\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vdb\"], \"rc\": 0, \"start\": \"2018-09-21 12:28:42.829442\", \"stderr\": \"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\\n+/entrypoint.sh:26: source /config.static.sh\\n++/config.static.sh:2: set -e\\n++/entrypoint.sh:36: to_lowercase OSD_CEPH_DISK_PREPARE\\n++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\\n+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\\n+/entrypoint.sh:38: create_mandatory_directories\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rbd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rbd\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): 
mkdir -p /var/lib/ceph/mds\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\\n+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\\n+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\\n+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\\n+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\\n+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\\n+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\\n+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. '{}' ';'\\n+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\\n+/entrypoint.sh:78: source start_osd.sh\\n++start_osd.sh:2: set -e\\n++start_osd.sh:4: is_redhat\\n++common_functions.sh:211: is_redhat(): get_package_manager\\n++common_functions.sh:196: get_package_manager(): is_available rpm\\n++common_functions.sh:47: is_available(): command -v rpm\\n++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\\n++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\\n++start_osd.sh:5: source /etc/sysconfig/ceph\\n+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\\n+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\\n+/entrypoint.sh:79: OSD_TYPE=prepare\\n+/entrypoint.sh:80: start_osd\\n+start_osd.sh:11: start_osd(): get_config\\n+/config.static.sh:114: get_config(): log 'static: does not generate config'\\n+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\\n+common_functions.sh:11: log(): local timestamp\\n++common_functions.sh:12: log(): date '+%F %T'\\n+common_functions.sh:12: log(): timestamp='2018-09-21 12:28:43'\\n+common_functions.sh:13: log(): echo '2018-09-21 12:28:43 /entrypoint.sh: static: does not generate config'\\n+common_functions.sh:14: log(): return 0\\n+start_osd.sh:12: start_osd(): check_config\\n+common_functions.sh:19: check_config(): [[ ! -e /etc/ceph/ceph.conf ]]\\n+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\\n+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\\n+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\\n++osd_disk_prepare.sh:2: source(): set -e\\n+start_osd.sh:34: start_osd(): osd_disk_prepare\\n+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vdb ]]\\n+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vdb ]]\\n+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' 
-e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\\n+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\\n+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vdb print\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\\n+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\\n+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\\n+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\\n+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid 0b62174e-f684-4a6d-bc2d-fff315b60dee /dev/vdb\\ncommand: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\\ncommand: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nset_type: Will colocate journal with data on /dev/vdb\\ncommand: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. 
--lookup osd_mount_options_xfs\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nptype_tobe_for_name: name = journal\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\ncreate_partition: Creating journal partition num 2 size 512 on /dev/vdb\\ncommand_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:0b62174e-f684-4a6d-bc2d-fff315b60dee --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vdb\\nupdate_partition: Calling partprobe on created device /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb2 uuid path is /sys/dev/block/252:18/dm/uuid\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/0b62174e-f684-4a6d-bc2d-fff315b60dee\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vdb\\nupdate_partition: Calling partprobe on prepared device /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/0b62174e-f684-4a6d-bc2d-fff315b60dee\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nset_data_partition: Creating osd partition on /dev/vdb\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nptype_tobe_for_name: name = data\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\ncreate_partition: Creating data partition num 1 size 0 on /dev/vdb\\ncommand_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:ec758399-cbe4-4b08-8b07-b0e37f81e386 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdb\\nupdate_partition: Calling partprobe on created device /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb1 uuid path is /sys/dev/block/252:17/dm/uuid\\npopulate_data_path_device: Creating xfs fs on /dev/vdb1\\ncommand_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vdb1\\nmount: Mounting /dev/vdb1 on /var/lib/ceph/tmp/mnt.FdntmT with options noatime,largeio,inode64,swalloc\\ncommand_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vdb1 /var/lib/ceph/tmp/mnt.FdntmT\\ncommand: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.FdntmT\\npopulate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.FdntmT\\ncommand: Running command: /usr/sbin/restorecon -R 
/var/lib/ceph/tmp/mnt.FdntmT/ceph_fsid.18899.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.FdntmT/ceph_fsid.18899.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.FdntmT/fsid.18899.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.FdntmT/fsid.18899.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.FdntmT/magic.18899.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.FdntmT/magic.18899.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.FdntmT/journal_uuid.18899.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.FdntmT/journal_uuid.18899.tmp\\nadjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.FdntmT/journal -> /dev/disk/by-partuuid/0b62174e-f684-4a6d-bc2d-fff315b60dee\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.FdntmT/type.18899.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.FdntmT/type.18899.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.FdntmT\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.FdntmT\\nunmount: Unmounting /var/lib/ceph/tmp/mnt.FdntmT\\ncommand_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.FdntmT\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdb\\nupdate_partition: Calling partprobe on prepared device /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vdb1\\n+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\\n+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\\n+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\\n+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\\n++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\\n++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdb2\\n+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdb2\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! 
-e /dev/vdb2 ]; do echo '\\\\''Waiting for /dev/vdb2 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\\n++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdb2\\n+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. /dev/vdb2\\n++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\\n++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdb1\\n+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdb1\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdb1 ]; do echo '\\\\''Waiting for /dev/vdb1 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\\n++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdb1\\n+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. /dev/vdb1\\n+/entrypoint.sh:189: exit 0\", \"stderr_lines\": [\"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\", \"+/entrypoint.sh:26: source /config.static.sh\", \"++/config.static.sh:2: set -e\", \"++/entrypoint.sh:36: to_lowercase OSD_CEPH_DISK_PREPARE\", \"++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\", \"+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\", \"+/entrypoint.sh:38: create_mandatory_directories\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname 
/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rbd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\", \"+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\", \"+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\", \"+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\", \"+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\", \"+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\", \"+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\", \"+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. '{}' ';'\", \"+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\", \"+/entrypoint.sh:78: source start_osd.sh\", \"++start_osd.sh:2: set -e\", \"++start_osd.sh:4: is_redhat\", \"++common_functions.sh:211: is_redhat(): get_package_manager\", \"++common_functions.sh:196: get_package_manager(): is_available rpm\", \"++common_functions.sh:47: is_available(): command -v rpm\", \"++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\", \"++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\", \"++start_osd.sh:5: source /etc/sysconfig/ceph\", \"+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\", \"+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\", \"+/entrypoint.sh:79: OSD_TYPE=prepare\", \"+/entrypoint.sh:80: start_osd\", \"+start_osd.sh:11: start_osd(): get_config\", \"+/config.static.sh:114: get_config(): log 'static: does not generate config'\", \"+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\", \"+common_functions.sh:11: log(): local timestamp\", \"++common_functions.sh:12: log(): date '+%F %T'\", \"+common_functions.sh:12: log(): timestamp='2018-09-21 12:28:43'\", \"+common_functions.sh:13: log(): echo '2018-09-21 12:28:43 /entrypoint.sh: static: does not generate config'\", \"+common_functions.sh:14: log(): return 0\", \"+start_osd.sh:12: start_osd(): check_config\", \"+common_functions.sh:19: check_config(): [[ ! 
-e /etc/ceph/ceph.conf ]]\", \"+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\", \"+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\", \"+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\", \"++osd_disk_prepare.sh:2: source(): set -e\", \"+start_osd.sh:34: start_osd(): osd_disk_prepare\", \"+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vdb ]]\", \"+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vdb ]]\", \"+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' -e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\", \"+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\", \"+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vdb print\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\", \"+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\", \"+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\", \"+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\", \"+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid 0b62174e-f684-4a6d-bc2d-fff315b60dee /dev/vdb\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\", \"command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"set_type: Will colocate journal with data on /dev/vdb\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. 
--lookup osd_mount_options_xfs\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"ptype_tobe_for_name: name = journal\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"create_partition: Creating journal partition num 2 size 512 on /dev/vdb\", \"command_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:0b62174e-f684-4a6d-bc2d-fff315b60dee --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vdb\", \"update_partition: Calling partprobe on created device /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb2 uuid path is /sys/dev/block/252:18/dm/uuid\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/0b62174e-f684-4a6d-bc2d-fff315b60dee\", \"command_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vdb\", \"update_partition: Calling partprobe on prepared device /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/0b62174e-f684-4a6d-bc2d-fff315b60dee\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"set_data_partition: Creating osd partition on /dev/vdb\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"ptype_tobe_for_name: name = data\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"create_partition: Creating data partition num 1 size 0 on /dev/vdb\", \"command_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:ec758399-cbe4-4b08-8b07-b0e37f81e386 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdb\", \"update_partition: Calling partprobe on created device /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb1 uuid path is /sys/dev/block/252:17/dm/uuid\", \"populate_data_path_device: Creating xfs fs on /dev/vdb1\", \"command_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vdb1\", \"mount: Mounting /dev/vdb1 on /var/lib/ceph/tmp/mnt.FdntmT with options noatime,largeio,inode64,swalloc\", \"command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vdb1 /var/lib/ceph/tmp/mnt.FdntmT\", \"command: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.FdntmT\", 
\"populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.FdntmT\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.FdntmT/ceph_fsid.18899.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.FdntmT/ceph_fsid.18899.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.FdntmT/fsid.18899.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.FdntmT/fsid.18899.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.FdntmT/magic.18899.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.FdntmT/magic.18899.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.FdntmT/journal_uuid.18899.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.FdntmT/journal_uuid.18899.tmp\", \"adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.FdntmT/journal -> /dev/disk/by-partuuid/0b62174e-f684-4a6d-bc2d-fff315b60dee\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.FdntmT/type.18899.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.FdntmT/type.18899.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.FdntmT\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.FdntmT\", \"unmount: Unmounting /var/lib/ceph/tmp/mnt.FdntmT\", \"command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.FdntmT\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"command_check_call: Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdb\", \"update_partition: Calling partprobe on prepared device /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vdb1\", \"+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\", \"+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\", \"+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\", \"+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\", \"++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\", \"++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdb2\", \"+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdb2\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! 
-e /dev/vdb2 ]; do echo '\\\\''Waiting for /dev/vdb2 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\", \"++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdb2\", \"+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. /dev/vdb2\", \"++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\", \"++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdb1\", \"+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdb1\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdb1 ]; do echo '\\\\''Waiting for /dev/vdb1 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\", \"++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdb1\", \"+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. /dev/vdb1\", \"+/entrypoint.sh:189: exit 0\"], \"stdout\": \"2018-09-21 12:28:43 /entrypoint.sh: VERBOSE: activating bash debugging mode.\\n2018-09-21 12:28:43 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\\n2018-09-21 12:28:43 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\\n2018-09-21 12:28:43 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\\nOSD_DEVICE=/dev/vdb\\nLC_ALL=C\\nOSD_BLUESTORE=0\\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\nOSD_JOURNAL_SIZE=512\\nPWD=/\\nCEPH_VERSION=luminous\\nSHLVL=1\\nHOME=/root\\nCEPH_POINT_RELEASE=\\nCLUSTER=ceph\\nOSD_DMCRYPT=0\\nCEPH_DAEMON=OSD_CEPH_DISK_PREPARE\\ncontainer=oci\\nDEBUG=verbose\\nOSD_FILESTORE=1\\n_=/usr/bin/env\\nownership of '/var/run/ceph/' retained as ceph:ceph\\nownership of '/var/lib/ceph/mon' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/mon/ceph-ceph-0' from root:root to ceph:ceph\\nownership of '/var/lib/ceph/osd' retained as ceph:ceph\\nownership of '/var/lib/ceph/mds' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/mds/ceph-ceph-0' from root:root to ceph:ceph\\nownership of '/var/lib/ceph/tmp' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/tmp/tmp.pqI4DtXAXN' from root:root to ceph:ceph\\nownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' from root:root to ceph:ceph\\nchanged ownership of '/var/lib/ceph/bootstrap-rgw' from 64045:64045 to ceph:ceph\\nchanged ownership of '/var/lib/ceph/bootstrap-mds' from 64045:64045 to ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph\\nchanged ownership of 
'/var/lib/ceph/bootstrap-rbd' from 64045:64045 to ceph:ceph\\nchanged ownership of '/var/lib/ceph/mgr' from root:root to ceph:ceph\\nchanged ownership of '/var/lib/ceph/mgr/ceph-ceph-0' from root:root to ceph:ceph\\n2018-09-21 12:28:43 /entrypoint.sh: static: does not generate config\\nHEALTH_OK\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nmeta-data=/dev/vdb1 isize=2048 agcount=4, agsize=688063 blks\\n = sectsz=512 attr=2, projid32bit=1\\n = crc=1 finobt=0, sparse=0\\ndata = bsize=4096 blocks=2752251, imaxpct=25\\n = sunit=0 swidth=0 blks\\nnaming =version 2 bsize=4096 ascii-ci=0 ftype=1\\nlog =internal log bsize=4096 blocks=2560, version=2\\n = sectsz=512 sunit=0 blks, lazy-count=1\\nrealtime =none extsz=4096 blocks=0, rtextents=0\\nThe operation has completed successfully.\\nchanged ownership of '/dev/vdb2' from root:disk to ceph:ceph\\nchanged ownership of '/dev/vdb1' from root:disk to ceph:ceph\", \"stdout_lines\": [\"2018-09-21 12:28:43 /entrypoint.sh: VERBOSE: activating bash debugging mode.\", \"2018-09-21 12:28:43 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\", \"2018-09-21 12:28:43 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\", \"2018-09-21 12:28:43 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\", \"OSD_DEVICE=/dev/vdb\", \"LC_ALL=C\", \"OSD_BLUESTORE=0\", \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\", \"OSD_JOURNAL_SIZE=512\", \"PWD=/\", \"CEPH_VERSION=luminous\", \"SHLVL=1\", \"HOME=/root\", \"CEPH_POINT_RELEASE=\", \"CLUSTER=ceph\", \"OSD_DMCRYPT=0\", \"CEPH_DAEMON=OSD_CEPH_DISK_PREPARE\", \"container=oci\", \"DEBUG=verbose\", \"OSD_FILESTORE=1\", \"_=/usr/bin/env\", \"ownership of '/var/run/ceph/' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mon' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/mon/ceph-ceph-0' from root:root to ceph:ceph\", \"ownership of '/var/lib/ceph/osd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mds' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/mds/ceph-ceph-0' from root:root to ceph:ceph\", \"ownership of '/var/lib/ceph/tmp' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/tmp/tmp.pqI4DtXAXN' from root:root to ceph:ceph\", \"ownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' from root:root to ceph:ceph\", \"changed ownership of '/var/lib/ceph/bootstrap-rgw' from 64045:64045 to ceph:ceph\", \"changed ownership of '/var/lib/ceph/bootstrap-mds' from 64045:64045 to ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/bootstrap-rbd' from 64045:64045 to ceph:ceph\", \"changed ownership of '/var/lib/ceph/mgr' from root:root to ceph:ceph\", \"changed ownership of '/var/lib/ceph/mgr/ceph-ceph-0' from root:root to ceph:ceph\", \"2018-09-21 12:28:43 /entrypoint.sh: static: does not generate config\", \"HEALTH_OK\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"meta-data=/dev/vdb1 isize=2048 agcount=4, agsize=688063 blks\", \" = sectsz=512 attr=2, projid32bit=1\", \" = crc=1 finobt=0, sparse=0\", \"data = bsize=4096 blocks=2752251, imaxpct=25\", \" = sunit=0 swidth=0 blks\", 
\"naming =version 2 bsize=4096 ascii-ci=0 ftype=1\", \"log =internal log bsize=4096 blocks=2560, version=2\", \" = sectsz=512 sunit=0 blks, lazy-count=1\", \"realtime =none extsz=4096 blocks=0, rtextents=0\", \"The operation has completed successfully.\", \"changed ownership of '/dev/vdb2' from root:disk to ceph:ceph\", \"changed ownership of '/dev/vdb1' from root:disk to ceph:ceph\"]}", "changed: [ceph-0] => (item=[{'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_item_label': u'/dev/vdc', u'script': u\"unit 'MiB' print\", '_ansible_no_log': False, u'changed': False, 'failed': False, 'item': u'/dev/vdc', u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vdc', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vdc', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 11264.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vdc']) => {\"changed\": true, \"cmd\": \"docker run --net=host --pid=host --privileged=true --name=ceph-osd-prepare-ceph-0-vdc -v /etc/ceph:/etc/ceph:z -v /var/lib/ceph/:/var/lib/ceph/:z -v /dev:/dev -v /etc/localtime:/etc/localtime:ro -e DEBUG=verbose -e CLUSTER=ceph -e CEPH_DAEMON=OSD_CEPH_DISK_PREPARE -e OSD_DEVICE=/dev/vdc -e OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=0 -e OSD_JOURNAL_SIZE=512 192.168.24.1:8787/rhceph:3-12\", \"delta\": \"0:00:06.708144\", \"end\": \"2018-09-21 12:28:56.666436\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdc\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vdc\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vdc\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", \"part_type\": \"primary\", \"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vdc\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vdc\"], \"rc\": 0, \"start\": \"2018-09-21 12:28:49.958292\", \"stderr\": \"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\\n+/entrypoint.sh:26: source /config.static.sh\\n++/config.static.sh:2: set -e\\n++/entrypoint.sh:36: to_lowercase OSD_CEPH_DISK_PREPARE\\n++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\\n+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\\n+/entrypoint.sh:38: create_mandatory_directories\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): 
mkdir -p /var/lib/ceph/bootstrap-mds\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rbd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rbd\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\\n+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\\n+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\\n+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\\n+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\\n+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\\n+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\\n+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. 
'{}' ';'\\n+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\\n+/entrypoint.sh:78: source start_osd.sh\\n++start_osd.sh:2: set -e\\n++start_osd.sh:4: is_redhat\\n++common_functions.sh:211: is_redhat(): get_package_manager\\n++common_functions.sh:196: get_package_manager(): is_available rpm\\n++common_functions.sh:47: is_available(): command -v rpm\\n++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\\n++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\\n++start_osd.sh:5: source /etc/sysconfig/ceph\\n+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\\n+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\\n+/entrypoint.sh:79: OSD_TYPE=prepare\\n+/entrypoint.sh:80: start_osd\\n+start_osd.sh:11: start_osd(): get_config\\n+/config.static.sh:114: get_config(): log 'static: does not generate config'\\n+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\\n+common_functions.sh:11: log(): local timestamp\\n++common_functions.sh:12: log(): date '+%F %T'\\n+common_functions.sh:12: log(): timestamp='2018-09-21 12:28:50'\\n+common_functions.sh:13: log(): echo '2018-09-21 12:28:50 /entrypoint.sh: static: does not generate config'\\n+common_functions.sh:14: log(): return 0\\n+start_osd.sh:12: start_osd(): check_config\\n+common_functions.sh:19: check_config(): [[ ! -e /etc/ceph/ceph.conf ]]\\n+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\\n+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\\n+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\\n++osd_disk_prepare.sh:2: source(): set -e\\n+start_osd.sh:34: start_osd(): osd_disk_prepare\\n+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vdc ]]\\n+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vdc ]]\\n+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' 
-e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\\n+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\\n+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vdc print\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\\n+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\\n+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\\n+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\\n+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid bf1ed448-2528-4280-b531-ac91f3488886 /dev/vdc\\ncommand: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\\ncommand: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nset_type: Will colocate journal with data on /dev/vdc\\ncommand: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. 
--lookup osd_mount_options_xfs\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nptype_tobe_for_name: name = journal\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\ncreate_partition: Creating journal partition num 2 size 512 on /dev/vdc\\ncommand_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:bf1ed448-2528-4280-b531-ac91f3488886 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vdc\\nupdate_partition: Calling partprobe on created device /dev/vdc\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdc /usr/sbin/partprobe /dev/vdc\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdc2 uuid path is /sys/dev/block/252:34/dm/uuid\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/bf1ed448-2528-4280-b531-ac91f3488886\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vdc\\nupdate_partition: Calling partprobe on prepared device /dev/vdc\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdc /usr/sbin/partprobe /dev/vdc\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/bf1ed448-2528-4280-b531-ac91f3488886\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nset_data_partition: Creating osd partition on /dev/vdc\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nptype_tobe_for_name: name = data\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\ncreate_partition: Creating data partition num 1 size 0 on /dev/vdc\\ncommand_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:295d4e75-e479-45c2-9091-c3be07a5e1a8 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdc\\nupdate_partition: Calling partprobe on created device /dev/vdc\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdc /usr/sbin/partprobe /dev/vdc\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdc1 uuid path is /sys/dev/block/252:33/dm/uuid\\npopulate_data_path_device: Creating xfs fs on /dev/vdc1\\ncommand_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vdc1\\nmount: Mounting /dev/vdc1 on /var/lib/ceph/tmp/mnt.NS0aNq with options noatime,largeio,inode64,swalloc\\ncommand_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vdc1 /var/lib/ceph/tmp/mnt.NS0aNq\\ncommand: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.NS0aNq\\npopulate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.NS0aNq\\ncommand: Running command: /usr/sbin/restorecon -R 
/var/lib/ceph/tmp/mnt.NS0aNq/ceph_fsid.19157.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.NS0aNq/ceph_fsid.19157.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.NS0aNq/fsid.19157.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.NS0aNq/fsid.19157.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.NS0aNq/magic.19157.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.NS0aNq/magic.19157.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.NS0aNq/journal_uuid.19157.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.NS0aNq/journal_uuid.19157.tmp\\nadjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.NS0aNq/journal -> /dev/disk/by-partuuid/bf1ed448-2528-4280-b531-ac91f3488886\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.NS0aNq/type.19157.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.NS0aNq/type.19157.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.NS0aNq\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.NS0aNq\\nunmount: Unmounting /var/lib/ceph/tmp/mnt.NS0aNq\\ncommand_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.NS0aNq\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdc\\nupdate_partition: Calling partprobe on prepared device /dev/vdc\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdc /usr/sbin/partprobe /dev/vdc\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vdc1\\n+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\\n+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\\n+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\\n+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\\n++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vdc 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdc\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdc ]]\\n++common_functions.sh:124: dev_part(): [[ c == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdc2\\n+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdc2\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! 
-e /dev/vdc2 ]; do echo '\\\\''Waiting for /dev/vdc2 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vdc 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdc\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdc ]]\\n++common_functions.sh:124: dev_part(): [[ c == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdc2\\n+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. /dev/vdc2\\n++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vdc 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdc\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdc ]]\\n++common_functions.sh:124: dev_part(): [[ c == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdc1\\n+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdc1\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdc1 ]; do echo '\\\\''Waiting for /dev/vdc1 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vdc 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdc\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdc ]]\\n++common_functions.sh:124: dev_part(): [[ c == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdc1\\n+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. /dev/vdc1\\n+/entrypoint.sh:189: exit 0\", \"stderr_lines\": [\"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\", \"+/entrypoint.sh:26: source /config.static.sh\", \"++/config.static.sh:2: set -e\", \"++/entrypoint.sh:36: to_lowercase OSD_CEPH_DISK_PREPARE\", \"++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\", \"+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\", \"+/entrypoint.sh:38: create_mandatory_directories\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname 
/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rbd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\", \"+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\", \"+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\", \"+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\", \"+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\", \"+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\", \"+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\", \"+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. '{}' ';'\", \"+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\", \"+/entrypoint.sh:78: source start_osd.sh\", \"++start_osd.sh:2: set -e\", \"++start_osd.sh:4: is_redhat\", \"++common_functions.sh:211: is_redhat(): get_package_manager\", \"++common_functions.sh:196: get_package_manager(): is_available rpm\", \"++common_functions.sh:47: is_available(): command -v rpm\", \"++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\", \"++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\", \"++start_osd.sh:5: source /etc/sysconfig/ceph\", \"+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\", \"+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\", \"+/entrypoint.sh:79: OSD_TYPE=prepare\", \"+/entrypoint.sh:80: start_osd\", \"+start_osd.sh:11: start_osd(): get_config\", \"+/config.static.sh:114: get_config(): log 'static: does not generate config'\", \"+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\", \"+common_functions.sh:11: log(): local timestamp\", \"++common_functions.sh:12: log(): date '+%F %T'\", \"+common_functions.sh:12: log(): timestamp='2018-09-21 12:28:50'\", \"+common_functions.sh:13: log(): echo '2018-09-21 12:28:50 /entrypoint.sh: static: does not generate config'\", \"+common_functions.sh:14: log(): return 0\", \"+start_osd.sh:12: start_osd(): check_config\", \"+common_functions.sh:19: check_config(): [[ ! 
-e /etc/ceph/ceph.conf ]]\", \"+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\", \"+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\", \"+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\", \"++osd_disk_prepare.sh:2: source(): set -e\", \"+start_osd.sh:34: start_osd(): osd_disk_prepare\", \"+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vdc ]]\", \"+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vdc ]]\", \"+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' -e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\", \"+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\", \"+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vdc print\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\", \"+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\", \"+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\", \"+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\", \"+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid bf1ed448-2528-4280-b531-ac91f3488886 /dev/vdc\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\", \"command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"set_type: Will colocate journal with data on /dev/vdc\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. 
--lookup osd_mount_options_xfs\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"ptype_tobe_for_name: name = journal\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"create_partition: Creating journal partition num 2 size 512 on /dev/vdc\", \"command_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:bf1ed448-2528-4280-b531-ac91f3488886 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vdc\", \"update_partition: Calling partprobe on created device /dev/vdc\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdc /usr/sbin/partprobe /dev/vdc\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdc2 uuid path is /sys/dev/block/252:34/dm/uuid\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/bf1ed448-2528-4280-b531-ac91f3488886\", \"command_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vdc\", \"update_partition: Calling partprobe on prepared device /dev/vdc\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdc /usr/sbin/partprobe /dev/vdc\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/bf1ed448-2528-4280-b531-ac91f3488886\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"set_data_partition: Creating osd partition on /dev/vdc\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"ptype_tobe_for_name: name = data\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"create_partition: Creating data partition num 1 size 0 on /dev/vdc\", \"command_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:295d4e75-e479-45c2-9091-c3be07a5e1a8 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdc\", \"update_partition: Calling partprobe on created device /dev/vdc\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdc /usr/sbin/partprobe /dev/vdc\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdc1 uuid path is /sys/dev/block/252:33/dm/uuid\", \"populate_data_path_device: Creating xfs fs on /dev/vdc1\", \"command_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vdc1\", \"mount: Mounting /dev/vdc1 on /var/lib/ceph/tmp/mnt.NS0aNq with options noatime,largeio,inode64,swalloc\", \"command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vdc1 /var/lib/ceph/tmp/mnt.NS0aNq\", \"command: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.NS0aNq\", 
\"populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.NS0aNq\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.NS0aNq/ceph_fsid.19157.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.NS0aNq/ceph_fsid.19157.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.NS0aNq/fsid.19157.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.NS0aNq/fsid.19157.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.NS0aNq/magic.19157.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.NS0aNq/magic.19157.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.NS0aNq/journal_uuid.19157.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.NS0aNq/journal_uuid.19157.tmp\", \"adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.NS0aNq/journal -> /dev/disk/by-partuuid/bf1ed448-2528-4280-b531-ac91f3488886\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.NS0aNq/type.19157.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.NS0aNq/type.19157.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.NS0aNq\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.NS0aNq\", \"unmount: Unmounting /var/lib/ceph/tmp/mnt.NS0aNq\", \"command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.NS0aNq\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"command_check_call: Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdc\", \"update_partition: Calling partprobe on prepared device /dev/vdc\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdc /usr/sbin/partprobe /dev/vdc\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vdc1\", \"+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\", \"+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\", \"+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\", \"+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\", \"++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vdc 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdc\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdc ]]\", \"++common_functions.sh:124: dev_part(): [[ c == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdc2\", \"+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdc2\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! 
-e /dev/vdc2 ]; do echo '\\\\''Waiting for /dev/vdc2 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vdc 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdc\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdc ]]\", \"++common_functions.sh:124: dev_part(): [[ c == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdc2\", \"+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. /dev/vdc2\", \"++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vdc 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdc\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdc ]]\", \"++common_functions.sh:124: dev_part(): [[ c == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdc1\", \"+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdc1\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdc1 ]; do echo '\\\\''Waiting for /dev/vdc1 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vdc 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdc\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdc ]]\", \"++common_functions.sh:124: dev_part(): [[ c == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdc1\", \"+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. /dev/vdc1\", \"+/entrypoint.sh:189: exit 0\"], \"stdout\": \"2018-09-21 12:28:50 /entrypoint.sh: VERBOSE: activating bash debugging mode.\\n2018-09-21 12:28:50 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\\n2018-09-21 12:28:50 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\\n2018-09-21 12:28:50 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\\nOSD_DEVICE=/dev/vdc\\nLC_ALL=C\\nOSD_BLUESTORE=0\\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\nOSD_JOURNAL_SIZE=512\\nPWD=/\\nCEPH_VERSION=luminous\\nSHLVL=1\\nHOME=/root\\nCEPH_POINT_RELEASE=\\nCLUSTER=ceph\\nOSD_DMCRYPT=0\\nCEPH_DAEMON=OSD_CEPH_DISK_PREPARE\\ncontainer=oci\\nDEBUG=verbose\\nOSD_FILESTORE=1\\n_=/usr/bin/env\\nownership of '/var/run/ceph/' retained as ceph:ceph\\nownership of '/var/lib/ceph/mon' retained as ceph:ceph\\nownership of '/var/lib/ceph/mon/ceph-ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/osd' retained as ceph:ceph\\nownership of '/var/lib/ceph/mds' retained as ceph:ceph\\nownership of '/var/lib/ceph/mds/ceph-ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/tmp.pqI4DtXAXN' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/tmp/ceph-disk.prepare.lock' from root:root to ceph:ceph\\nchanged ownership of '/var/lib/ceph/tmp/tmp.EDdRQpFCLF' from root:root to ceph:ceph\\nownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\\nownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-rgw' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-mds' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph\\nownership of 
'/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-rbd' retained as ceph:ceph\\nownership of '/var/lib/ceph/mgr' retained as ceph:ceph\\nownership of '/var/lib/ceph/mgr/ceph-ceph-0' retained as ceph:ceph\\n2018-09-21 12:28:50 /entrypoint.sh: static: does not generate config\\nHEALTH_OK\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nmeta-data=/dev/vdc1 isize=2048 agcount=4, agsize=688063 blks\\n = sectsz=512 attr=2, projid32bit=1\\n = crc=1 finobt=0, sparse=0\\ndata = bsize=4096 blocks=2752251, imaxpct=25\\n = sunit=0 swidth=0 blks\\nnaming =version 2 bsize=4096 ascii-ci=0 ftype=1\\nlog =internal log bsize=4096 blocks=2560, version=2\\n = sectsz=512 sunit=0 blks, lazy-count=1\\nrealtime =none extsz=4096 blocks=0, rtextents=0\\nThe operation has completed successfully.\\nchanged ownership of '/dev/vdc2' from root:disk to ceph:ceph\\nchanged ownership of '/dev/vdc1' from root:disk to ceph:ceph\", \"stdout_lines\": [\"2018-09-21 12:28:50 /entrypoint.sh: VERBOSE: activating bash debugging mode.\", \"2018-09-21 12:28:50 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\", \"2018-09-21 12:28:50 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\", \"2018-09-21 12:28:50 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\", \"OSD_DEVICE=/dev/vdc\", \"LC_ALL=C\", \"OSD_BLUESTORE=0\", \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\", \"OSD_JOURNAL_SIZE=512\", \"PWD=/\", \"CEPH_VERSION=luminous\", \"SHLVL=1\", \"HOME=/root\", \"CEPH_POINT_RELEASE=\", \"CLUSTER=ceph\", \"OSD_DMCRYPT=0\", \"CEPH_DAEMON=OSD_CEPH_DISK_PREPARE\", \"container=oci\", \"DEBUG=verbose\", \"OSD_FILESTORE=1\", \"_=/usr/bin/env\", \"ownership of '/var/run/ceph/' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mon' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mon/ceph-ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/osd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mds' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mds/ceph-ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/tmp.pqI4DtXAXN' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/tmp/ceph-disk.prepare.lock' from root:root to ceph:ceph\", \"changed ownership of '/var/lib/ceph/tmp/tmp.EDdRQpFCLF' from root:root to ceph:ceph\", \"ownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-rgw' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-mds' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-rbd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mgr' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mgr/ceph-ceph-0' retained as ceph:ceph\", \"2018-09-21 12:28:50 /entrypoint.sh: static: does not generate config\", \"HEALTH_OK\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"meta-data=/dev/vdc1 isize=2048 agcount=4, agsize=688063 blks\", \" = sectsz=512 attr=2, projid32bit=1\", \" = crc=1 finobt=0, 
sparse=0\", \"data = bsize=4096 blocks=2752251, imaxpct=25\", \" = sunit=0 swidth=0 blks\", \"naming =version 2 bsize=4096 ascii-ci=0 ftype=1\", \"log =internal log bsize=4096 blocks=2560, version=2\", \" = sectsz=512 sunit=0 blks, lazy-count=1\", \"realtime =none extsz=4096 blocks=0, rtextents=0\", \"The operation has completed successfully.\", \"changed ownership of '/dev/vdc2' from root:disk to ceph:ceph\", \"changed ownership of '/dev/vdc1' from root:disk to ceph:ceph\"]}", "changed: [ceph-0] => (item=[{'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_item_label': u'/dev/vdd', u'script': u\"unit 'MiB' print\", '_ansible_no_log': False, u'changed': False, 'failed': False, 'item': u'/dev/vdd', u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vdd', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vdd', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 11264.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vdd']) => {\"changed\": true, \"cmd\": \"docker run --net=host --pid=host --privileged=true --name=ceph-osd-prepare-ceph-0-vdd -v /etc/ceph:/etc/ceph:z -v /var/lib/ceph/:/var/lib/ceph/:z -v /dev:/dev -v /etc/localtime:/etc/localtime:ro -e DEBUG=verbose -e CLUSTER=ceph -e CEPH_DAEMON=OSD_CEPH_DISK_PREPARE -e OSD_DEVICE=/dev/vdd -e OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=0 -e OSD_JOURNAL_SIZE=512 192.168.24.1:8787/rhceph:3-12\", \"delta\": \"0:00:07.116273\", \"end\": \"2018-09-21 12:29:03.971219\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdd\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vdd\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vdd\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", \"part_type\": \"primary\", \"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vdd\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vdd\"], \"rc\": 0, \"start\": \"2018-09-21 12:28:56.854946\", \"stderr\": \"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\\n+/entrypoint.sh:26: source /config.static.sh\\n++/config.static.sh:2: set -e\\n++/entrypoint.sh:36: to_lowercase OSD_CEPH_DISK_PREPARE\\n++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\\n+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\\n+/entrypoint.sh:38: create_mandatory_directories\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname 
/var/lib/ceph/bootstrap-mds/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rbd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rbd\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\\n+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\\n+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\\n+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\\n+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\\n+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\\n+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\\n+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. 
'{}' ';'\\n+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\\n+/entrypoint.sh:78: source start_osd.sh\\n++start_osd.sh:2: set -e\\n++start_osd.sh:4: is_redhat\\n++common_functions.sh:211: is_redhat(): get_package_manager\\n++common_functions.sh:196: get_package_manager(): is_available rpm\\n++common_functions.sh:47: is_available(): command -v rpm\\n++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\\n++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\\n++start_osd.sh:5: source /etc/sysconfig/ceph\\n+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\\n+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\\n+/entrypoint.sh:79: OSD_TYPE=prepare\\n+/entrypoint.sh:80: start_osd\\n+start_osd.sh:11: start_osd(): get_config\\n+/config.static.sh:114: get_config(): log 'static: does not generate config'\\n+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\\n+common_functions.sh:11: log(): local timestamp\\n++common_functions.sh:12: log(): date '+%F %T'\\n+common_functions.sh:12: log(): timestamp='2018-09-21 12:28:57'\\n+common_functions.sh:13: log(): echo '2018-09-21 12:28:57 /entrypoint.sh: static: does not generate config'\\n+common_functions.sh:14: log(): return 0\\n+start_osd.sh:12: start_osd(): check_config\\n+common_functions.sh:19: check_config(): [[ ! -e /etc/ceph/ceph.conf ]]\\n+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\\n+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\\n+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\\n++osd_disk_prepare.sh:2: source(): set -e\\n+start_osd.sh:34: start_osd(): osd_disk_prepare\\n+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vdd ]]\\n+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vdd ]]\\n+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' 
-e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\\n+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\\n+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vdd print\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\\n+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\\n+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\\n+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\\n+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid 80ccd76c-7139-4f6b-8ec3-da3162342170 /dev/vdd\\ncommand: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\\ncommand: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nset_type: Will colocate journal with data on /dev/vdd\\ncommand: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. 
--lookup osd_mount_options_xfs\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nptype_tobe_for_name: name = journal\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\ncreate_partition: Creating journal partition num 2 size 512 on /dev/vdd\\ncommand_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:80ccd76c-7139-4f6b-8ec3-da3162342170 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vdd\\nupdate_partition: Calling partprobe on created device /dev/vdd\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdd /usr/sbin/partprobe /dev/vdd\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdd2 uuid path is /sys/dev/block/252:50/dm/uuid\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/80ccd76c-7139-4f6b-8ec3-da3162342170\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vdd\\nupdate_partition: Calling partprobe on prepared device /dev/vdd\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdd /usr/sbin/partprobe /dev/vdd\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/80ccd76c-7139-4f6b-8ec3-da3162342170\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nset_data_partition: Creating osd partition on /dev/vdd\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nptype_tobe_for_name: name = data\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\ncreate_partition: Creating data partition num 1 size 0 on /dev/vdd\\ncommand_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:da7e3f87-d7d7-4944-a730-14ca919cd237 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdd\\nupdate_partition: Calling partprobe on created device /dev/vdd\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdd /usr/sbin/partprobe /dev/vdd\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdd1 uuid path is /sys/dev/block/252:49/dm/uuid\\npopulate_data_path_device: Creating xfs fs on /dev/vdd1\\ncommand_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vdd1\\nmount: Mounting /dev/vdd1 on /var/lib/ceph/tmp/mnt.KykOSf with options noatime,largeio,inode64,swalloc\\ncommand_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vdd1 /var/lib/ceph/tmp/mnt.KykOSf\\ncommand: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.KykOSf\\npopulate_data_path: Preparing osd data dir 
/var/lib/ceph/tmp/mnt.KykOSf\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.KykOSf/ceph_fsid.19414.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.KykOSf/ceph_fsid.19414.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.KykOSf/fsid.19414.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.KykOSf/fsid.19414.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.KykOSf/magic.19414.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.KykOSf/magic.19414.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.KykOSf/journal_uuid.19414.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.KykOSf/journal_uuid.19414.tmp\\nadjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.KykOSf/journal -> /dev/disk/by-partuuid/80ccd76c-7139-4f6b-8ec3-da3162342170\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.KykOSf/type.19414.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.KykOSf/type.19414.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.KykOSf\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.KykOSf\\nunmount: Unmounting /var/lib/ceph/tmp/mnt.KykOSf\\ncommand_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.KykOSf\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdd\\nupdate_partition: Calling partprobe on prepared device /dev/vdd\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdd /usr/sbin/partprobe /dev/vdd\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vdd1\\n+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\\n+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\\n+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\\n+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\\n++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vdd 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdd\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdd ]]\\n++common_functions.sh:124: dev_part(): [[ d == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdd2\\n+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdd2\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! 
-e /dev/vdd2 ]; do echo '\\\\''Waiting for /dev/vdd2 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vdd 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdd\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdd ]]\\n++common_functions.sh:124: dev_part(): [[ d == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdd2\\n+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. /dev/vdd2\\n++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vdd 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdd\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdd ]]\\n++common_functions.sh:124: dev_part(): [[ d == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdd1\\n+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdd1\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdd1 ]; do echo '\\\\''Waiting for /dev/vdd1 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vdd 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdd\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdd ]]\\n++common_functions.sh:124: dev_part(): [[ d == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdd1\\n+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. /dev/vdd1\\n+/entrypoint.sh:189: exit 0\", \"stderr_lines\": [\"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\", \"+/entrypoint.sh:26: source /config.static.sh\", \"++/config.static.sh:2: set -e\", \"++/entrypoint.sh:36: to_lowercase OSD_CEPH_DISK_PREPARE\", \"++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\", \"+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\", \"+/entrypoint.sh:38: create_mandatory_directories\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname 
/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rbd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\", \"+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\", \"+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\", \"+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\", \"+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\", \"+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\", \"+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\", \"+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. '{}' ';'\", \"+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\", \"+/entrypoint.sh:78: source start_osd.sh\", \"++start_osd.sh:2: set -e\", \"++start_osd.sh:4: is_redhat\", \"++common_functions.sh:211: is_redhat(): get_package_manager\", \"++common_functions.sh:196: get_package_manager(): is_available rpm\", \"++common_functions.sh:47: is_available(): command -v rpm\", \"++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\", \"++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\", \"++start_osd.sh:5: source /etc/sysconfig/ceph\", \"+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\", \"+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\", \"+/entrypoint.sh:79: OSD_TYPE=prepare\", \"+/entrypoint.sh:80: start_osd\", \"+start_osd.sh:11: start_osd(): get_config\", \"+/config.static.sh:114: get_config(): log 'static: does not generate config'\", \"+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\", \"+common_functions.sh:11: log(): local timestamp\", \"++common_functions.sh:12: log(): date '+%F %T'\", \"+common_functions.sh:12: log(): timestamp='2018-09-21 12:28:57'\", \"+common_functions.sh:13: log(): echo '2018-09-21 12:28:57 /entrypoint.sh: static: does not generate config'\", \"+common_functions.sh:14: log(): return 0\", \"+start_osd.sh:12: start_osd(): check_config\", \"+common_functions.sh:19: check_config(): [[ ! 
-e /etc/ceph/ceph.conf ]]\", \"+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\", \"+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\", \"+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\", \"++osd_disk_prepare.sh:2: source(): set -e\", \"+start_osd.sh:34: start_osd(): osd_disk_prepare\", \"+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vdd ]]\", \"+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vdd ]]\", \"+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' -e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\", \"+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\", \"+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vdd print\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\", \"+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\", \"+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\", \"+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\", \"+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid 80ccd76c-7139-4f6b-8ec3-da3162342170 /dev/vdd\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\", \"command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"set_type: Will colocate journal with data on /dev/vdd\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. 
--lookup osd_mount_options_xfs\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"ptype_tobe_for_name: name = journal\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"create_partition: Creating journal partition num 2 size 512 on /dev/vdd\", \"command_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:80ccd76c-7139-4f6b-8ec3-da3162342170 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vdd\", \"update_partition: Calling partprobe on created device /dev/vdd\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdd /usr/sbin/partprobe /dev/vdd\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdd2 uuid path is /sys/dev/block/252:50/dm/uuid\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/80ccd76c-7139-4f6b-8ec3-da3162342170\", \"command_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vdd\", \"update_partition: Calling partprobe on prepared device /dev/vdd\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdd /usr/sbin/partprobe /dev/vdd\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/80ccd76c-7139-4f6b-8ec3-da3162342170\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"set_data_partition: Creating osd partition on /dev/vdd\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"ptype_tobe_for_name: name = data\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"create_partition: Creating data partition num 1 size 0 on /dev/vdd\", \"command_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:da7e3f87-d7d7-4944-a730-14ca919cd237 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdd\", \"update_partition: Calling partprobe on created device /dev/vdd\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdd /usr/sbin/partprobe /dev/vdd\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdd1 uuid path is /sys/dev/block/252:49/dm/uuid\", \"populate_data_path_device: Creating xfs fs on /dev/vdd1\", \"command_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vdd1\", \"mount: Mounting /dev/vdd1 on /var/lib/ceph/tmp/mnt.KykOSf with options noatime,largeio,inode64,swalloc\", \"command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vdd1 /var/lib/ceph/tmp/mnt.KykOSf\", \"command: Running 
command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.KykOSf\", \"populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.KykOSf\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.KykOSf/ceph_fsid.19414.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.KykOSf/ceph_fsid.19414.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.KykOSf/fsid.19414.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.KykOSf/fsid.19414.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.KykOSf/magic.19414.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.KykOSf/magic.19414.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.KykOSf/journal_uuid.19414.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.KykOSf/journal_uuid.19414.tmp\", \"adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.KykOSf/journal -> /dev/disk/by-partuuid/80ccd76c-7139-4f6b-8ec3-da3162342170\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.KykOSf/type.19414.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.KykOSf/type.19414.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.KykOSf\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.KykOSf\", \"unmount: Unmounting /var/lib/ceph/tmp/mnt.KykOSf\", \"command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.KykOSf\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"command_check_call: Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdd\", \"update_partition: Calling partprobe on prepared device /dev/vdd\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdd /usr/sbin/partprobe /dev/vdd\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vdd1\", \"+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\", \"+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\", \"+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\", \"+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\", \"++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vdd 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdd\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdd ]]\", \"++common_functions.sh:124: dev_part(): [[ d == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdd2\", \"+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdd2\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! 
-e /dev/vdd2 ]; do echo '\\\\''Waiting for /dev/vdd2 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vdd 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdd\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdd ]]\", \"++common_functions.sh:124: dev_part(): [[ d == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdd2\", \"+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. /dev/vdd2\", \"++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vdd 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdd\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdd ]]\", \"++common_functions.sh:124: dev_part(): [[ d == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdd1\", \"+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdd1\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdd1 ]; do echo '\\\\''Waiting for /dev/vdd1 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vdd 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdd\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdd ]]\", \"++common_functions.sh:124: dev_part(): [[ d == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdd1\", \"+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. /dev/vdd1\", \"+/entrypoint.sh:189: exit 0\"], \"stdout\": \"2018-09-21 12:28:57 /entrypoint.sh: VERBOSE: activating bash debugging mode.\\n2018-09-21 12:28:57 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\\n2018-09-21 12:28:57 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\\n2018-09-21 12:28:57 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\\nOSD_DEVICE=/dev/vdd\\nLC_ALL=C\\nOSD_BLUESTORE=0\\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\nOSD_JOURNAL_SIZE=512\\nPWD=/\\nCEPH_VERSION=luminous\\nSHLVL=1\\nHOME=/root\\nCEPH_POINT_RELEASE=\\nCLUSTER=ceph\\nOSD_DMCRYPT=0\\nCEPH_DAEMON=OSD_CEPH_DISK_PREPARE\\ncontainer=oci\\nDEBUG=verbose\\nOSD_FILESTORE=1\\n_=/usr/bin/env\\nownership of '/var/run/ceph/' retained as ceph:ceph\\nownership of '/var/lib/ceph/mon' retained as ceph:ceph\\nownership of '/var/lib/ceph/mon/ceph-ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/osd' retained as ceph:ceph\\nownership of '/var/lib/ceph/mds' retained as ceph:ceph\\nownership of '/var/lib/ceph/mds/ceph-ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/tmp.pqI4DtXAXN' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/ceph-disk.prepare.lock' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/tmp.EDdRQpFCLF' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/tmp/tmp.YiDfSdGwqv' from root:root to ceph:ceph\\nownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\\nownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-rgw' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-mds' retained as ceph:ceph\\nownership of 
'/var/lib/ceph/bootstrap-osd' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-rbd' retained as ceph:ceph\\nownership of '/var/lib/ceph/mgr' retained as ceph:ceph\\nownership of '/var/lib/ceph/mgr/ceph-ceph-0' retained as ceph:ceph\\n2018-09-21 12:28:57 /entrypoint.sh: static: does not generate config\\nHEALTH_OK\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nmeta-data=/dev/vdd1 isize=2048 agcount=4, agsize=688063 blks\\n = sectsz=512 attr=2, projid32bit=1\\n = crc=1 finobt=0, sparse=0\\ndata = bsize=4096 blocks=2752251, imaxpct=25\\n = sunit=0 swidth=0 blks\\nnaming =version 2 bsize=4096 ascii-ci=0 ftype=1\\nlog =internal log bsize=4096 blocks=2560, version=2\\n = sectsz=512 sunit=0 blks, lazy-count=1\\nrealtime =none extsz=4096 blocks=0, rtextents=0\\nThe operation has completed successfully.\\nchanged ownership of '/dev/vdd2' from root:disk to ceph:ceph\\nchanged ownership of '/dev/vdd1' from root:disk to ceph:ceph\", \"stdout_lines\": [\"2018-09-21 12:28:57 /entrypoint.sh: VERBOSE: activating bash debugging mode.\", \"2018-09-21 12:28:57 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\", \"2018-09-21 12:28:57 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\", \"2018-09-21 12:28:57 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\", \"OSD_DEVICE=/dev/vdd\", \"LC_ALL=C\", \"OSD_BLUESTORE=0\", \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\", \"OSD_JOURNAL_SIZE=512\", \"PWD=/\", \"CEPH_VERSION=luminous\", \"SHLVL=1\", \"HOME=/root\", \"CEPH_POINT_RELEASE=\", \"CLUSTER=ceph\", \"OSD_DMCRYPT=0\", \"CEPH_DAEMON=OSD_CEPH_DISK_PREPARE\", \"container=oci\", \"DEBUG=verbose\", \"OSD_FILESTORE=1\", \"_=/usr/bin/env\", \"ownership of '/var/run/ceph/' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mon' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mon/ceph-ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/osd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mds' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mds/ceph-ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/tmp.pqI4DtXAXN' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/ceph-disk.prepare.lock' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/tmp.EDdRQpFCLF' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/tmp/tmp.YiDfSdGwqv' from root:root to ceph:ceph\", \"ownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-rgw' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-mds' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-rbd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mgr' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mgr/ceph-ceph-0' retained as ceph:ceph\", \"2018-09-21 12:28:57 /entrypoint.sh: static: does not generate config\", \"HEALTH_OK\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", 
\"meta-data=/dev/vdd1 isize=2048 agcount=4, agsize=688063 blks\", \" = sectsz=512 attr=2, projid32bit=1\", \" = crc=1 finobt=0, sparse=0\", \"data = bsize=4096 blocks=2752251, imaxpct=25\", \" = sunit=0 swidth=0 blks\", \"naming =version 2 bsize=4096 ascii-ci=0 ftype=1\", \"log =internal log bsize=4096 blocks=2560, version=2\", \" = sectsz=512 sunit=0 blks, lazy-count=1\", \"realtime =none extsz=4096 blocks=0, rtextents=0\", \"The operation has completed successfully.\", \"changed ownership of '/dev/vdd2' from root:disk to ceph:ceph\", \"changed ownership of '/dev/vdd1' from root:disk to ceph:ceph\"]}", "changed: [ceph-0] => (item=[{'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_item_label': u'/dev/vde', u'script': u\"unit 'MiB' print\", '_ansible_no_log': False, u'changed': False, 'failed': False, 'item': u'/dev/vde', u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vde', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vde', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 11264.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vde']) => {\"changed\": true, \"cmd\": \"docker run --net=host --pid=host --privileged=true --name=ceph-osd-prepare-ceph-0-vde -v /etc/ceph:/etc/ceph:z -v /var/lib/ceph/:/var/lib/ceph/:z -v /dev:/dev -v /etc/localtime:/etc/localtime:ro -e DEBUG=verbose -e CLUSTER=ceph -e CEPH_DAEMON=OSD_CEPH_DISK_PREPARE -e OSD_DEVICE=/dev/vde -e OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=0 -e OSD_JOURNAL_SIZE=512 192.168.24.1:8787/rhceph:3-12\", \"delta\": \"0:00:06.684474\", \"end\": \"2018-09-21 12:29:10.825789\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vde\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vde\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vde\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", \"part_type\": \"primary\", \"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vde\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vde\"], \"rc\": 0, \"start\": \"2018-09-21 12:29:04.141315\", \"stderr\": \"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\\n+/entrypoint.sh:26: source /config.static.sh\\n++/config.static.sh:2: set -e\\n++/entrypoint.sh:36: to_lowercase OSD_CEPH_DISK_PREPARE\\n++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\\n+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\\n+/entrypoint.sh:38: create_mandatory_directories\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' 
'$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rbd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rbd\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\\n+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\\n+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\\n+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\\n+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\\n+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\\n+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\\n+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. 
'{}' ';'\\n+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\\n+/entrypoint.sh:78: source start_osd.sh\\n++start_osd.sh:2: set -e\\n++start_osd.sh:4: is_redhat\\n++common_functions.sh:211: is_redhat(): get_package_manager\\n++common_functions.sh:196: get_package_manager(): is_available rpm\\n++common_functions.sh:47: is_available(): command -v rpm\\n++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\\n++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\\n++start_osd.sh:5: source /etc/sysconfig/ceph\\n+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\\n+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\\n+/entrypoint.sh:79: OSD_TYPE=prepare\\n+/entrypoint.sh:80: start_osd\\n+start_osd.sh:11: start_osd(): get_config\\n+/config.static.sh:114: get_config(): log 'static: does not generate config'\\n+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\\n+common_functions.sh:11: log(): local timestamp\\n++common_functions.sh:12: log(): date '+%F %T'\\n+common_functions.sh:12: log(): timestamp='2018-09-21 12:29:04'\\n+common_functions.sh:13: log(): echo '2018-09-21 12:29:04 /entrypoint.sh: static: does not generate config'\\n+common_functions.sh:14: log(): return 0\\n+start_osd.sh:12: start_osd(): check_config\\n+common_functions.sh:19: check_config(): [[ ! -e /etc/ceph/ceph.conf ]]\\n+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\\n+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\\n+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\\n++osd_disk_prepare.sh:2: source(): set -e\\n+start_osd.sh:34: start_osd(): osd_disk_prepare\\n+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vde ]]\\n+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vde ]]\\n+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' 
-e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\\n+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\\n+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vde print\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\\n+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\\n+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\\n+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\\n+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid 8f435e6d-db2f-4a09-b5f4-98704489f743 /dev/vde\\ncommand: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\\ncommand: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nset_type: Will colocate journal with data on /dev/vde\\ncommand: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. 
--lookup osd_mount_options_xfs\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nptype_tobe_for_name: name = journal\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\ncreate_partition: Creating journal partition num 2 size 512 on /dev/vde\\ncommand_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:8f435e6d-db2f-4a09-b5f4-98704489f743 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vde\\nupdate_partition: Calling partprobe on created device /dev/vde\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vde /usr/sbin/partprobe /dev/vde\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vde2 uuid path is /sys/dev/block/252:66/dm/uuid\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/8f435e6d-db2f-4a09-b5f4-98704489f743\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vde\\nupdate_partition: Calling partprobe on prepared device /dev/vde\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vde /usr/sbin/partprobe /dev/vde\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/8f435e6d-db2f-4a09-b5f4-98704489f743\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nset_data_partition: Creating osd partition on /dev/vde\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nptype_tobe_for_name: name = data\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\ncreate_partition: Creating data partition num 1 size 0 on /dev/vde\\ncommand_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:d07f48e0-7ddf-4490-8619-c5702168e946 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vde\\nupdate_partition: Calling partprobe on created device /dev/vde\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vde /usr/sbin/partprobe /dev/vde\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vde1 uuid path is /sys/dev/block/252:65/dm/uuid\\npopulate_data_path_device: Creating xfs fs on /dev/vde1\\ncommand_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vde1\\nmount: Mounting /dev/vde1 on /var/lib/ceph/tmp/mnt.Q1AShn with options noatime,largeio,inode64,swalloc\\ncommand_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vde1 /var/lib/ceph/tmp/mnt.Q1AShn\\ncommand: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.Q1AShn\\npopulate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.Q1AShn\\ncommand: Running command: /usr/sbin/restorecon -R 
/var/lib/ceph/tmp/mnt.Q1AShn/ceph_fsid.19672.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Q1AShn/ceph_fsid.19672.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Q1AShn/fsid.19672.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Q1AShn/fsid.19672.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Q1AShn/magic.19672.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Q1AShn/magic.19672.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Q1AShn/journal_uuid.19672.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Q1AShn/journal_uuid.19672.tmp\\nadjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.Q1AShn/journal -> /dev/disk/by-partuuid/8f435e6d-db2f-4a09-b5f4-98704489f743\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Q1AShn/type.19672.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Q1AShn/type.19672.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Q1AShn\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Q1AShn\\nunmount: Unmounting /var/lib/ceph/tmp/mnt.Q1AShn\\ncommand_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.Q1AShn\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vde\\nupdate_partition: Calling partprobe on prepared device /dev/vde\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vde /usr/sbin/partprobe /dev/vde\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vde1\\n+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\\n+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\\n+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\\n+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\\n++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vde 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vde\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vde ]]\\n++common_functions.sh:124: dev_part(): [[ e == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vde2\\n+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vde2\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! 
-e /dev/vde2 ]; do echo '\\\\''Waiting for /dev/vde2 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vde 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vde\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vde ]]\\n++common_functions.sh:124: dev_part(): [[ e == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vde2\\n+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. /dev/vde2\\n++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vde 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vde\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vde ]]\\n++common_functions.sh:124: dev_part(): [[ e == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vde1\\n+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vde1\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vde1 ]; do echo '\\\\''Waiting for /dev/vde1 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vde 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vde\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vde ]]\\n++common_functions.sh:124: dev_part(): [[ e == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vde1\\n+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. /dev/vde1\\n+/entrypoint.sh:189: exit 0\", \"stderr_lines\": [\"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\", \"+/entrypoint.sh:26: source /config.static.sh\", \"++/config.static.sh:2: set -e\", \"++/entrypoint.sh:36: to_lowercase OSD_CEPH_DISK_PREPARE\", \"++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\", \"+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\", \"+/entrypoint.sh:38: create_mandatory_directories\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname 
/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rbd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\", \"+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\", \"+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\", \"+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\", \"+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\", \"+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\", \"+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\", \"+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. '{}' ';'\", \"+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\", \"+/entrypoint.sh:78: source start_osd.sh\", \"++start_osd.sh:2: set -e\", \"++start_osd.sh:4: is_redhat\", \"++common_functions.sh:211: is_redhat(): get_package_manager\", \"++common_functions.sh:196: get_package_manager(): is_available rpm\", \"++common_functions.sh:47: is_available(): command -v rpm\", \"++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\", \"++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\", \"++start_osd.sh:5: source /etc/sysconfig/ceph\", \"+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\", \"+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\", \"+/entrypoint.sh:79: OSD_TYPE=prepare\", \"+/entrypoint.sh:80: start_osd\", \"+start_osd.sh:11: start_osd(): get_config\", \"+/config.static.sh:114: get_config(): log 'static: does not generate config'\", \"+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\", \"+common_functions.sh:11: log(): local timestamp\", \"++common_functions.sh:12: log(): date '+%F %T'\", \"+common_functions.sh:12: log(): timestamp='2018-09-21 12:29:04'\", \"+common_functions.sh:13: log(): echo '2018-09-21 12:29:04 /entrypoint.sh: static: does not generate config'\", \"+common_functions.sh:14: log(): return 0\", \"+start_osd.sh:12: start_osd(): check_config\", \"+common_functions.sh:19: check_config(): [[ ! 
-e /etc/ceph/ceph.conf ]]\", \"+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\", \"+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\", \"+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\", \"++osd_disk_prepare.sh:2: source(): set -e\", \"+start_osd.sh:34: start_osd(): osd_disk_prepare\", \"+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vde ]]\", \"+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vde ]]\", \"+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' -e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\", \"+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\", \"+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vde print\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\", \"+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\", \"+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\", \"+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\", \"+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid 8f435e6d-db2f-4a09-b5f4-98704489f743 /dev/vde\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\", \"command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"set_type: Will colocate journal with data on /dev/vde\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. 
--lookup osd_mount_options_xfs\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"ptype_tobe_for_name: name = journal\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"create_partition: Creating journal partition num 2 size 512 on /dev/vde\", \"command_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:8f435e6d-db2f-4a09-b5f4-98704489f743 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vde\", \"update_partition: Calling partprobe on created device /dev/vde\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vde /usr/sbin/partprobe /dev/vde\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vde2 uuid path is /sys/dev/block/252:66/dm/uuid\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/8f435e6d-db2f-4a09-b5f4-98704489f743\", \"command_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vde\", \"update_partition: Calling partprobe on prepared device /dev/vde\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vde /usr/sbin/partprobe /dev/vde\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/8f435e6d-db2f-4a09-b5f4-98704489f743\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"set_data_partition: Creating osd partition on /dev/vde\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"ptype_tobe_for_name: name = data\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"create_partition: Creating data partition num 1 size 0 on /dev/vde\", \"command_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:d07f48e0-7ddf-4490-8619-c5702168e946 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vde\", \"update_partition: Calling partprobe on created device /dev/vde\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vde /usr/sbin/partprobe /dev/vde\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vde1 uuid path is /sys/dev/block/252:65/dm/uuid\", \"populate_data_path_device: Creating xfs fs on /dev/vde1\", \"command_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vde1\", \"mount: Mounting /dev/vde1 on /var/lib/ceph/tmp/mnt.Q1AShn with options noatime,largeio,inode64,swalloc\", \"command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vde1 /var/lib/ceph/tmp/mnt.Q1AShn\", \"command: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.Q1AShn\", 
\"populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.Q1AShn\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Q1AShn/ceph_fsid.19672.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Q1AShn/ceph_fsid.19672.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Q1AShn/fsid.19672.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Q1AShn/fsid.19672.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Q1AShn/magic.19672.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Q1AShn/magic.19672.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Q1AShn/journal_uuid.19672.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Q1AShn/journal_uuid.19672.tmp\", \"adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.Q1AShn/journal -> /dev/disk/by-partuuid/8f435e6d-db2f-4a09-b5f4-98704489f743\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Q1AShn/type.19672.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Q1AShn/type.19672.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Q1AShn\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Q1AShn\", \"unmount: Unmounting /var/lib/ceph/tmp/mnt.Q1AShn\", \"command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.Q1AShn\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"command_check_call: Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vde\", \"update_partition: Calling partprobe on prepared device /dev/vde\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vde /usr/sbin/partprobe /dev/vde\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vde1\", \"+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\", \"+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\", \"+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\", \"+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\", \"++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vde 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vde\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vde ]]\", \"++common_functions.sh:124: dev_part(): [[ e == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vde2\", \"+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vde2\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! 
-e /dev/vde2 ]; do echo '\\\\''Waiting for /dev/vde2 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vde 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vde\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vde ]]\", \"++common_functions.sh:124: dev_part(): [[ e == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vde2\", \"+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. /dev/vde2\", \"++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vde 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vde\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vde ]]\", \"++common_functions.sh:124: dev_part(): [[ e == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vde1\", \"+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vde1\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vde1 ]; do echo '\\\\''Waiting for /dev/vde1 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vde 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vde\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vde ]]\", \"++common_functions.sh:124: dev_part(): [[ e == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vde1\", \"+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. /dev/vde1\", \"+/entrypoint.sh:189: exit 0\"], \"stdout\": \"2018-09-21 12:29:04 /entrypoint.sh: VERBOSE: activating bash debugging mode.\\n2018-09-21 12:29:04 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\\n2018-09-21 12:29:04 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\\n2018-09-21 12:29:04 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\\nOSD_DEVICE=/dev/vde\\nLC_ALL=C\\nOSD_BLUESTORE=0\\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\nOSD_JOURNAL_SIZE=512\\nPWD=/\\nCEPH_VERSION=luminous\\nSHLVL=1\\nHOME=/root\\nCEPH_POINT_RELEASE=\\nCLUSTER=ceph\\nOSD_DMCRYPT=0\\nCEPH_DAEMON=OSD_CEPH_DISK_PREPARE\\ncontainer=oci\\nDEBUG=verbose\\nOSD_FILESTORE=1\\n_=/usr/bin/env\\nownership of '/var/run/ceph/' retained as ceph:ceph\\nownership of '/var/lib/ceph/mon' retained as ceph:ceph\\nownership of '/var/lib/ceph/mon/ceph-ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/osd' retained as ceph:ceph\\nownership of '/var/lib/ceph/mds' retained as ceph:ceph\\nownership of '/var/lib/ceph/mds/ceph-ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/tmp.pqI4DtXAXN' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/ceph-disk.prepare.lock' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/tmp.EDdRQpFCLF' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/tmp.YiDfSdGwqv' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/tmp/tmp.q2MI4FHwGk' from root:root to ceph:ceph\\nownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\\nownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-rgw' retained as ceph:ceph\\nownership of 
'/var/lib/ceph/bootstrap-mds' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-rbd' retained as ceph:ceph\\nownership of '/var/lib/ceph/mgr' retained as ceph:ceph\\nownership of '/var/lib/ceph/mgr/ceph-ceph-0' retained as ceph:ceph\\n2018-09-21 12:29:04 /entrypoint.sh: static: does not generate config\\nHEALTH_OK\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nmeta-data=/dev/vde1 isize=2048 agcount=4, agsize=688063 blks\\n = sectsz=512 attr=2, projid32bit=1\\n = crc=1 finobt=0, sparse=0\\ndata = bsize=4096 blocks=2752251, imaxpct=25\\n = sunit=0 swidth=0 blks\\nnaming =version 2 bsize=4096 ascii-ci=0 ftype=1\\nlog =internal log bsize=4096 blocks=2560, version=2\\n = sectsz=512 sunit=0 blks, lazy-count=1\\nrealtime =none extsz=4096 blocks=0, rtextents=0\\nThe operation has completed successfully.\\nchanged ownership of '/dev/vde2' from root:disk to ceph:ceph\\nchanged ownership of '/dev/vde1' from root:disk to ceph:ceph\", \"stdout_lines\": [\"2018-09-21 12:29:04 /entrypoint.sh: VERBOSE: activating bash debugging mode.\", \"2018-09-21 12:29:04 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\", \"2018-09-21 12:29:04 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\", \"2018-09-21 12:29:04 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\", \"OSD_DEVICE=/dev/vde\", \"LC_ALL=C\", \"OSD_BLUESTORE=0\", \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\", \"OSD_JOURNAL_SIZE=512\", \"PWD=/\", \"CEPH_VERSION=luminous\", \"SHLVL=1\", \"HOME=/root\", \"CEPH_POINT_RELEASE=\", \"CLUSTER=ceph\", \"OSD_DMCRYPT=0\", \"CEPH_DAEMON=OSD_CEPH_DISK_PREPARE\", \"container=oci\", \"DEBUG=verbose\", \"OSD_FILESTORE=1\", \"_=/usr/bin/env\", \"ownership of '/var/run/ceph/' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mon' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mon/ceph-ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/osd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mds' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mds/ceph-ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/tmp.pqI4DtXAXN' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/ceph-disk.prepare.lock' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/tmp.EDdRQpFCLF' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/tmp.YiDfSdGwqv' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/tmp/tmp.q2MI4FHwGk' from root:root to ceph:ceph\", \"ownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-rgw' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-mds' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-rbd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mgr' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mgr/ceph-ceph-0' retained as ceph:ceph\", \"2018-09-21 12:29:04 /entrypoint.sh: static: does not generate config\", 
\"HEALTH_OK\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"meta-data=/dev/vde1 isize=2048 agcount=4, agsize=688063 blks\", \" = sectsz=512 attr=2, projid32bit=1\", \" = crc=1 finobt=0, sparse=0\", \"data = bsize=4096 blocks=2752251, imaxpct=25\", \" = sunit=0 swidth=0 blks\", \"naming =version 2 bsize=4096 ascii-ci=0 ftype=1\", \"log =internal log bsize=4096 blocks=2560, version=2\", \" = sectsz=512 sunit=0 blks, lazy-count=1\", \"realtime =none extsz=4096 blocks=0, rtextents=0\", \"The operation has completed successfully.\", \"changed ownership of '/dev/vde2' from root:disk to ceph:ceph\", \"changed ownership of '/dev/vde1' from root:disk to ceph:ceph\"]}", "changed: [ceph-0] => (item=[{'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_item_label': u'/dev/vdf', u'script': u\"unit 'MiB' print\", '_ansible_no_log': False, u'changed': False, 'failed': False, 'item': u'/dev/vdf', u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vdf', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vdf', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 11264.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vdf']) => {\"changed\": true, \"cmd\": \"docker run --net=host --pid=host --privileged=true --name=ceph-osd-prepare-ceph-0-vdf -v /etc/ceph:/etc/ceph:z -v /var/lib/ceph/:/var/lib/ceph/:z -v /dev:/dev -v /etc/localtime:/etc/localtime:ro -e DEBUG=verbose -e CLUSTER=ceph -e CEPH_DAEMON=OSD_CEPH_DISK_PREPARE -e OSD_DEVICE=/dev/vdf -e OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=0 -e OSD_JOURNAL_SIZE=512 192.168.24.1:8787/rhceph:3-12\", \"delta\": \"0:00:06.991972\", \"end\": \"2018-09-21 12:29:17.997368\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdf\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vdf\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vdf\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", \"part_type\": \"primary\", \"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vdf\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vdf\"], \"rc\": 0, \"start\": \"2018-09-21 12:29:11.005396\", \"stderr\": \"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\\n+/entrypoint.sh:26: source /config.static.sh\\n++/config.static.sh:2: set -e\\n++/entrypoint.sh:36: to_lowercase OSD_CEPH_DISK_PREPARE\\n++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\\n+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\\n+/entrypoint.sh:38: create_mandatory_directories\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p 
/var/lib/ceph/bootstrap-osd\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rbd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rbd\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\\n+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\\n+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\\n+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\\n+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\\n+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\\n+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\\n+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. 
'{}' ';'\\n+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\\n+/entrypoint.sh:78: source start_osd.sh\\n++start_osd.sh:2: set -e\\n++start_osd.sh:4: is_redhat\\n++common_functions.sh:211: is_redhat(): get_package_manager\\n++common_functions.sh:196: get_package_manager(): is_available rpm\\n++common_functions.sh:47: is_available(): command -v rpm\\n++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\\n++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\\n++start_osd.sh:5: source /etc/sysconfig/ceph\\n+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\\n+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\\n+/entrypoint.sh:79: OSD_TYPE=prepare\\n+/entrypoint.sh:80: start_osd\\n+start_osd.sh:11: start_osd(): get_config\\n+/config.static.sh:114: get_config(): log 'static: does not generate config'\\n+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\\n+common_functions.sh:11: log(): local timestamp\\n++common_functions.sh:12: log(): date '+%F %T'\\n+common_functions.sh:12: log(): timestamp='2018-09-21 12:29:11'\\n+common_functions.sh:13: log(): echo '2018-09-21 12:29:11 /entrypoint.sh: static: does not generate config'\\n+common_functions.sh:14: log(): return 0\\n+start_osd.sh:12: start_osd(): check_config\\n+common_functions.sh:19: check_config(): [[ ! -e /etc/ceph/ceph.conf ]]\\n+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\\n+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\\n+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\\n++osd_disk_prepare.sh:2: source(): set -e\\n+start_osd.sh:34: start_osd(): osd_disk_prepare\\n+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vdf ]]\\n+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vdf ]]\\n+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' 
-e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\\n+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\\n+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vdf print\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\\n+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\\n+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\\n+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\\n+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid f2cb922f-939c-4d73-8d3b-1a56c6c856b7 /dev/vdf\\ncommand: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\\ncommand: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nset_type: Will colocate journal with data on /dev/vdf\\ncommand: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. 
--lookup osd_mount_options_xfs\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nptype_tobe_for_name: name = journal\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\ncreate_partition: Creating journal partition num 2 size 512 on /dev/vdf\\ncommand_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:f2cb922f-939c-4d73-8d3b-1a56c6c856b7 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vdf\\nupdate_partition: Calling partprobe on created device /dev/vdf\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdf /usr/sbin/partprobe /dev/vdf\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdf2 uuid path is /sys/dev/block/252:82/dm/uuid\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/f2cb922f-939c-4d73-8d3b-1a56c6c856b7\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vdf\\nupdate_partition: Calling partprobe on prepared device /dev/vdf\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdf /usr/sbin/partprobe /dev/vdf\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/f2cb922f-939c-4d73-8d3b-1a56c6c856b7\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nset_data_partition: Creating osd partition on /dev/vdf\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nptype_tobe_for_name: name = data\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\ncreate_partition: Creating data partition num 1 size 0 on /dev/vdf\\ncommand_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:9489d099-5c06-4e96-85f5-bb30642ff473 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdf\\nupdate_partition: Calling partprobe on created device /dev/vdf\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdf /usr/sbin/partprobe /dev/vdf\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdf1 uuid path is /sys/dev/block/252:81/dm/uuid\\npopulate_data_path_device: Creating xfs fs on /dev/vdf1\\ncommand_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vdf1\\nmount: Mounting /dev/vdf1 on /var/lib/ceph/tmp/mnt.wLZfUa with options noatime,largeio,inode64,swalloc\\ncommand_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vdf1 /var/lib/ceph/tmp/mnt.wLZfUa\\ncommand: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.wLZfUa\\npopulate_data_path: Preparing osd data dir 
/var/lib/ceph/tmp/mnt.wLZfUa\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.wLZfUa/ceph_fsid.19931.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.wLZfUa/ceph_fsid.19931.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.wLZfUa/fsid.19931.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.wLZfUa/fsid.19931.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.wLZfUa/magic.19931.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.wLZfUa/magic.19931.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.wLZfUa/journal_uuid.19931.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.wLZfUa/journal_uuid.19931.tmp\\nadjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.wLZfUa/journal -> /dev/disk/by-partuuid/f2cb922f-939c-4d73-8d3b-1a56c6c856b7\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.wLZfUa/type.19931.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.wLZfUa/type.19931.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.wLZfUa\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.wLZfUa\\nunmount: Unmounting /var/lib/ceph/tmp/mnt.wLZfUa\\ncommand_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.wLZfUa\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdf\\nupdate_partition: Calling partprobe on prepared device /dev/vdf\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdf /usr/sbin/partprobe /dev/vdf\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vdf1\\n+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\\n+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\\n+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\\n+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\\n++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vdf 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdf\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdf ]]\\n++common_functions.sh:124: dev_part(): [[ f == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdf2\\n+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdf2\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! 
-e /dev/vdf2 ]; do echo '\\\\''Waiting for /dev/vdf2 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vdf 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdf\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdf ]]\\n++common_functions.sh:124: dev_part(): [[ f == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdf2\\n+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. /dev/vdf2\\n++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vdf 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdf\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdf ]]\\n++common_functions.sh:124: dev_part(): [[ f == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdf1\\n+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdf1\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdf1 ]; do echo '\\\\''Waiting for /dev/vdf1 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vdf 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdf\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdf ]]\\n++common_functions.sh:124: dev_part(): [[ f == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdf1\\n+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. /dev/vdf1\\n+/entrypoint.sh:189: exit 0\", \"stderr_lines\": [\"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\", \"+/entrypoint.sh:26: source /config.static.sh\", \"++/config.static.sh:2: set -e\", \"++/entrypoint.sh:36: to_lowercase OSD_CEPH_DISK_PREPARE\", \"++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\", \"+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\", \"+/entrypoint.sh:38: create_mandatory_directories\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname 
/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rbd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\", \"+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\", \"+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\", \"+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\", \"+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\", \"+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\", \"+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\", \"+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. '{}' ';'\", \"+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\", \"+/entrypoint.sh:78: source start_osd.sh\", \"++start_osd.sh:2: set -e\", \"++start_osd.sh:4: is_redhat\", \"++common_functions.sh:211: is_redhat(): get_package_manager\", \"++common_functions.sh:196: get_package_manager(): is_available rpm\", \"++common_functions.sh:47: is_available(): command -v rpm\", \"++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\", \"++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\", \"++start_osd.sh:5: source /etc/sysconfig/ceph\", \"+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\", \"+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\", \"+/entrypoint.sh:79: OSD_TYPE=prepare\", \"+/entrypoint.sh:80: start_osd\", \"+start_osd.sh:11: start_osd(): get_config\", \"+/config.static.sh:114: get_config(): log 'static: does not generate config'\", \"+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\", \"+common_functions.sh:11: log(): local timestamp\", \"++common_functions.sh:12: log(): date '+%F %T'\", \"+common_functions.sh:12: log(): timestamp='2018-09-21 12:29:11'\", \"+common_functions.sh:13: log(): echo '2018-09-21 12:29:11 /entrypoint.sh: static: does not generate config'\", \"+common_functions.sh:14: log(): return 0\", \"+start_osd.sh:12: start_osd(): check_config\", \"+common_functions.sh:19: check_config(): [[ ! 
-e /etc/ceph/ceph.conf ]]\", \"+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\", \"+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\", \"+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\", \"++osd_disk_prepare.sh:2: source(): set -e\", \"+start_osd.sh:34: start_osd(): osd_disk_prepare\", \"+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vdf ]]\", \"+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vdf ]]\", \"+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' -e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\", \"+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\", \"+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vdf print\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\", \"+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\", \"+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\", \"+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\", \"+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid f2cb922f-939c-4d73-8d3b-1a56c6c856b7 /dev/vdf\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\", \"command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"set_type: Will colocate journal with data on /dev/vdf\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. 
--lookup osd_mount_options_xfs\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"ptype_tobe_for_name: name = journal\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"create_partition: Creating journal partition num 2 size 512 on /dev/vdf\", \"command_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:f2cb922f-939c-4d73-8d3b-1a56c6c856b7 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vdf\", \"update_partition: Calling partprobe on created device /dev/vdf\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdf /usr/sbin/partprobe /dev/vdf\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdf2 uuid path is /sys/dev/block/252:82/dm/uuid\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/f2cb922f-939c-4d73-8d3b-1a56c6c856b7\", \"command_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vdf\", \"update_partition: Calling partprobe on prepared device /dev/vdf\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdf /usr/sbin/partprobe /dev/vdf\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/f2cb922f-939c-4d73-8d3b-1a56c6c856b7\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"set_data_partition: Creating osd partition on /dev/vdf\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"ptype_tobe_for_name: name = data\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"create_partition: Creating data partition num 1 size 0 on /dev/vdf\", \"command_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:9489d099-5c06-4e96-85f5-bb30642ff473 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdf\", \"update_partition: Calling partprobe on created device /dev/vdf\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdf /usr/sbin/partprobe /dev/vdf\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdf1 uuid path is /sys/dev/block/252:81/dm/uuid\", \"populate_data_path_device: Creating xfs fs on /dev/vdf1\", \"command_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vdf1\", \"mount: Mounting /dev/vdf1 on /var/lib/ceph/tmp/mnt.wLZfUa with options noatime,largeio,inode64,swalloc\", \"command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vdf1 /var/lib/ceph/tmp/mnt.wLZfUa\", \"command: Running 
command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.wLZfUa\", \"populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.wLZfUa\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.wLZfUa/ceph_fsid.19931.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.wLZfUa/ceph_fsid.19931.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.wLZfUa/fsid.19931.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.wLZfUa/fsid.19931.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.wLZfUa/magic.19931.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.wLZfUa/magic.19931.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.wLZfUa/journal_uuid.19931.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.wLZfUa/journal_uuid.19931.tmp\", \"adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.wLZfUa/journal -> /dev/disk/by-partuuid/f2cb922f-939c-4d73-8d3b-1a56c6c856b7\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.wLZfUa/type.19931.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.wLZfUa/type.19931.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.wLZfUa\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.wLZfUa\", \"unmount: Unmounting /var/lib/ceph/tmp/mnt.wLZfUa\", \"command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.wLZfUa\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"command_check_call: Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdf\", \"update_partition: Calling partprobe on prepared device /dev/vdf\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdf /usr/sbin/partprobe /dev/vdf\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vdf1\", \"+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\", \"+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\", \"+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\", \"+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\", \"++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vdf 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdf\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdf ]]\", \"++common_functions.sh:124: dev_part(): [[ f == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdf2\", \"+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdf2\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! 
-e /dev/vdf2 ]; do echo '\\\\''Waiting for /dev/vdf2 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vdf 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdf\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdf ]]\", \"++common_functions.sh:124: dev_part(): [[ f == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdf2\", \"+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. /dev/vdf2\", \"++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vdf 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdf\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdf ]]\", \"++common_functions.sh:124: dev_part(): [[ f == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdf1\", \"+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdf1\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdf1 ]; do echo '\\\\''Waiting for /dev/vdf1 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vdf 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdf\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdf ]]\", \"++common_functions.sh:124: dev_part(): [[ f == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdf1\", \"+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. /dev/vdf1\", \"+/entrypoint.sh:189: exit 0\"], \"stdout\": \"2018-09-21 12:29:11 /entrypoint.sh: VERBOSE: activating bash debugging mode.\\n2018-09-21 12:29:11 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\\n2018-09-21 12:29:11 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\\n2018-09-21 12:29:11 /entrypoint.sh: This container's environment variables are: HOSTNAME=ceph-0\\nOSD_DEVICE=/dev/vdf\\nLC_ALL=C\\nOSD_BLUESTORE=0\\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\nOSD_JOURNAL_SIZE=512\\nPWD=/\\nCEPH_VERSION=luminous\\nSHLVL=1\\nHOME=/root\\nCEPH_POINT_RELEASE=\\nCLUSTER=ceph\\nOSD_DMCRYPT=0\\nCEPH_DAEMON=OSD_CEPH_DISK_PREPARE\\ncontainer=oci\\nDEBUG=verbose\\nOSD_FILESTORE=1\\n_=/usr/bin/env\\nownership of '/var/run/ceph/' retained as ceph:ceph\\nownership of '/var/lib/ceph/mon' retained as ceph:ceph\\nownership of '/var/lib/ceph/mon/ceph-ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/osd' retained as ceph:ceph\\nownership of '/var/lib/ceph/mds' retained as ceph:ceph\\nownership of '/var/lib/ceph/mds/ceph-ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/tmp.pqI4DtXAXN' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/ceph-disk.prepare.lock' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/tmp.EDdRQpFCLF' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/tmp.YiDfSdGwqv' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/tmp.q2MI4FHwGk' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/tmp/tmp.L5dDVJyOWZ' from root:root to ceph:ceph\\nownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\\nownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' retained as ceph:ceph\\nownership of 
'/var/lib/ceph/bootstrap-rgw' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-mds' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-rbd' retained as ceph:ceph\\nownership of '/var/lib/ceph/mgr' retained as ceph:ceph\\nownership of '/var/lib/ceph/mgr/ceph-ceph-0' retained as ceph:ceph\\n2018-09-21 12:29:11 /entrypoint.sh: static: does not generate config\\nHEALTH_OK\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nmeta-data=/dev/vdf1 isize=2048 agcount=4, agsize=688063 blks\\n = sectsz=512 attr=2, projid32bit=1\\n = crc=1 finobt=0, sparse=0\\ndata = bsize=4096 blocks=2752251, imaxpct=25\\n = sunit=0 swidth=0 blks\\nnaming =version 2 bsize=4096 ascii-ci=0 ftype=1\\nlog =internal log bsize=4096 blocks=2560, version=2\\n = sectsz=512 sunit=0 blks, lazy-count=1\\nrealtime =none extsz=4096 blocks=0, rtextents=0\\nThe operation has completed successfully.\\nchanged ownership of '/dev/vdf2' from root:disk to ceph:ceph\\nchanged ownership of '/dev/vdf1' from root:disk to ceph:ceph\", \"stdout_lines\": [\"2018-09-21 12:29:11 /entrypoint.sh: VERBOSE: activating bash debugging mode.\", \"2018-09-21 12:29:11 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\", \"2018-09-21 12:29:11 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\", \"2018-09-21 12:29:11 /entrypoint.sh: This container's environment variables are: HOSTNAME=ceph-0\", \"OSD_DEVICE=/dev/vdf\", \"LC_ALL=C\", \"OSD_BLUESTORE=0\", \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\", \"OSD_JOURNAL_SIZE=512\", \"PWD=/\", \"CEPH_VERSION=luminous\", \"SHLVL=1\", \"HOME=/root\", \"CEPH_POINT_RELEASE=\", \"CLUSTER=ceph\", \"OSD_DMCRYPT=0\", \"CEPH_DAEMON=OSD_CEPH_DISK_PREPARE\", \"container=oci\", \"DEBUG=verbose\", \"OSD_FILESTORE=1\", \"_=/usr/bin/env\", \"ownership of '/var/run/ceph/' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mon' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mon/ceph-ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/osd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mds' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mds/ceph-ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/tmp.pqI4DtXAXN' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/ceph-disk.prepare.lock' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/tmp.EDdRQpFCLF' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/tmp.YiDfSdGwqv' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/tmp.q2MI4FHwGk' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/tmp/tmp.L5dDVJyOWZ' from root:root to ceph:ceph\", \"ownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-rgw' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-mds' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-rbd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mgr' retained as ceph:ceph\", \"ownership of 
'/var/lib/ceph/mgr/ceph-ceph-0' retained as ceph:ceph\", \"2018-09-21 12:29:11 /entrypoint.sh: static: does not generate config\", \"HEALTH_OK\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"meta-data=/dev/vdf1 isize=2048 agcount=4, agsize=688063 blks\", \" = sectsz=512 attr=2, projid32bit=1\", \" = crc=1 finobt=0, sparse=0\", \"data = bsize=4096 blocks=2752251, imaxpct=25\", \" = sunit=0 swidth=0 blks\", \"naming =version 2 bsize=4096 ascii-ci=0 ftype=1\", \"log =internal log bsize=4096 blocks=2560, version=2\", \" = sectsz=512 sunit=0 blks, lazy-count=1\", \"realtime =none extsz=4096 blocks=0, rtextents=0\", \"The operation has completed successfully.\", \"changed ownership of '/dev/vdf2' from root:disk to ceph:ceph\", \"changed ownership of '/dev/vdf1' from root:disk to ceph:ceph\"]}", "", "TASK [ceph-osd : automatic prepare ceph containerized osd disk collocated] *****", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/scenarios/collocated.yml:30", "Friday 21 September 2018 08:29:18 -0400 (0:00:35.433) 0:03:00.173 ****** ", "skipping: [ceph-0] => (item=/dev/vdb) => {\"changed\": false, \"item\": \"/dev/vdb\", \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item=/dev/vdc) => {\"changed\": false, \"item\": \"/dev/vdc\", \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item=/dev/vdd) => {\"changed\": false, \"item\": \"/dev/vdd\", \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item=/dev/vde) => {\"changed\": false, \"item\": \"/dev/vde\", \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item=/dev/vdf) => {\"changed\": false, \"item\": \"/dev/vdf\", \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : manually prepare ceph \"filestore\" non-containerized osd disk(s) with collocated osd data and journal] ***", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/scenarios/collocated.yml:53", "Friday 21 September 2018 08:29:18 -0400 (0:00:00.076) 0:03:00.249 ****** ", "skipping: [ceph-0] => (item=[{'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_item_label': u'/dev/vdb', u'script': u\"unit 'MiB' print\", '_ansible_no_log': False, u'changed': False, 'failed': False, 'item': u'/dev/vdb', u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vdb', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vdb', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 11264.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vdb']) => {\"changed\": false, \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdb\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vdb\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vdb\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", \"part_type\": \"primary\", \"state\": \"info\", 
\"unit\": \"MiB\"}}, \"item\": \"/dev/vdb\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vdb\"], \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item=[{'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_item_label': u'/dev/vdc', u'script': u\"unit 'MiB' print\", '_ansible_no_log': False, u'changed': False, 'failed': False, 'item': u'/dev/vdc', u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vdc', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vdc', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 11264.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vdc']) => {\"changed\": false, \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdc\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vdc\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vdc\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", \"part_type\": \"primary\", \"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vdc\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vdc\"], \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item=[{'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_item_label': u'/dev/vdd', u'script': u\"unit 'MiB' print\", '_ansible_no_log': False, u'changed': False, 'failed': False, 'item': u'/dev/vdd', u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vdd', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vdd', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 11264.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vdd']) => {\"changed\": false, \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdd\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vdd\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vdd\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", \"part_type\": \"primary\", \"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vdd\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vdd\"], \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item=[{'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_item_label': u'/dev/vde', u'script': u\"unit 'MiB' print\", '_ansible_no_log': False, u'changed': False, 'failed': False, 'item': u'/dev/vde', 
u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vde', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vde', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 11264.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vde']) => {\"changed\": false, \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vde\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vde\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vde\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", \"part_type\": \"primary\", \"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vde\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vde\"], \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item=[{'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_item_label': u'/dev/vdf', u'script': u\"unit 'MiB' print\", '_ansible_no_log': False, u'changed': False, 'failed': False, 'item': u'/dev/vdf', u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vdf', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vdf', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 11264.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vdf']) => {\"changed\": false, \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdf\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vdf\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vdf\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", \"part_type\": \"primary\", \"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vdf\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vdf\"], \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : include scenarios/non-collocated.yml] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:48", "Friday 21 September 2018 08:29:18 -0400 (0:00:00.116) 0:03:00.365 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : include scenarios/lvm.yml] ************************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:56", "Friday 21 September 2018 08:29:18 -0400 (0:00:00.050) 0:03:00.415 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result 
was False\"}", "", "TASK [ceph-osd : include activate_osds.yml] ************************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:64", "Friday 21 September 2018 08:29:18 -0400 (0:00:00.044) 0:03:00.460 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : include start_osds.yml] ***************************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:72", "Friday 21 September 2018 08:29:18 -0400 (0:00:00.044) 0:03:00.504 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : include docker/main.yml] **************************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:80", "Friday 21 September 2018 08:29:18 -0400 (0:00:00.043) 0:03:00.548 ****** ", "included: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/main.yml for ceph-0", "", "TASK [ceph-osd : include start_docker_osd.yml] *********************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/main.yml:2", "Friday 21 September 2018 08:29:18 -0400 (0:00:00.099) 0:03:00.648 ****** ", "included: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/start_docker_osd.yml for ceph-0", "", "TASK [ceph-osd : umount ceph disk (if on openstack)] ***************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/start_docker_osd.yml:4", "Friday 21 September 2018 08:29:18 -0400 (0:00:00.079) 0:03:00.727 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : test if the container image has the disk_list function] *******", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/start_docker_osd.yml:13", "Friday 21 September 2018 08:29:18 -0400 (0:00:00.053) 0:03:00.781 ****** ", "ok: [ceph-0] => {\"changed\": false, \"cmd\": [\"docker\", \"run\", \"--rm\", \"--entrypoint=stat\", \"192.168.24.1:8787/rhceph:3-12\", \"disk_list.sh\"], \"delta\": \"0:00:00.304343\", \"end\": \"2018-09-21 12:29:19.163492\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-09-21 12:29:18.859149\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \" File: 'disk_list.sh'\\n Size: 4074 \\tBlocks: 8 IO Block: 4096 regular file\\nDevice: 2ah/42d\\tInode: 25321704 Links: 1\\nAccess: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)\\nAccess: 2018-08-06 22:27:40.000000000 +0000\\nModify: 2018-08-06 22:27:40.000000000 +0000\\nChange: 2018-09-21 12:28:19.703239788 +0000\\n Birth: -\", \"stdout_lines\": [\" File: 'disk_list.sh'\", \" Size: 4074 \\tBlocks: 8 IO Block: 4096 regular file\", \"Device: 2ah/42d\\tInode: 25321704 Links: 1\", \"Access: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)\", \"Access: 2018-08-06 22:27:40.000000000 +0000\", \"Modify: 2018-08-06 22:27:40.000000000 +0000\", \"Change: 2018-09-21 12:28:19.703239788 +0000\", \" Birth: -\"]}", "", "TASK [ceph-osd : generate ceph osd docker run script] **************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/start_docker_osd.yml:19", "Friday 21 September 2018 08:29:19 -0400 (0:00:00.538) 0:03:01.319 ****** ", "changed: [ceph-0] => {\"changed\": true, \"checksum\": \"5542e950125b3dbd25e146575a148538f90dc2a6\", \"dest\": \"/usr/share/ceph-osd-run.sh\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"81913dc490826e0e8f21ed305bd0867e\", \"mode\": \"0744\", \"owner\": \"root\", 
\"secontext\": \"system_u:object_r:usr_t:s0\", \"size\": 964, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1537532959.25-114700994477229/source\", \"state\": \"file\", \"uid\": 0}", "", "TASK [ceph-osd : generate systemd unit file] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/start_docker_osd.yml:30", "Friday 21 September 2018 08:29:20 -0400 (0:00:00.878) 0:03:02.197 ****** ", "changed: [ceph-0] => {\"changed\": true, \"checksum\": \"b7abfb86a4af8d6e54d349965cae96bf9b995c49\", \"dest\": \"/etc/systemd/system/ceph-osd@.service\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"8a53f95e6590750e7c4807589dd5864c\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:systemd_unit_file_t:s0\", \"size\": 496, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1537532960.27-84518264433441/source\", \"state\": \"file\", \"uid\": 0}", "", "TASK [ceph-osd : systemd start osd container] **********************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/start_docker_osd.yml:41", "Friday 21 September 2018 08:29:21 -0400 (0:00:01.066) 0:03:03.264 ****** ", "changed: [ceph-0] => (item=/dev/vdb) => {\"changed\": true, \"enabled\": true, \"item\": \"/dev/vdb\", \"name\": \"ceph-osd@vdb\", \"state\": \"started\", \"status\": {\"ActiveEnterTimestampMonotonic\": \"0\", \"ActiveExitTimestampMonotonic\": \"0\", \"ActiveState\": \"inactive\", \"After\": \"systemd-journald.socket docker.service system-ceph\\\\x5cx2dosd.slice basic.target\", \"AllowIsolate\": \"no\", \"AmbientCapabilities\": \"0\", \"AssertResult\": \"no\", \"AssertTimestampMonotonic\": \"0\", \"Before\": \"shutdown.target\", \"BlockIOAccounting\": \"no\", \"BlockIOWeight\": \"18446744073709551615\", \"CPUAccounting\": \"no\", \"CPUQuotaPerSecUSec\": \"infinity\", \"CPUSchedulingPolicy\": \"0\", \"CPUSchedulingPriority\": \"0\", \"CPUSchedulingResetOnFork\": \"no\", \"CPUShares\": \"18446744073709551615\", \"CanIsolate\": \"no\", \"CanReload\": \"no\", \"CanStart\": \"yes\", \"CanStop\": \"yes\", \"CapabilityBoundingSet\": \"18446744073709551615\", \"ConditionResult\": \"no\", \"ConditionTimestampMonotonic\": \"0\", \"Conflicts\": \"shutdown.target\", \"ControlPID\": \"0\", \"DefaultDependencies\": \"yes\", \"Delegate\": \"no\", \"Description\": \"Ceph OSD\", \"DevicePolicy\": \"auto\", \"EnvironmentFile\": \"/etc/environment (ignore_errors=yes)\", \"ExecMainCode\": \"0\", \"ExecMainExitTimestampMonotonic\": \"0\", \"ExecMainPID\": \"0\", \"ExecMainStartTimestampMonotonic\": \"0\", \"ExecMainStatus\": \"0\", \"ExecStart\": \"{ path=/usr/share/ceph-osd-run.sh ; argv[]=/usr/share/ceph-osd-run.sh %i ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStartPre\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker rm -f ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStop\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker stop ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"FailureAction\": \"none\", \"FileDescriptorStoreMax\": \"0\", \"FragmentPath\": \"/etc/systemd/system/ceph-osd@.service\", \"GuessMainPID\": \"yes\", \"IOScheduling\": \"0\", \"Id\": \"ceph-osd@vdb.service\", \"IgnoreOnIsolate\": \"no\", \"IgnoreOnSnapshot\": \"no\", \"IgnoreSIGPIPE\": \"yes\", \"InactiveEnterTimestampMonotonic\": \"0\", \"InactiveExitTimestampMonotonic\": \"0\", 
\"JobTimeoutAction\": \"none\", \"JobTimeoutUSec\": \"0\", \"KillMode\": \"control-group\", \"KillSignal\": \"15\", \"LimitAS\": \"18446744073709551615\", \"LimitCORE\": \"18446744073709551615\", \"LimitCPU\": \"18446744073709551615\", \"LimitDATA\": \"18446744073709551615\", \"LimitFSIZE\": \"18446744073709551615\", \"LimitLOCKS\": \"18446744073709551615\", \"LimitMEMLOCK\": \"65536\", \"LimitMSGQUEUE\": \"819200\", \"LimitNICE\": \"0\", \"LimitNOFILE\": \"4096\", \"LimitNPROC\": \"22973\", \"LimitRSS\": \"18446744073709551615\", \"LimitRTPRIO\": \"0\", \"LimitRTTIME\": \"18446744073709551615\", \"LimitSIGPENDING\": \"22973\", \"LimitSTACK\": \"18446744073709551615\", \"LoadState\": \"loaded\", \"MainPID\": \"0\", \"MemoryAccounting\": \"no\", \"MemoryCurrent\": \"18446744073709551615\", \"MemoryLimit\": \"18446744073709551615\", \"MountFlags\": \"0\", \"Names\": \"ceph-osd@vdb.service\", \"NeedDaemonReload\": \"no\", \"Nice\": \"0\", \"NoNewPrivileges\": \"no\", \"NonBlocking\": \"no\", \"NotifyAccess\": \"none\", \"OOMScoreAdjust\": \"0\", \"OnFailureJobMode\": \"replace\", \"PermissionsStartOnly\": \"no\", \"PrivateDevices\": \"no\", \"PrivateNetwork\": \"no\", \"PrivateTmp\": \"no\", \"ProtectHome\": \"no\", \"ProtectSystem\": \"no\", \"RefuseManualStart\": \"no\", \"RefuseManualStop\": \"no\", \"RemainAfterExit\": \"no\", \"Requires\": \"basic.target\", \"Restart\": \"always\", \"RestartUSec\": \"10s\", \"Result\": \"success\", \"RootDirectoryStartOnly\": \"no\", \"RuntimeDirectoryMode\": \"0755\", \"SameProcessGroup\": \"no\", \"SecureBits\": \"0\", \"SendSIGHUP\": \"no\", \"SendSIGKILL\": \"yes\", \"Slice\": \"system-ceph\\\\x5cx2dosd.slice\", \"StandardError\": \"inherit\", \"StandardInput\": \"null\", \"StandardOutput\": \"journal\", \"StartLimitAction\": \"none\", \"StartLimitBurst\": \"5\", \"StartLimitInterval\": \"10000000\", \"StartupBlockIOWeight\": \"18446744073709551615\", \"StartupCPUShares\": \"18446744073709551615\", \"StatusErrno\": \"0\", \"StopWhenUnneeded\": \"no\", \"SubState\": \"dead\", \"SyslogLevelPrefix\": \"yes\", \"SyslogPriority\": \"30\", \"SystemCallErrorNumber\": \"0\", \"TTYReset\": \"no\", \"TTYVHangup\": \"no\", \"TTYVTDisallocate\": \"no\", \"TasksAccounting\": \"no\", \"TasksCurrent\": \"18446744073709551615\", \"TasksMax\": \"18446744073709551615\", \"TimeoutStartUSec\": \"2min\", \"TimeoutStopUSec\": \"15s\", \"TimerSlackNSec\": \"50000\", \"Transient\": \"no\", \"Type\": \"simple\", \"UMask\": \"0022\", \"UnitFilePreset\": \"disabled\", \"UnitFileState\": \"disabled\", \"Wants\": \"system-ceph\\\\x5cx2dosd.slice\", \"WatchdogTimestampMonotonic\": \"0\", \"WatchdogUSec\": \"0\"}}", "changed: [ceph-0] => (item=/dev/vdc) => {\"changed\": true, \"enabled\": true, \"item\": \"/dev/vdc\", \"name\": \"ceph-osd@vdc\", \"state\": \"started\", \"status\": {\"ActiveEnterTimestampMonotonic\": \"0\", \"ActiveExitTimestampMonotonic\": \"0\", \"ActiveState\": \"inactive\", \"After\": \"basic.target system-ceph\\\\x5cx2dosd.slice docker.service systemd-journald.socket\", \"AllowIsolate\": \"no\", \"AmbientCapabilities\": \"0\", \"AssertResult\": \"no\", \"AssertTimestampMonotonic\": \"0\", \"Before\": \"shutdown.target\", \"BlockIOAccounting\": \"no\", \"BlockIOWeight\": \"18446744073709551615\", \"CPUAccounting\": \"no\", \"CPUQuotaPerSecUSec\": \"infinity\", \"CPUSchedulingPolicy\": \"0\", \"CPUSchedulingPriority\": \"0\", \"CPUSchedulingResetOnFork\": \"no\", \"CPUShares\": \"18446744073709551615\", \"CanIsolate\": \"no\", \"CanReload\": \"no\", 
\"CanStart\": \"yes\", \"CanStop\": \"yes\", \"CapabilityBoundingSet\": \"18446744073709551615\", \"ConditionResult\": \"no\", \"ConditionTimestampMonotonic\": \"0\", \"Conflicts\": \"shutdown.target\", \"ControlPID\": \"0\", \"DefaultDependencies\": \"yes\", \"Delegate\": \"no\", \"Description\": \"Ceph OSD\", \"DevicePolicy\": \"auto\", \"EnvironmentFile\": \"/etc/environment (ignore_errors=yes)\", \"ExecMainCode\": \"0\", \"ExecMainExitTimestampMonotonic\": \"0\", \"ExecMainPID\": \"0\", \"ExecMainStartTimestampMonotonic\": \"0\", \"ExecMainStatus\": \"0\", \"ExecStart\": \"{ path=/usr/share/ceph-osd-run.sh ; argv[]=/usr/share/ceph-osd-run.sh %i ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStartPre\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker rm -f ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStop\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker stop ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"FailureAction\": \"none\", \"FileDescriptorStoreMax\": \"0\", \"FragmentPath\": \"/etc/systemd/system/ceph-osd@.service\", \"GuessMainPID\": \"yes\", \"IOScheduling\": \"0\", \"Id\": \"ceph-osd@vdc.service\", \"IgnoreOnIsolate\": \"no\", \"IgnoreOnSnapshot\": \"no\", \"IgnoreSIGPIPE\": \"yes\", \"InactiveEnterTimestampMonotonic\": \"0\", \"InactiveExitTimestampMonotonic\": \"0\", \"JobTimeoutAction\": \"none\", \"JobTimeoutUSec\": \"0\", \"KillMode\": \"control-group\", \"KillSignal\": \"15\", \"LimitAS\": \"18446744073709551615\", \"LimitCORE\": \"18446744073709551615\", \"LimitCPU\": \"18446744073709551615\", \"LimitDATA\": \"18446744073709551615\", \"LimitFSIZE\": \"18446744073709551615\", \"LimitLOCKS\": \"18446744073709551615\", \"LimitMEMLOCK\": \"65536\", \"LimitMSGQUEUE\": \"819200\", \"LimitNICE\": \"0\", \"LimitNOFILE\": \"4096\", \"LimitNPROC\": \"22973\", \"LimitRSS\": \"18446744073709551615\", \"LimitRTPRIO\": \"0\", \"LimitRTTIME\": \"18446744073709551615\", \"LimitSIGPENDING\": \"22973\", \"LimitSTACK\": \"18446744073709551615\", \"LoadState\": \"loaded\", \"MainPID\": \"0\", \"MemoryAccounting\": \"no\", \"MemoryCurrent\": \"18446744073709551615\", \"MemoryLimit\": \"18446744073709551615\", \"MountFlags\": \"0\", \"Names\": \"ceph-osd@vdc.service\", \"NeedDaemonReload\": \"no\", \"Nice\": \"0\", \"NoNewPrivileges\": \"no\", \"NonBlocking\": \"no\", \"NotifyAccess\": \"none\", \"OOMScoreAdjust\": \"0\", \"OnFailureJobMode\": \"replace\", \"PermissionsStartOnly\": \"no\", \"PrivateDevices\": \"no\", \"PrivateNetwork\": \"no\", \"PrivateTmp\": \"no\", \"ProtectHome\": \"no\", \"ProtectSystem\": \"no\", \"RefuseManualStart\": \"no\", \"RefuseManualStop\": \"no\", \"RemainAfterExit\": \"no\", \"Requires\": \"basic.target\", \"Restart\": \"always\", \"RestartUSec\": \"10s\", \"Result\": \"success\", \"RootDirectoryStartOnly\": \"no\", \"RuntimeDirectoryMode\": \"0755\", \"SameProcessGroup\": \"no\", \"SecureBits\": \"0\", \"SendSIGHUP\": \"no\", \"SendSIGKILL\": \"yes\", \"Slice\": \"system-ceph\\\\x5cx2dosd.slice\", \"StandardError\": \"inherit\", \"StandardInput\": \"null\", \"StandardOutput\": \"journal\", \"StartLimitAction\": \"none\", \"StartLimitBurst\": \"5\", \"StartLimitInterval\": \"10000000\", \"StartupBlockIOWeight\": \"18446744073709551615\", \"StartupCPUShares\": \"18446744073709551615\", \"StatusErrno\": \"0\", \"StopWhenUnneeded\": \"no\", \"SubState\": 
\"dead\", \"SyslogLevelPrefix\": \"yes\", \"SyslogPriority\": \"30\", \"SystemCallErrorNumber\": \"0\", \"TTYReset\": \"no\", \"TTYVHangup\": \"no\", \"TTYVTDisallocate\": \"no\", \"TasksAccounting\": \"no\", \"TasksCurrent\": \"18446744073709551615\", \"TasksMax\": \"18446744073709551615\", \"TimeoutStartUSec\": \"2min\", \"TimeoutStopUSec\": \"15s\", \"TimerSlackNSec\": \"50000\", \"Transient\": \"no\", \"Type\": \"simple\", \"UMask\": \"0022\", \"UnitFilePreset\": \"disabled\", \"UnitFileState\": \"enabled\", \"Wants\": \"system-ceph\\\\x5cx2dosd.slice\", \"WatchdogTimestampMonotonic\": \"0\", \"WatchdogUSec\": \"0\"}}", "changed: [ceph-0] => (item=/dev/vdd) => {\"changed\": true, \"enabled\": true, \"item\": \"/dev/vdd\", \"name\": \"ceph-osd@vdd\", \"state\": \"started\", \"status\": {\"ActiveEnterTimestampMonotonic\": \"0\", \"ActiveExitTimestampMonotonic\": \"0\", \"ActiveState\": \"inactive\", \"After\": \"basic.target system-ceph\\\\x5cx2dosd.slice docker.service systemd-journald.socket\", \"AllowIsolate\": \"no\", \"AmbientCapabilities\": \"0\", \"AssertResult\": \"no\", \"AssertTimestampMonotonic\": \"0\", \"Before\": \"shutdown.target\", \"BlockIOAccounting\": \"no\", \"BlockIOWeight\": \"18446744073709551615\", \"CPUAccounting\": \"no\", \"CPUQuotaPerSecUSec\": \"infinity\", \"CPUSchedulingPolicy\": \"0\", \"CPUSchedulingPriority\": \"0\", \"CPUSchedulingResetOnFork\": \"no\", \"CPUShares\": \"18446744073709551615\", \"CanIsolate\": \"no\", \"CanReload\": \"no\", \"CanStart\": \"yes\", \"CanStop\": \"yes\", \"CapabilityBoundingSet\": \"18446744073709551615\", \"ConditionResult\": \"no\", \"ConditionTimestampMonotonic\": \"0\", \"Conflicts\": \"shutdown.target\", \"ControlPID\": \"0\", \"DefaultDependencies\": \"yes\", \"Delegate\": \"no\", \"Description\": \"Ceph OSD\", \"DevicePolicy\": \"auto\", \"EnvironmentFile\": \"/etc/environment (ignore_errors=yes)\", \"ExecMainCode\": \"0\", \"ExecMainExitTimestampMonotonic\": \"0\", \"ExecMainPID\": \"0\", \"ExecMainStartTimestampMonotonic\": \"0\", \"ExecMainStatus\": \"0\", \"ExecStart\": \"{ path=/usr/share/ceph-osd-run.sh ; argv[]=/usr/share/ceph-osd-run.sh %i ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStartPre\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker rm -f ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStop\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker stop ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"FailureAction\": \"none\", \"FileDescriptorStoreMax\": \"0\", \"FragmentPath\": \"/etc/systemd/system/ceph-osd@.service\", \"GuessMainPID\": \"yes\", \"IOScheduling\": \"0\", \"Id\": \"ceph-osd@vdd.service\", \"IgnoreOnIsolate\": \"no\", \"IgnoreOnSnapshot\": \"no\", \"IgnoreSIGPIPE\": \"yes\", \"InactiveEnterTimestampMonotonic\": \"0\", \"InactiveExitTimestampMonotonic\": \"0\", \"JobTimeoutAction\": \"none\", \"JobTimeoutUSec\": \"0\", \"KillMode\": \"control-group\", \"KillSignal\": \"15\", \"LimitAS\": \"18446744073709551615\", \"LimitCORE\": \"18446744073709551615\", \"LimitCPU\": \"18446744073709551615\", \"LimitDATA\": \"18446744073709551615\", \"LimitFSIZE\": \"18446744073709551615\", \"LimitLOCKS\": \"18446744073709551615\", \"LimitMEMLOCK\": \"65536\", \"LimitMSGQUEUE\": \"819200\", \"LimitNICE\": \"0\", \"LimitNOFILE\": \"4096\", \"LimitNPROC\": \"22973\", \"LimitRSS\": \"18446744073709551615\", 
\"LimitRTPRIO\": \"0\", \"LimitRTTIME\": \"18446744073709551615\", \"LimitSIGPENDING\": \"22973\", \"LimitSTACK\": \"18446744073709551615\", \"LoadState\": \"loaded\", \"MainPID\": \"0\", \"MemoryAccounting\": \"no\", \"MemoryCurrent\": \"18446744073709551615\", \"MemoryLimit\": \"18446744073709551615\", \"MountFlags\": \"0\", \"Names\": \"ceph-osd@vdd.service\", \"NeedDaemonReload\": \"no\", \"Nice\": \"0\", \"NoNewPrivileges\": \"no\", \"NonBlocking\": \"no\", \"NotifyAccess\": \"none\", \"OOMScoreAdjust\": \"0\", \"OnFailureJobMode\": \"replace\", \"PermissionsStartOnly\": \"no\", \"PrivateDevices\": \"no\", \"PrivateNetwork\": \"no\", \"PrivateTmp\": \"no\", \"ProtectHome\": \"no\", \"ProtectSystem\": \"no\", \"RefuseManualStart\": \"no\", \"RefuseManualStop\": \"no\", \"RemainAfterExit\": \"no\", \"Requires\": \"basic.target\", \"Restart\": \"always\", \"RestartUSec\": \"10s\", \"Result\": \"success\", \"RootDirectoryStartOnly\": \"no\", \"RuntimeDirectoryMode\": \"0755\", \"SameProcessGroup\": \"no\", \"SecureBits\": \"0\", \"SendSIGHUP\": \"no\", \"SendSIGKILL\": \"yes\", \"Slice\": \"system-ceph\\\\x5cx2dosd.slice\", \"StandardError\": \"inherit\", \"StandardInput\": \"null\", \"StandardOutput\": \"journal\", \"StartLimitAction\": \"none\", \"StartLimitBurst\": \"5\", \"StartLimitInterval\": \"10000000\", \"StartupBlockIOWeight\": \"18446744073709551615\", \"StartupCPUShares\": \"18446744073709551615\", \"StatusErrno\": \"0\", \"StopWhenUnneeded\": \"no\", \"SubState\": \"dead\", \"SyslogLevelPrefix\": \"yes\", \"SyslogPriority\": \"30\", \"SystemCallErrorNumber\": \"0\", \"TTYReset\": \"no\", \"TTYVHangup\": \"no\", \"TTYVTDisallocate\": \"no\", \"TasksAccounting\": \"no\", \"TasksCurrent\": \"18446744073709551615\", \"TasksMax\": \"18446744073709551615\", \"TimeoutStartUSec\": \"2min\", \"TimeoutStopUSec\": \"15s\", \"TimerSlackNSec\": \"50000\", \"Transient\": \"no\", \"Type\": \"simple\", \"UMask\": \"0022\", \"UnitFilePreset\": \"disabled\", \"UnitFileState\": \"enabled\", \"Wants\": \"system-ceph\\\\x5cx2dosd.slice\", \"WatchdogTimestampMonotonic\": \"0\", \"WatchdogUSec\": \"0\"}}", "changed: [ceph-0] => (item=/dev/vde) => {\"changed\": true, \"enabled\": true, \"item\": \"/dev/vde\", \"name\": \"ceph-osd@vde\", \"state\": \"started\", \"status\": {\"ActiveEnterTimestampMonotonic\": \"0\", \"ActiveExitTimestampMonotonic\": \"0\", \"ActiveState\": \"inactive\", \"After\": \"docker.service systemd-journald.socket basic.target system-ceph\\\\x5cx2dosd.slice\", \"AllowIsolate\": \"no\", \"AmbientCapabilities\": \"0\", \"AssertResult\": \"no\", \"AssertTimestampMonotonic\": \"0\", \"Before\": \"shutdown.target\", \"BlockIOAccounting\": \"no\", \"BlockIOWeight\": \"18446744073709551615\", \"CPUAccounting\": \"no\", \"CPUQuotaPerSecUSec\": \"infinity\", \"CPUSchedulingPolicy\": \"0\", \"CPUSchedulingPriority\": \"0\", \"CPUSchedulingResetOnFork\": \"no\", \"CPUShares\": \"18446744073709551615\", \"CanIsolate\": \"no\", \"CanReload\": \"no\", \"CanStart\": \"yes\", \"CanStop\": \"yes\", \"CapabilityBoundingSet\": \"18446744073709551615\", \"ConditionResult\": \"no\", \"ConditionTimestampMonotonic\": \"0\", \"Conflicts\": \"shutdown.target\", \"ControlPID\": \"0\", \"DefaultDependencies\": \"yes\", \"Delegate\": \"no\", \"Description\": \"Ceph OSD\", \"DevicePolicy\": \"auto\", \"EnvironmentFile\": \"/etc/environment (ignore_errors=yes)\", \"ExecMainCode\": \"0\", \"ExecMainExitTimestampMonotonic\": \"0\", \"ExecMainPID\": \"0\", \"ExecMainStartTimestampMonotonic\": \"0\", 
\"ExecMainStatus\": \"0\", \"ExecStart\": \"{ path=/usr/share/ceph-osd-run.sh ; argv[]=/usr/share/ceph-osd-run.sh %i ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStartPre\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker rm -f ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStop\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker stop ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"FailureAction\": \"none\", \"FileDescriptorStoreMax\": \"0\", \"FragmentPath\": \"/etc/systemd/system/ceph-osd@.service\", \"GuessMainPID\": \"yes\", \"IOScheduling\": \"0\", \"Id\": \"ceph-osd@vde.service\", \"IgnoreOnIsolate\": \"no\", \"IgnoreOnSnapshot\": \"no\", \"IgnoreSIGPIPE\": \"yes\", \"InactiveEnterTimestampMonotonic\": \"0\", \"InactiveExitTimestampMonotonic\": \"0\", \"JobTimeoutAction\": \"none\", \"JobTimeoutUSec\": \"0\", \"KillMode\": \"control-group\", \"KillSignal\": \"15\", \"LimitAS\": \"18446744073709551615\", \"LimitCORE\": \"18446744073709551615\", \"LimitCPU\": \"18446744073709551615\", \"LimitDATA\": \"18446744073709551615\", \"LimitFSIZE\": \"18446744073709551615\", \"LimitLOCKS\": \"18446744073709551615\", \"LimitMEMLOCK\": \"65536\", \"LimitMSGQUEUE\": \"819200\", \"LimitNICE\": \"0\", \"LimitNOFILE\": \"4096\", \"LimitNPROC\": \"22973\", \"LimitRSS\": \"18446744073709551615\", \"LimitRTPRIO\": \"0\", \"LimitRTTIME\": \"18446744073709551615\", \"LimitSIGPENDING\": \"22973\", \"LimitSTACK\": \"18446744073709551615\", \"LoadState\": \"loaded\", \"MainPID\": \"0\", \"MemoryAccounting\": \"no\", \"MemoryCurrent\": \"18446744073709551615\", \"MemoryLimit\": \"18446744073709551615\", \"MountFlags\": \"0\", \"Names\": \"ceph-osd@vde.service\", \"NeedDaemonReload\": \"no\", \"Nice\": \"0\", \"NoNewPrivileges\": \"no\", \"NonBlocking\": \"no\", \"NotifyAccess\": \"none\", \"OOMScoreAdjust\": \"0\", \"OnFailureJobMode\": \"replace\", \"PermissionsStartOnly\": \"no\", \"PrivateDevices\": \"no\", \"PrivateNetwork\": \"no\", \"PrivateTmp\": \"no\", \"ProtectHome\": \"no\", \"ProtectSystem\": \"no\", \"RefuseManualStart\": \"no\", \"RefuseManualStop\": \"no\", \"RemainAfterExit\": \"no\", \"Requires\": \"basic.target\", \"Restart\": \"always\", \"RestartUSec\": \"10s\", \"Result\": \"success\", \"RootDirectoryStartOnly\": \"no\", \"RuntimeDirectoryMode\": \"0755\", \"SameProcessGroup\": \"no\", \"SecureBits\": \"0\", \"SendSIGHUP\": \"no\", \"SendSIGKILL\": \"yes\", \"Slice\": \"system-ceph\\\\x5cx2dosd.slice\", \"StandardError\": \"inherit\", \"StandardInput\": \"null\", \"StandardOutput\": \"journal\", \"StartLimitAction\": \"none\", \"StartLimitBurst\": \"5\", \"StartLimitInterval\": \"10000000\", \"StartupBlockIOWeight\": \"18446744073709551615\", \"StartupCPUShares\": \"18446744073709551615\", \"StatusErrno\": \"0\", \"StopWhenUnneeded\": \"no\", \"SubState\": \"dead\", \"SyslogLevelPrefix\": \"yes\", \"SyslogPriority\": \"30\", \"SystemCallErrorNumber\": \"0\", \"TTYReset\": \"no\", \"TTYVHangup\": \"no\", \"TTYVTDisallocate\": \"no\", \"TasksAccounting\": \"no\", \"TasksCurrent\": \"18446744073709551615\", \"TasksMax\": \"18446744073709551615\", \"TimeoutStartUSec\": \"2min\", \"TimeoutStopUSec\": \"15s\", \"TimerSlackNSec\": \"50000\", \"Transient\": \"no\", \"Type\": \"simple\", \"UMask\": \"0022\", \"UnitFilePreset\": \"disabled\", \"UnitFileState\": \"enabled\", \"Wants\": 
\"system-ceph\\\\x5cx2dosd.slice\", \"WatchdogTimestampMonotonic\": \"0\", \"WatchdogUSec\": \"0\"}}", "changed: [ceph-0] => (item=/dev/vdf) => {\"changed\": true, \"enabled\": true, \"item\": \"/dev/vdf\", \"name\": \"ceph-osd@vdf\", \"state\": \"started\", \"status\": {\"ActiveEnterTimestampMonotonic\": \"0\", \"ActiveExitTimestampMonotonic\": \"0\", \"ActiveState\": \"inactive\", \"After\": \"system-ceph\\\\x5cx2dosd.slice systemd-journald.socket docker.service basic.target\", \"AllowIsolate\": \"no\", \"AmbientCapabilities\": \"0\", \"AssertResult\": \"no\", \"AssertTimestampMonotonic\": \"0\", \"Before\": \"shutdown.target\", \"BlockIOAccounting\": \"no\", \"BlockIOWeight\": \"18446744073709551615\", \"CPUAccounting\": \"no\", \"CPUQuotaPerSecUSec\": \"infinity\", \"CPUSchedulingPolicy\": \"0\", \"CPUSchedulingPriority\": \"0\", \"CPUSchedulingResetOnFork\": \"no\", \"CPUShares\": \"18446744073709551615\", \"CanIsolate\": \"no\", \"CanReload\": \"no\", \"CanStart\": \"yes\", \"CanStop\": \"yes\", \"CapabilityBoundingSet\": \"18446744073709551615\", \"ConditionResult\": \"no\", \"ConditionTimestampMonotonic\": \"0\", \"Conflicts\": \"shutdown.target\", \"ControlPID\": \"0\", \"DefaultDependencies\": \"yes\", \"Delegate\": \"no\", \"Description\": \"Ceph OSD\", \"DevicePolicy\": \"auto\", \"EnvironmentFile\": \"/etc/environment (ignore_errors=yes)\", \"ExecMainCode\": \"0\", \"ExecMainExitTimestampMonotonic\": \"0\", \"ExecMainPID\": \"0\", \"ExecMainStartTimestampMonotonic\": \"0\", \"ExecMainStatus\": \"0\", \"ExecStart\": \"{ path=/usr/share/ceph-osd-run.sh ; argv[]=/usr/share/ceph-osd-run.sh %i ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStartPre\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker rm -f ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStop\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker stop ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"FailureAction\": \"none\", \"FileDescriptorStoreMax\": \"0\", \"FragmentPath\": \"/etc/systemd/system/ceph-osd@.service\", \"GuessMainPID\": \"yes\", \"IOScheduling\": \"0\", \"Id\": \"ceph-osd@vdf.service\", \"IgnoreOnIsolate\": \"no\", \"IgnoreOnSnapshot\": \"no\", \"IgnoreSIGPIPE\": \"yes\", \"InactiveEnterTimestampMonotonic\": \"0\", \"InactiveExitTimestampMonotonic\": \"0\", \"JobTimeoutAction\": \"none\", \"JobTimeoutUSec\": \"0\", \"KillMode\": \"control-group\", \"KillSignal\": \"15\", \"LimitAS\": \"18446744073709551615\", \"LimitCORE\": \"18446744073709551615\", \"LimitCPU\": \"18446744073709551615\", \"LimitDATA\": \"18446744073709551615\", \"LimitFSIZE\": \"18446744073709551615\", \"LimitLOCKS\": \"18446744073709551615\", \"LimitMEMLOCK\": \"65536\", \"LimitMSGQUEUE\": \"819200\", \"LimitNICE\": \"0\", \"LimitNOFILE\": \"4096\", \"LimitNPROC\": \"22973\", \"LimitRSS\": \"18446744073709551615\", \"LimitRTPRIO\": \"0\", \"LimitRTTIME\": \"18446744073709551615\", \"LimitSIGPENDING\": \"22973\", \"LimitSTACK\": \"18446744073709551615\", \"LoadState\": \"loaded\", \"MainPID\": \"0\", \"MemoryAccounting\": \"no\", \"MemoryCurrent\": \"18446744073709551615\", \"MemoryLimit\": \"18446744073709551615\", \"MountFlags\": \"0\", \"Names\": \"ceph-osd@vdf.service\", \"NeedDaemonReload\": \"no\", \"Nice\": \"0\", \"NoNewPrivileges\": \"no\", \"NonBlocking\": \"no\", \"NotifyAccess\": \"none\", \"OOMScoreAdjust\": \"0\", 
\"OnFailureJobMode\": \"replace\", \"PermissionsStartOnly\": \"no\", \"PrivateDevices\": \"no\", \"PrivateNetwork\": \"no\", \"PrivateTmp\": \"no\", \"ProtectHome\": \"no\", \"ProtectSystem\": \"no\", \"RefuseManualStart\": \"no\", \"RefuseManualStop\": \"no\", \"RemainAfterExit\": \"no\", \"Requires\": \"basic.target\", \"Restart\": \"always\", \"RestartUSec\": \"10s\", \"Result\": \"success\", \"RootDirectoryStartOnly\": \"no\", \"RuntimeDirectoryMode\": \"0755\", \"SameProcessGroup\": \"no\", \"SecureBits\": \"0\", \"SendSIGHUP\": \"no\", \"SendSIGKILL\": \"yes\", \"Slice\": \"system-ceph\\\\x5cx2dosd.slice\", \"StandardError\": \"inherit\", \"StandardInput\": \"null\", \"StandardOutput\": \"journal\", \"StartLimitAction\": \"none\", \"StartLimitBurst\": \"5\", \"StartLimitInterval\": \"10000000\", \"StartupBlockIOWeight\": \"18446744073709551615\", \"StartupCPUShares\": \"18446744073709551615\", \"StatusErrno\": \"0\", \"StopWhenUnneeded\": \"no\", \"SubState\": \"dead\", \"SyslogLevelPrefix\": \"yes\", \"SyslogPriority\": \"30\", \"SystemCallErrorNumber\": \"0\", \"TTYReset\": \"no\", \"TTYVHangup\": \"no\", \"TTYVTDisallocate\": \"no\", \"TasksAccounting\": \"no\", \"TasksCurrent\": \"18446744073709551615\", \"TasksMax\": \"18446744073709551615\", \"TimeoutStartUSec\": \"2min\", \"TimeoutStopUSec\": \"15s\", \"TimerSlackNSec\": \"50000\", \"Transient\": \"no\", \"Type\": \"simple\", \"UMask\": \"0022\", \"UnitFilePreset\": \"disabled\", \"UnitFileState\": \"enabled\", \"Wants\": \"system-ceph\\\\x5cx2dosd.slice\", \"WatchdogTimestampMonotonic\": \"0\", \"WatchdogUSec\": \"0\"}}", "", "TASK [ceph-osd : set_fact openstack_keys_tmp - preserve backward compatibility after the introduction of the ceph_keys module] ***", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:87", "Friday 21 September 2018 08:29:24 -0400 (0:00:03.074) 0:03:06.339 ****** ", "skipping: [ceph-0] => (item={u'caps': {u'mgr': u'allow *', u'mon': u'profile rbd', u'osd': u'profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics'}, u'mode': u'0600', u'key': u'AQC93KRbAAAAABAA70vXmXELJWdqPtg4IeQHzw==', u'name': u'client.openstack'}) => {\"changed\": false, \"item\": {\"caps\": {\"mgr\": \"allow *\", \"mon\": \"profile rbd\", \"osd\": \"profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics\"}, \"key\": \"AQC93KRbAAAAABAA70vXmXELJWdqPtg4IeQHzw==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item={u'caps': {u'mds': u'allow *', u'osd': u'allow rw', u'mgr': u'allow *', u'mon': u\"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'\"}, u'mode': u'0600', u'key': u'AQC93KRbAAAAABAAh3WQEraYvoGJBcWWeAglYg==', u'name': u'client.manila'}) => {\"changed\": false, \"item\": {\"caps\": {\"mds\": \"allow *\", \"mgr\": \"allow *\", \"mon\": \"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'\", \"osd\": \"allow rw\"}, \"key\": \"AQC93KRbAAAAABAAh3WQEraYvoGJBcWWeAglYg==\", \"mode\": \"0600\", \"name\": \"client.manila\"}, \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item={u'caps': {u'mgr': u'allow *', u'mon': u'allow rw', u'osd': u'allow rwx'}, u'mode': u'0600', u'key': u'AQC93KRbAAAAABAAJK/AdOCua9UOcCGeve+zYA==', u'name': 
u'client.radosgw'}) => {\"changed\": false, \"item\": {\"caps\": {\"mgr\": \"allow *\", \"mon\": \"allow rw\", \"osd\": \"allow rwx\"}, \"key\": \"AQC93KRbAAAAABAAJK/AdOCua9UOcCGeve+zYA==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : set_fact keys - override keys_tmp with keys] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:95", "Friday 21 September 2018 08:29:24 -0400 (0:00:00.071) 0:03:06.410 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : wait for all osd to be up] ************************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:2", "Friday 21 September 2018 08:29:24 -0400 (0:00:00.069) 0:03:06.479 ****** ", "changed: [ceph-0 -> 192.168.24.18] => {\"attempts\": 1, \"changed\": true, \"cmd\": \"test \\\"$(docker exec ceph-mon-controller-0 ceph --cluster ceph -s -f json | python -c 'import sys, json; print(json.load(sys.stdin)[\\\"osdmap\\\"][\\\"osdmap\\\"][\\\"num_osds\\\"])')\\\" = \\\"$(docker exec ceph-mon-controller-0 ceph --cluster ceph -s -f json | python -c 'import sys, json; print(json.load(sys.stdin)[\\\"osdmap\\\"][\\\"osdmap\\\"][\\\"num_up_osds\\\"])')\\\"\", \"delta\": \"0:00:00.807354\", \"end\": \"2018-09-21 12:29:25.469869\", \"rc\": 0, \"start\": \"2018-09-21 12:29:24.662515\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-osd : list existing pool(s)] ****************************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:12", "Friday 21 September 2018 08:29:25 -0400 (0:00:01.194) 0:03:07.674 ****** ", "changed: [ceph-0 -> 192.168.24.18] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'images', u'rule_name': u'replicated_rule'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"images\", \"size\"], \"delta\": \"0:00:00.373530\", \"end\": \"2018-09-21 12:29:26.209812\", \"failed_when_result\": false, \"item\": {\"application\": \"rbd\", \"name\": \"images\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-09-21 12:29:25.836282\", \"stderr\": \"Error ENOENT: unrecognized pool 'images'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'images'\"], \"stdout\": \"\", \"stdout_lines\": []}", "changed: [ceph-0 -> 192.168.24.18] => (item={u'application': u'openstack_gnocchi', u'pg_num': 32, u'name': u'metrics', u'rule_name': u'replicated_rule'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"metrics\", \"size\"], \"delta\": \"0:00:00.395786\", \"end\": \"2018-09-21 12:29:26.822972\", \"failed_when_result\": false, \"item\": {\"application\": \"openstack_gnocchi\", \"name\": \"metrics\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-09-21 12:29:26.427186\", \"stderr\": \"Error ENOENT: unrecognized pool 'metrics'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'metrics'\"], \"stdout\": \"\", \"stdout_lines\": []}", "changed: [ceph-0 -> 192.168.24.18] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'backups', u'rule_name': u'replicated_rule'}) => {\"changed\": 
true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"backups\", \"size\"], \"delta\": \"0:00:00.346406\", \"end\": \"2018-09-21 12:29:27.374763\", \"failed_when_result\": false, \"item\": {\"application\": \"rbd\", \"name\": \"backups\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-09-21 12:29:27.028357\", \"stderr\": \"Error ENOENT: unrecognized pool 'backups'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'backups'\"], \"stdout\": \"\", \"stdout_lines\": []}", "changed: [ceph-0 -> 192.168.24.18] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'vms', u'rule_name': u'replicated_rule'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"vms\", \"size\"], \"delta\": \"0:00:00.341163\", \"end\": \"2018-09-21 12:29:27.916597\", \"failed_when_result\": false, \"item\": {\"application\": \"rbd\", \"name\": \"vms\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-09-21 12:29:27.575434\", \"stderr\": \"Error ENOENT: unrecognized pool 'vms'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'vms'\"], \"stdout\": \"\", \"stdout_lines\": []}", "changed: [ceph-0 -> 192.168.24.18] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'volumes', u'rule_name': u'replicated_rule'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"volumes\", \"size\"], \"delta\": \"0:00:00.378386\", \"end\": \"2018-09-21 12:29:28.507274\", \"failed_when_result\": false, \"item\": {\"application\": \"rbd\", \"name\": \"volumes\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-09-21 12:29:28.128888\", \"stderr\": \"Error ENOENT: unrecognized pool 'volumes'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'volumes'\"], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-osd : set_fact rule_name before luminous] ***************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:21", "Friday 21 September 2018 08:29:28 -0400 (0:00:02.999) 0:03:10.674 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : set_fact rule_name from luminous] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:28", "Friday 21 September 2018 08:29:28 -0400 (0:00:00.057) 0:03:10.731 ****** ", "ok: [ceph-0] => {\"ansible_facts\": {\"rule_name\": \"replicated_rule\"}, \"changed\": false}", "", "TASK [ceph-osd : create openstack pool(s)] *************************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:35", "Friday 21 September 2018 08:29:28 -0400 (0:00:00.137) 0:03:10.868 ****** ", "ok: [ceph-0 -> 192.168.24.18] => (item=[{u'application': u'rbd', u'pg_num': 32, u'name': u'images', u'rule_name': u'replicated_rule'}, {'_ansible_parsed': True, 'stderr_lines': [u\"Error ENOENT: unrecognized pool 'images'\"], u'cmd': [u'docker', u'exec', u'ceph-mon-controller-0', u'ceph', u'--cluster', u'ceph', u'osd', u'pool', u'get', u'images', u'size'], u'end': u'2018-09-21 12:29:26.209812', '_ansible_no_log': False, 
'_ansible_delegated_vars': {'ansible_delegated_host': u'controller-0', 'ansible_host': u'192.168.24.18'}, '_ansible_item_result': True, u'changed': True, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get images size', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, u'stdout': u'', 'item': {u'application': u'rbd', u'pg_num': 32, u'name': u'images', u'rule_name': u'replicated_rule'}, u'delta': u'0:00:00.373530', '_ansible_item_label': {u'application': u'rbd', u'pg_num': 32, u'name': u'images', u'rule_name': u'replicated_rule'}, u'stderr': u\"Error ENOENT: unrecognized pool 'images'\", u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'start': u'2018-09-21 12:29:25.836282', '_ansible_ignore_errors': None, u'failed': False}]) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"create\", \"images\", \"32\", \"32\", \"replicated_rule\", \"1\"], \"delta\": \"0:00:01.252598\", \"end\": \"2018-09-21 12:29:30.267578\", \"item\": [{\"application\": \"rbd\", \"name\": \"images\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"controller-0\", \"ansible_host\": \"192.168.24.18\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": {\"application\": \"rbd\", \"name\": \"images\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"images\", \"size\"], \"delta\": \"0:00:00.373530\", \"end\": \"2018-09-21 12:29:26.209812\", \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get images size\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": {\"application\": \"rbd\", \"name\": \"images\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-09-21 12:29:25.836282\", \"stderr\": \"Error ENOENT: unrecognized pool 'images'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'images'\"], \"stdout\": \"\", \"stdout_lines\": []}], \"rc\": 0, \"start\": \"2018-09-21 12:29:29.014980\", \"stderr\": \"pool 'images' created\", \"stderr_lines\": [\"pool 'images' created\"], \"stdout\": \"\", \"stdout_lines\": []}", "ok: [ceph-0 -> 192.168.24.18] => (item=[{u'application': u'openstack_gnocchi', u'pg_num': 32, u'name': u'metrics', u'rule_name': u'replicated_rule'}, {'_ansible_parsed': True, 'stderr_lines': [u\"Error ENOENT: unrecognized pool 'metrics'\"], u'cmd': [u'docker', u'exec', u'ceph-mon-controller-0', u'ceph', u'--cluster', u'ceph', u'osd', u'pool', u'get', u'metrics', u'size'], u'end': u'2018-09-21 12:29:26.822972', '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'controller-0', 'ansible_host': u'192.168.24.18'}, '_ansible_item_result': True, u'changed': True, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'docker exec ceph-mon-controller-0 ceph --cluster ceph osd 
pool get metrics size', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, u'stdout': u'', 'item': {u'application': u'openstack_gnocchi', u'pg_num': 32, u'name': u'metrics', u'rule_name': u'replicated_rule'}, u'delta': u'0:00:00.395786', '_ansible_item_label': {u'application': u'openstack_gnocchi', u'pg_num': 32, u'name': u'metrics', u'rule_name': u'replicated_rule'}, u'stderr': u\"Error ENOENT: unrecognized pool 'metrics'\", u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'start': u'2018-09-21 12:29:26.427186', '_ansible_ignore_errors': None, u'failed': False}]) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"create\", \"metrics\", \"32\", \"32\", \"replicated_rule\", \"1\"], \"delta\": \"0:00:00.955150\", \"end\": \"2018-09-21 12:29:31.438680\", \"item\": [{\"application\": \"openstack_gnocchi\", \"name\": \"metrics\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"controller-0\", \"ansible_host\": \"192.168.24.18\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": {\"application\": \"openstack_gnocchi\", \"name\": \"metrics\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"metrics\", \"size\"], \"delta\": \"0:00:00.395786\", \"end\": \"2018-09-21 12:29:26.822972\", \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get metrics size\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": {\"application\": \"openstack_gnocchi\", \"name\": \"metrics\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-09-21 12:29:26.427186\", \"stderr\": \"Error ENOENT: unrecognized pool 'metrics'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'metrics'\"], \"stdout\": \"\", \"stdout_lines\": []}], \"rc\": 0, \"start\": \"2018-09-21 12:29:30.483530\", \"stderr\": \"pool 'metrics' created\", \"stderr_lines\": [\"pool 'metrics' created\"], \"stdout\": \"\", \"stdout_lines\": []}", "ok: [ceph-0 -> 192.168.24.18] => (item=[{u'application': u'rbd', u'pg_num': 32, u'name': u'backups', u'rule_name': u'replicated_rule'}, {'_ansible_parsed': True, 'stderr_lines': [u\"Error ENOENT: unrecognized pool 'backups'\"], u'cmd': [u'docker', u'exec', u'ceph-mon-controller-0', u'ceph', u'--cluster', u'ceph', u'osd', u'pool', u'get', u'backups', u'size'], u'end': u'2018-09-21 12:29:27.374763', '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'controller-0', 'ansible_host': u'192.168.24.18'}, '_ansible_item_result': True, u'changed': True, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get backups size', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, u'stdout': u'', 'item': {u'application': u'rbd', u'pg_num': 32, u'name': u'backups', u'rule_name': u'replicated_rule'}, u'delta': u'0:00:00.346406', 
'_ansible_item_label': {u'application': u'rbd', u'pg_num': 32, u'name': u'backups', u'rule_name': u'replicated_rule'}, u'stderr': u\"Error ENOENT: unrecognized pool 'backups'\", u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'start': u'2018-09-21 12:29:27.028357', '_ansible_ignore_errors': None, u'failed': False}]) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"create\", \"backups\", \"32\", \"32\", \"replicated_rule\", \"1\"], \"delta\": \"0:00:00.989689\", \"end\": \"2018-09-21 12:29:32.635602\", \"item\": [{\"application\": \"rbd\", \"name\": \"backups\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"controller-0\", \"ansible_host\": \"192.168.24.18\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": {\"application\": \"rbd\", \"name\": \"backups\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"backups\", \"size\"], \"delta\": \"0:00:00.346406\", \"end\": \"2018-09-21 12:29:27.374763\", \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get backups size\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": {\"application\": \"rbd\", \"name\": \"backups\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-09-21 12:29:27.028357\", \"stderr\": \"Error ENOENT: unrecognized pool 'backups'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'backups'\"], \"stdout\": \"\", \"stdout_lines\": []}], \"rc\": 0, \"start\": \"2018-09-21 12:29:31.645913\", \"stderr\": \"pool 'backups' created\", \"stderr_lines\": [\"pool 'backups' created\"], \"stdout\": \"\", \"stdout_lines\": []}", "ok: [ceph-0 -> 192.168.24.18] => (item=[{u'application': u'rbd', u'pg_num': 32, u'name': u'vms', u'rule_name': u'replicated_rule'}, {'_ansible_parsed': True, 'stderr_lines': [u\"Error ENOENT: unrecognized pool 'vms'\"], u'cmd': [u'docker', u'exec', u'ceph-mon-controller-0', u'ceph', u'--cluster', u'ceph', u'osd', u'pool', u'get', u'vms', u'size'], u'end': u'2018-09-21 12:29:27.916597', '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'controller-0', 'ansible_host': u'192.168.24.18'}, '_ansible_item_result': True, u'changed': True, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get vms size', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, u'stdout': u'', 'item': {u'application': u'rbd', u'pg_num': 32, u'name': u'vms', u'rule_name': u'replicated_rule'}, u'delta': u'0:00:00.341163', '_ansible_item_label': {u'application': u'rbd', u'pg_num': 32, u'name': u'vms', u'rule_name': u'replicated_rule'}, u'stderr': u\"Error ENOENT: unrecognized pool 'vms'\", u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'start': u'2018-09-21 12:29:27.575434', '_ansible_ignore_errors': None, u'failed': 
False}]) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"create\", \"vms\", \"32\", \"32\", \"replicated_rule\", \"1\"], \"delta\": \"0:00:01.094027\", \"end\": \"2018-09-21 12:29:33.973841\", \"item\": [{\"application\": \"rbd\", \"name\": \"vms\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"controller-0\", \"ansible_host\": \"192.168.24.18\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": {\"application\": \"rbd\", \"name\": \"vms\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"vms\", \"size\"], \"delta\": \"0:00:00.341163\", \"end\": \"2018-09-21 12:29:27.916597\", \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get vms size\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": {\"application\": \"rbd\", \"name\": \"vms\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-09-21 12:29:27.575434\", \"stderr\": \"Error ENOENT: unrecognized pool 'vms'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'vms'\"], \"stdout\": \"\", \"stdout_lines\": []}], \"rc\": 0, \"start\": \"2018-09-21 12:29:32.879814\", \"stderr\": \"pool 'vms' created\", \"stderr_lines\": [\"pool 'vms' created\"], \"stdout\": \"\", \"stdout_lines\": []}", "ok: [ceph-0 -> 192.168.24.18] => (item=[{u'application': u'rbd', u'pg_num': 32, u'name': u'volumes', u'rule_name': u'replicated_rule'}, {'_ansible_parsed': True, 'stderr_lines': [u\"Error ENOENT: unrecognized pool 'volumes'\"], u'cmd': [u'docker', u'exec', u'ceph-mon-controller-0', u'ceph', u'--cluster', u'ceph', u'osd', u'pool', u'get', u'volumes', u'size'], u'end': u'2018-09-21 12:29:28.507274', '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'controller-0', 'ansible_host': u'192.168.24.18'}, '_ansible_item_result': True, u'changed': True, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get volumes size', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, u'stdout': u'', 'item': {u'application': u'rbd', u'pg_num': 32, u'name': u'volumes', u'rule_name': u'replicated_rule'}, u'delta': u'0:00:00.378386', '_ansible_item_label': {u'application': u'rbd', u'pg_num': 32, u'name': u'volumes', u'rule_name': u'replicated_rule'}, u'stderr': u\"Error ENOENT: unrecognized pool 'volumes'\", u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'start': u'2018-09-21 12:29:28.128888', '_ansible_ignore_errors': None, u'failed': False}]) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"create\", \"volumes\", \"32\", \"32\", \"replicated_rule\", \"1\"], \"delta\": \"0:00:01.135967\", \"end\": \"2018-09-21 12:29:35.334959\", \"item\": [{\"application\": \"rbd\", \"name\": \"volumes\", \"pg_num\": 32, 
\"rule_name\": \"replicated_rule\"}, {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"controller-0\", \"ansible_host\": \"192.168.24.18\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": {\"application\": \"rbd\", \"name\": \"volumes\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"volumes\", \"size\"], \"delta\": \"0:00:00.378386\", \"end\": \"2018-09-21 12:29:28.507274\", \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get volumes size\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": {\"application\": \"rbd\", \"name\": \"volumes\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-09-21 12:29:28.128888\", \"stderr\": \"Error ENOENT: unrecognized pool 'volumes'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'volumes'\"], \"stdout\": \"\", \"stdout_lines\": []}], \"rc\": 0, \"start\": \"2018-09-21 12:29:34.198992\", \"stderr\": \"pool 'volumes' created\", \"stderr_lines\": [\"pool 'volumes' created\"], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-osd : assign application to pool(s)] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:55", "Friday 21 September 2018 08:29:35 -0400 (0:00:06.695) 0:03:17.564 ****** ", "ok: [ceph-0 -> 192.168.24.18] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'images', u'rule_name': u'replicated_rule'}) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"application\", \"enable\", \"images\", \"rbd\"], \"delta\": \"0:00:00.783800\", \"end\": \"2018-09-21 12:29:36.499330\", \"item\": {\"application\": \"rbd\", \"name\": \"images\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"rc\": 0, \"start\": \"2018-09-21 12:29:35.715530\", \"stderr\": \"enabled application 'rbd' on pool 'images'\", \"stderr_lines\": [\"enabled application 'rbd' on pool 'images'\"], \"stdout\": \"\", \"stdout_lines\": []}", "ok: [ceph-0 -> 192.168.24.18] => (item={u'application': u'openstack_gnocchi', u'pg_num': 32, u'name': u'metrics', u'rule_name': u'replicated_rule'}) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"application\", \"enable\", \"metrics\", \"openstack_gnocchi\"], \"delta\": \"0:00:00.787285\", \"end\": \"2018-09-21 12:29:37.507380\", \"item\": {\"application\": \"openstack_gnocchi\", \"name\": \"metrics\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"rc\": 0, \"start\": \"2018-09-21 12:29:36.720095\", \"stderr\": \"enabled application 'openstack_gnocchi' on pool 'metrics'\", \"stderr_lines\": [\"enabled application 'openstack_gnocchi' on pool 'metrics'\"], \"stdout\": \"\", \"stdout_lines\": []}", "ok: [ceph-0 -> 192.168.24.18] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'backups', u'rule_name': u'replicated_rule'}) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", 
\"osd\", \"pool\", \"application\", \"enable\", \"backups\", \"rbd\"], \"delta\": \"0:00:00.797293\", \"end\": \"2018-09-21 12:29:38.503833\", \"item\": {\"application\": \"rbd\", \"name\": \"backups\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"rc\": 0, \"start\": \"2018-09-21 12:29:37.706540\", \"stderr\": \"enabled application 'rbd' on pool 'backups'\", \"stderr_lines\": [\"enabled application 'rbd' on pool 'backups'\"], \"stdout\": \"\", \"stdout_lines\": []}", "ok: [ceph-0 -> 192.168.24.18] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'vms', u'rule_name': u'replicated_rule'}) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"application\", \"enable\", \"vms\", \"rbd\"], \"delta\": \"0:00:00.799475\", \"end\": \"2018-09-21 12:29:39.498919\", \"item\": {\"application\": \"rbd\", \"name\": \"vms\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"rc\": 0, \"start\": \"2018-09-21 12:29:38.699444\", \"stderr\": \"enabled application 'rbd' on pool 'vms'\", \"stderr_lines\": [\"enabled application 'rbd' on pool 'vms'\"], \"stdout\": \"\", \"stdout_lines\": []}", "ok: [ceph-0 -> 192.168.24.18] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'volumes', u'rule_name': u'replicated_rule'}) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"application\", \"enable\", \"volumes\", \"rbd\"], \"delta\": \"0:00:00.805109\", \"end\": \"2018-09-21 12:29:40.523665\", \"item\": {\"application\": \"rbd\", \"name\": \"volumes\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"rc\": 0, \"start\": \"2018-09-21 12:29:39.718556\", \"stderr\": \"enabled application 'rbd' on pool 'volumes'\", \"stderr_lines\": [\"enabled application 'rbd' on pool 'volumes'\"], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-osd : create openstack cephx key(s)] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:64", "Friday 21 September 2018 08:29:40 -0400 (0:00:05.152) 0:03:22.716 ****** ", "changed: [ceph-0 -> 192.168.24.18] => (item={u'caps': {u'mgr': u'allow *', u'mon': u'profile rbd', u'osd': u'profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics'}, u'mode': u'0600', u'key': u'AQC93KRbAAAAABAA70vXmXELJWdqPtg4IeQHzw==', u'name': u'client.openstack'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"auth\", \"import\", \"-i\", \"/etc/ceph//ceph.client.openstack.keyring\"], \"delta\": \"0:00:00.825822\", \"end\": \"2018-09-21 12:29:41.920206\", \"item\": {\"caps\": {\"mgr\": \"allow *\", \"mon\": \"profile rbd\", \"osd\": \"profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics\"}, \"key\": \"AQC93KRbAAAAABAA70vXmXELJWdqPtg4IeQHzw==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, \"rc\": 0, \"start\": \"2018-09-21 12:29:41.094384\", \"stderr\": \"imported keyring\", \"stderr_lines\": [\"imported keyring\"], \"stdout\": \"\", \"stdout_lines\": []}", "changed: [ceph-0 -> 192.168.24.18] => (item={u'caps': {u'mds': u'allow *', u'osd': u'allow rw', u'mgr': u'allow *', u'mon': u\"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'\"}, u'mode': u'0600', 
u'key': u'AQC93KRbAAAAABAAh3WQEraYvoGJBcWWeAglYg==', u'name': u'client.manila'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"auth\", \"import\", \"-i\", \"/etc/ceph//ceph.client.manila.keyring\"], \"delta\": \"0:00:00.890275\", \"end\": \"2018-09-21 12:29:43.020506\", \"item\": {\"caps\": {\"mds\": \"allow *\", \"mgr\": \"allow *\", \"mon\": \"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'\", \"osd\": \"allow rw\"}, \"key\": \"AQC93KRbAAAAABAAh3WQEraYvoGJBcWWeAglYg==\", \"mode\": \"0600\", \"name\": \"client.manila\"}, \"rc\": 0, \"start\": \"2018-09-21 12:29:42.130231\", \"stderr\": \"imported keyring\", \"stderr_lines\": [\"imported keyring\"], \"stdout\": \"\", \"stdout_lines\": []}", "changed: [ceph-0 -> 192.168.24.18] => (item={u'caps': {u'mgr': u'allow *', u'mon': u'allow rw', u'osd': u'allow rwx'}, u'mode': u'0600', u'key': u'AQC93KRbAAAAABAAJK/AdOCua9UOcCGeve+zYA==', u'name': u'client.radosgw'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"auth\", \"import\", \"-i\", \"/etc/ceph//ceph.client.radosgw.keyring\"], \"delta\": \"0:00:00.835276\", \"end\": \"2018-09-21 12:29:44.057574\", \"item\": {\"caps\": {\"mgr\": \"allow *\", \"mon\": \"allow rw\", \"osd\": \"allow rwx\"}, \"key\": \"AQC93KRbAAAAABAAJK/AdOCua9UOcCGeve+zYA==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}, \"rc\": 0, \"start\": \"2018-09-21 12:29:43.222298\", \"stderr\": \"imported keyring\", \"stderr_lines\": [\"imported keyring\"], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-osd : fetch openstack cephx key(s)] *********************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:77", "Friday 21 September 2018 08:29:44 -0400 (0:00:03.530) 0:03:26.247 ****** ", "changed: [ceph-0 -> 192.168.24.18] => (item={u'caps': {u'mgr': u'allow *', u'mon': u'profile rbd', u'osd': u'profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics'}, u'mode': u'0600', u'key': u'AQC93KRbAAAAABAA70vXmXELJWdqPtg4IeQHzw==', u'name': u'client.openstack'}) => {\"changed\": true, \"checksum\": \"40ed8b50cf9c2c93b1fd620a66672adaecbdd5ae\", \"dest\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59/etc/ceph/ceph.client.openstack.keyring\", \"item\": {\"caps\": {\"mgr\": \"allow *\", \"mon\": \"profile rbd\", \"osd\": \"profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics\"}, \"key\": \"AQC93KRbAAAAABAA70vXmXELJWdqPtg4IeQHzw==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, \"md5sum\": \"a6757c87664e50e0fa2a4a0c24ffa2db\", \"remote_checksum\": \"40ed8b50cf9c2c93b1fd620a66672adaecbdd5ae\", \"remote_md5sum\": null}", "changed: [ceph-0 -> 192.168.24.18] => (item={u'caps': {u'mds': u'allow *', u'osd': u'allow rw', u'mgr': u'allow *', u'mon': u\"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'\"}, u'mode': u'0600', u'key': u'AQC93KRbAAAAABAAh3WQEraYvoGJBcWWeAglYg==', u'name': u'client.manila'}) => {\"changed\": true, \"checksum\": \"e119bc7d0367829cffba7f254fed5c0f7663e7a7\", \"dest\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59/etc/ceph/ceph.client.manila.keyring\", 
\"item\": {\"caps\": {\"mds\": \"allow *\", \"mgr\": \"allow *\", \"mon\": \"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'\", \"osd\": \"allow rw\"}, \"key\": \"AQC93KRbAAAAABAAh3WQEraYvoGJBcWWeAglYg==\", \"mode\": \"0600\", \"name\": \"client.manila\"}, \"md5sum\": \"d42eb2e49e090ff13248fba0db5c0a6f\", \"remote_checksum\": \"e119bc7d0367829cffba7f254fed5c0f7663e7a7\", \"remote_md5sum\": null}", "changed: [ceph-0 -> 192.168.24.18] => (item={u'caps': {u'mgr': u'allow *', u'mon': u'allow rw', u'osd': u'allow rwx'}, u'mode': u'0600', u'key': u'AQC93KRbAAAAABAAJK/AdOCua9UOcCGeve+zYA==', u'name': u'client.radosgw'}) => {\"changed\": true, \"checksum\": \"32018e3d91a7d0c0ff43f9db5459f66424dd1f38\", \"dest\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/8fedf068-bd95-11e8-ba69-5254006eda59/etc/ceph/ceph.client.radosgw.keyring\", \"item\": {\"caps\": {\"mgr\": \"allow *\", \"mon\": \"allow rw\", \"osd\": \"allow rwx\"}, \"key\": \"AQC93KRbAAAAABAAJK/AdOCua9UOcCGeve+zYA==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}, \"md5sum\": \"18e4740c17d5c0f4ef7090358897cb02\", \"remote_checksum\": \"32018e3d91a7d0c0ff43f9db5459f66424dd1f38\", \"remote_md5sum\": null}", "", "TASK [ceph-osd : copy to other mons the openstack cephx key(s)] ****************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:85", "Friday 21 September 2018 08:29:44 -0400 (0:00:00.606) 0:03:26.853 ****** ", "changed: [ceph-0 -> 192.168.24.18] => (item=[u'controller-0', {u'name': u'client.openstack', u'mode': u'0600', u'key': u'AQC93KRbAAAAABAA70vXmXELJWdqPtg4IeQHzw==', u'caps': {u'mgr': u'allow *', u'mon': u'profile rbd', u'osd': u'profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics'}}]) => {\"changed\": true, \"checksum\": \"40ed8b50cf9c2c93b1fd620a66672adaecbdd5ae\", \"dest\": \"/etc/ceph/ceph.client.openstack.keyring\", \"gid\": 167, \"group\": \"167\", \"item\": [\"controller-0\", {\"caps\": {\"mgr\": \"allow *\", \"mon\": \"profile rbd\", \"osd\": \"profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics\"}, \"key\": \"AQC93KRbAAAAABAA70vXmXELJWdqPtg4IeQHzw==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}], \"mode\": \"0600\", \"owner\": \"167\", \"path\": \"/etc/ceph/ceph.client.openstack.keyring\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 253, \"state\": \"file\", \"uid\": 167}", "changed: [ceph-0 -> 192.168.24.18] => (item=[u'controller-0', {u'name': u'client.manila', u'mode': u'0600', u'key': u'AQC93KRbAAAAABAAh3WQEraYvoGJBcWWeAglYg==', u'caps': {u'mds': u'allow *', u'osd': u'allow rw', u'mon': u\"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'\", u'mgr': u'allow *'}}]) => {\"changed\": true, \"checksum\": \"e119bc7d0367829cffba7f254fed5c0f7663e7a7\", \"dest\": \"/etc/ceph/ceph.client.manila.keyring\", \"gid\": 167, \"group\": \"167\", \"item\": [\"controller-0\", {\"caps\": {\"mds\": \"allow *\", \"mgr\": \"allow *\", \"mon\": \"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'\", \"osd\": \"allow rw\"}, \"key\": \"AQC93KRbAAAAABAAh3WQEraYvoGJBcWWeAglYg==\", \"mode\": \"0600\", \"name\": \"client.manila\"}], \"mode\": \"0600\", \"owner\": \"167\", \"path\": \"/etc/ceph/ceph.client.manila.keyring\", 
\"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 268, \"state\": \"file\", \"uid\": 167}", "changed: [ceph-0 -> 192.168.24.18] => (item=[u'controller-0', {u'name': u'client.radosgw', u'mode': u'0600', u'key': u'AQC93KRbAAAAABAAJK/AdOCua9UOcCGeve+zYA==', u'caps': {u'mgr': u'allow *', u'mon': u'allow rw', u'osd': u'allow rwx'}}]) => {\"changed\": true, \"checksum\": \"32018e3d91a7d0c0ff43f9db5459f66424dd1f38\", \"dest\": \"/etc/ceph/ceph.client.radosgw.keyring\", \"gid\": 167, \"group\": \"167\", \"item\": [\"controller-0\", {\"caps\": {\"mgr\": \"allow *\", \"mon\": \"allow rw\", \"osd\": \"allow rwx\"}, \"key\": \"AQC93KRbAAAAABAAJK/AdOCua9UOcCGeve+zYA==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}], \"mode\": \"0600\", \"owner\": \"167\", \"path\": \"/etc/ceph/ceph.client.radosgw.keyring\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 134, \"state\": \"file\", \"uid\": 167}", "", "RUNNING HANDLER [ceph-defaults : set _mon_handler_called before restart] *******", "Friday 21 September 2018 08:29:46 -0400 (0:00:01.276) 0:03:28.130 ****** ", "ok: [ceph-0] => {\"ansible_facts\": {\"_mon_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy mon restart script] **********************", "Friday 21 September 2018 08:29:46 -0400 (0:00:00.195) 0:03:28.325 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mon daemon(s) - non container] ***", "Friday 21 September 2018 08:29:46 -0400 (0:00:00.045) 0:03:28.370 ****** ", "skipping: [ceph-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mon daemon(s) - container] *******", "Friday 21 September 2018 08:29:46 -0400 (0:00:00.083) 0:03:28.454 ****** ", "skipping: [ceph-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _mon_handler_called after restart] ********", "Friday 21 September 2018 08:29:46 -0400 (0:00:00.080) 0:03:28.534 ****** ", "ok: [ceph-0] => {\"ansible_facts\": {\"_mon_handler_called\": false}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : set _osd_handler_called before restart] *******", "Friday 21 September 2018 08:29:46 -0400 (0:00:00.202) 0:03:28.736 ****** ", "ok: [ceph-0] => {\"ansible_facts\": {\"_osd_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy osd restart script] **********************", "Friday 21 September 2018 08:29:46 -0400 (0:00:00.194) 0:03:28.931 ****** ", "changed: [ceph-0] => {\"changed\": true, \"checksum\": \"6631c34a339c45ab1081b01015293e952e36893e\", \"dest\": \"/tmp/restart_osd_daemon.sh\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"308c89936c25e77f74e78c1e4905ee1a\", \"mode\": \"0750\", \"owner\": \"root\", \"secontext\": \"unconfined_u:object_r:user_tmp_t:s0\", \"size\": 3081, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1537532987.03-152820220133009/source\", \"state\": \"file\", \"uid\": 0}", "", "RUNNING HANDLER [ceph-defaults : restart ceph osds daemon(s) - non container] ***", "Friday 21 September 2018 08:29:47 -0400 (0:00:00.652) 0:03:29.583 ****** ", "skipping: [ceph-0] => (item=ceph-0) => {\"changed\": false, \"item\": \"ceph-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart 
ceph osds daemon(s) - container] ******", "Friday 21 September 2018 08:29:47 -0400 (0:00:00.094) 0:03:29.678 ****** ", "skipping: [ceph-0] => (item=ceph-0) => {\"changed\": false, \"item\": \"ceph-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _osd_handler_called after restart] ********", "Friday 21 September 2018 08:29:47 -0400 (0:00:00.104) 0:03:29.783 ****** ", "ok: [ceph-0] => {\"ansible_facts\": {\"_osd_handler_called\": false}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : set _mds_handler_called before restart] *******", "Friday 21 September 2018 08:29:47 -0400 (0:00:00.203) 0:03:29.986 ****** ", "ok: [ceph-0] => {\"ansible_facts\": {\"_mds_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy mds restart script] **********************", "Friday 21 September 2018 08:29:48 -0400 (0:00:00.204) 0:03:30.191 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mds daemon(s) - non container] ***", "Friday 21 September 2018 08:29:48 -0400 (0:00:00.045) 0:03:30.236 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mds daemon(s) - container] *******", "Friday 21 September 2018 08:29:48 -0400 (0:00:00.049) 0:03:30.286 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _mds_handler_called after restart] ********", "Friday 21 September 2018 08:29:48 -0400 (0:00:00.048) 0:03:30.335 ****** ", "ok: [ceph-0] => {\"ansible_facts\": {\"_mds_handler_called\": false}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : set _rgw_handler_called before restart] *******", "Friday 21 September 2018 08:29:48 -0400 (0:00:00.209) 0:03:30.544 ****** ", "ok: [ceph-0] => {\"ansible_facts\": {\"_rgw_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy rgw restart script] **********************", "Friday 21 September 2018 08:29:48 -0400 (0:00:00.205) 0:03:30.749 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph rgw daemon(s) - non container] ***", "Friday 21 September 2018 08:29:48 -0400 (0:00:00.049) 0:03:30.799 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph rgw daemon(s) - container] *******", "Friday 21 September 2018 08:29:48 -0400 (0:00:00.057) 0:03:30.856 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _rgw_handler_called after restart] ********", "Friday 21 September 2018 08:29:48 -0400 (0:00:00.059) 0:03:30.916 ****** ", "ok: [ceph-0] => {\"ansible_facts\": {\"_rgw_handler_called\": false}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : set _rbdmirror_handler_called before restart] ***", "Friday 21 September 2018 08:29:48 -0400 (0:00:00.188) 0:03:31.104 ****** ", "ok: [ceph-0] => {\"ansible_facts\": {\"_rbdmirror_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy rbd mirror restart script] ***************", "Friday 21 September 2018 08:29:49 -0400 (0:00:00.196) 0:03:31.300 ****** ", 
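The "wait for all osd to be up" task earlier in this excerpt shells out to a Python one-liner inside the mon container, comparing the osdmap's num_osds to num_up_osds and retrying until they match (the log shows "attempts": 1, so it succeeded on the first try). A minimal standalone sketch of the same check, assuming the same docker exec access to ceph-mon-controller-0 and the Luminous `ceph -s -f json` layout visible near the end of this log, where the counters sit under osdmap.osdmap:

    import json
    import subprocess

    # Fetch cluster status as JSON from inside the mon container, exactly as
    # the playbook's shell task does.
    raw = subprocess.check_output(
        ["docker", "exec", "ceph-mon-controller-0",
         "ceph", "--cluster", "ceph", "-s", "-f", "json"])
    osdmap = json.loads(raw)["osdmap"]["osdmap"]

    # The task only succeeds once every registered OSD reports up
    # (in this run: num_osds == num_up_osds == 5).
    if osdmap["num_osds"] != osdmap["num_up_osds"]:
        raise SystemExit(
            "still waiting: %(num_up_osds)d/%(num_osds)d OSDs up" % osdmap)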
"skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph rbd mirror daemon(s) - non container] ***", "Friday 21 September 2018 08:29:49 -0400 (0:00:00.134) 0:03:31.435 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph rbd mirror daemon(s) - container] ***", "Friday 21 September 2018 08:29:49 -0400 (0:00:00.055) 0:03:31.491 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _rbdmirror_handler_called after restart] ***", "Friday 21 September 2018 08:29:49 -0400 (0:00:00.052) 0:03:31.543 ****** ", "ok: [ceph-0] => {\"ansible_facts\": {\"_rbdmirror_handler_called\": false}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : set _mgr_handler_called before restart] *******", "Friday 21 September 2018 08:29:49 -0400 (0:00:00.073) 0:03:31.616 ****** ", "ok: [ceph-0] => {\"ansible_facts\": {\"_mgr_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy mgr restart script] **********************", "Friday 21 September 2018 08:29:49 -0400 (0:00:00.076) 0:03:31.693 ****** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - non container] ***", "Friday 21 September 2018 08:29:49 -0400 (0:00:00.048) 0:03:31.742 ****** ", "skipping: [ceph-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - container] *******", "Friday 21 September 2018 08:29:49 -0400 (0:00:00.102) 0:03:31.845 ****** ", "skipping: [ceph-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _mgr_handler_called after restart] ********", "Friday 21 September 2018 08:29:49 -0400 (0:00:00.091) 0:03:31.936 ****** ", "ok: [ceph-0] => {\"ansible_facts\": {\"_mgr_handler_called\": false}, \"changed\": false}", "META: ran handlers", "", "TASK [set ceph osd install 'Complete'] *****************************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:157", "Friday 21 September 2018 08:29:49 -0400 (0:00:00.101) 0:03:32.038 ****** ", "ok: [ceph-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_osd\": {\"end\": \"20180921082949Z\", \"status\": \"Complete\"}}, \"per_host\": false}, \"changed\": false}", "META: ran handlers", "", "PLAY [mdss] ********************************************************************", "skipping: no hosts matched", "", "PLAY [rgws] ********************************************************************", "skipping: no hosts matched", "", "PLAY [nfss] ********************************************************************", "skipping: no hosts matched", "", "PLAY [rbdmirrors] **************************************************************", "skipping: no hosts matched", "", "PLAY [restapis] ****************************************************************", "skipping: no hosts matched", "", "PLAY [clients] *****************************************************************", "", "TASK [set ceph client install 'In Progress'] ***********************************", "task path: 
/usr/share/ceph-ansible/site-docker.yml.sample:308", "Friday 21 September 2018 08:29:50 -0400 (0:00:00.174) 0:03:32.212 ****** ", "ok: [compute-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_client\": {\"start\": \"20180921082950Z\", \"status\": \"In Progress\"}}, \"per_host\": false}, \"changed\": false}", "META: ran handlers", "", "TASK [ceph-defaults : check for a mon container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:2", "Friday 21 September 2018 08:29:50 -0400 (0:00:00.087) 0:03:32.299 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for an osd container] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:11", "Friday 21 September 2018 08:29:50 -0400 (0:00:00.047) 0:03:32.347 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a mds container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:20", "Friday 21 September 2018 08:29:50 -0400 (0:00:00.049) 0:03:32.396 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a rgw container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:29", "Friday 21 September 2018 08:29:50 -0400 (0:00:00.047) 0:03:32.444 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a mgr container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:38", "Friday 21 September 2018 08:29:50 -0400 (0:00:00.048) 0:03:32.492 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a rbd mirror container] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:47", "Friday 21 September 2018 08:29:50 -0400 (0:00:00.048) 0:03:32.540 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a nfs container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:56", "Friday 21 September 2018 08:29:50 -0400 (0:00:00.049) 0:03:32.589 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph mon socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:2", "Friday 21 September 2018 08:29:50 -0400 (0:00:00.051) 0:03:32.641 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph mon socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:11", "Friday 21 September 2018 08:29:50 -0400 (0:00:00.055) 0:03:32.697 ****** ", "skipping: [compute-0] => 
{\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph mon socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:21", "Friday 21 September 2018 08:29:50 -0400 (0:00:00.048) 0:03:32.745 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph osd socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:30", "Friday 21 September 2018 08:29:50 -0400 (0:00:00.049) 0:03:32.795 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph osd socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:40", "Friday 21 September 2018 08:29:50 -0400 (0:00:00.049) 0:03:32.844 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph osd socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:50", "Friday 21 September 2018 08:29:50 -0400 (0:00:00.046) 0:03:32.890 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph mds socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:59", "Friday 21 September 2018 08:29:50 -0400 (0:00:00.045) 0:03:32.936 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph mds socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:69", "Friday 21 September 2018 08:29:50 -0400 (0:00:00.047) 0:03:32.983 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph mds socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:79", "Friday 21 September 2018 08:29:50 -0400 (0:00:00.057) 0:03:33.040 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph rgw socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:88", "Friday 21 September 2018 08:29:50 -0400 (0:00:00.050) 0:03:33.091 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph rgw socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:98", "Friday 21 September 2018 08:29:51 -0400 (0:00:00.053) 0:03:33.145 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph rgw socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:108", 
"Friday 21 September 2018 08:29:51 -0400 (0:00:00.048) 0:03:33.193 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph mgr socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:117", "Friday 21 September 2018 08:29:51 -0400 (0:00:00.044) 0:03:33.237 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph mgr socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:127", "Friday 21 September 2018 08:29:51 -0400 (0:00:00.043) 0:03:33.281 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph mgr socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:137", "Friday 21 September 2018 08:29:51 -0400 (0:00:00.048) 0:03:33.330 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph rbd mirror socket] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:146", "Friday 21 September 2018 08:29:51 -0400 (0:00:00.043) 0:03:33.373 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph rbd mirror socket is in-use] ***********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:156", "Friday 21 September 2018 08:29:51 -0400 (0:00:00.040) 0:03:33.414 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph rbd mirror socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:166", "Friday 21 September 2018 08:29:51 -0400 (0:00:00.045) 0:03:33.459 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph nfs ganesha socket] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:175", "Friday 21 September 2018 08:29:51 -0400 (0:00:00.048) 0:03:33.507 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph nfs ganesha socket is in-use] **********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:184", "Friday 21 September 2018 08:29:51 -0400 (0:00:00.046) 0:03:33.554 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph nfs ganesha socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:194", "Friday 21 September 2018 08:29:51 -0400 (0:00:00.048) 0:03:33.602 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if it is atomic host] 
******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:2", "Friday 21 September 2018 08:29:51 -0400 (0:00:00.047) 0:03:33.650 ****** ", "ok: [compute-0] => {\"changed\": false, \"stat\": {\"exists\": false}}", "", "TASK [ceph-defaults : set_fact is_atomic] **************************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:7", "Friday 21 September 2018 08:29:51 -0400 (0:00:00.233) 0:03:33.884 ****** ", "ok: [compute-0] => {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact monitor_name ansible_hostname] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:11", "Friday 21 September 2018 08:29:51 -0400 (0:00:00.069) 0:03:33.953 ****** ", "ok: [compute-0] => {\"ansible_facts\": {\"monitor_name\": \"compute-0\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact monitor_name ansible_fqdn] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:17", "Friday 21 September 2018 08:29:51 -0400 (0:00:00.077) 0:03:34.031 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact docker_exec_cmd] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:23", "Friday 21 September 2018 08:29:51 -0400 (0:00:00.068) 0:03:34.099 ****** ", "ok: [compute-0 -> 192.168.24.18] => {\"ansible_facts\": {\"docker_exec_cmd\": \"docker exec ceph-mon-controller-0\"}, \"changed\": false}", "", "TASK [ceph-defaults : is ceph running already?] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:34", "Friday 21 September 2018 08:29:52 -0400 (0:00:00.151) 0:03:34.251 ****** ", "ok: [compute-0 -> 192.168.24.18] => {\"changed\": false, \"cmd\": [\"timeout\", \"5\", \"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"-s\", \"-f\", \"json\"], \"delta\": \"0:00:00.331806\", \"end\": \"2018-09-21 12:29:52.679307\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-09-21 12:29:52.347501\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\\n{\\\"fsid\\\":\\\"8fedf068-bd95-11e8-ba69-5254006eda59\\\",\\\"health\\\":{\\\"checks\\\":{},\\\"status\\\":\\\"HEALTH_OK\\\",\\\"summary\\\":[{\\\"severity\\\":\\\"HEALTH_WARN\\\",\\\"summary\\\":\\\"'ceph health' JSON format has changed in luminous. If you see this your monitoring system is scraping the wrong fields. 
Disable this with 'mon health preluminous compat warning = false'\\\"}],\\\"overall_status\\\":\\\"HEALTH_WARN\\\"},\\\"election_epoch\\\":3,\\\"quorum\\\":[0],\\\"quorum_names\\\":[\\\"controller-0\\\"],\\\"monmap\\\":{\\\"epoch\\\":1,\\\"fsid\\\":\\\"8fedf068-bd95-11e8-ba69-5254006eda59\\\",\\\"modified\\\":\\\"2018-09-21 12:27:11.445099\\\",\\\"created\\\":\\\"2018-09-21 12:27:11.445099\\\",\\\"features\\\":{\\\"persistent\\\":[\\\"kraken\\\",\\\"luminous\\\"],\\\"optional\\\":[]},\\\"mons\\\":[{\\\"rank\\\":0,\\\"name\\\":\\\"controller-0\\\",\\\"addr\\\":\\\"172.17.3.16:6789/0\\\",\\\"public_addr\\\":\\\"172.17.3.16:6789/0\\\"}]},\\\"osdmap\\\":{\\\"osdmap\\\":{\\\"epoch\\\":17,\\\"num_osds\\\":5,\\\"num_up_osds\\\":5,\\\"num_in_osds\\\":5,\\\"full\\\":false,\\\"nearfull\\\":false,\\\"num_remapped_pgs\\\":0}},\\\"pgmap\\\":{\\\"pgs_by_state\\\":[{\\\"state_name\\\":\\\"active+clean\\\",\\\"count\\\":160}],\\\"num_pgs\\\":160,\\\"num_pools\\\":5,\\\"num_objects\\\":0,\\\"data_bytes\\\":0,\\\"bytes_used\\\":564191232,\\\"bytes_avail\\\":55749480448,\\\"bytes_total\\\":56313671680},\\\"fsmap\\\":{\\\"epoch\\\":1,\\\"by_rank\\\":[]},\\\"mgrmap\\\":{\\\"epoch\\\":7,\\\"active_gid\\\":4104,\\\"active_name\\\":\\\"controller-0\\\",\\\"active_addr\\\":\\\"172.17.3.16:6800/79\\\",\\\"available\\\":true,\\\"standbys\\\":[],\\\"modules\\\":[\\\"status\\\"],\\\"available_modules\\\":[\\\"balancer\\\",\\\"dashboard\\\",\\\"influx\\\",\\\"localpool\\\",\\\"prometheus\\\",\\\"restful\\\",\\\"selftest\\\",\\\"status\\\",\\\"zabbix\\\"],\\\"services\\\":{}},\\\"servicemap\\\":{\\\"epoch\\\":1,\\\"modified\\\":\\\"0.000000\\\",\\\"services\\\":{}}}\", \"stdout_lines\": [\"\", \"{\\\"fsid\\\":\\\"8fedf068-bd95-11e8-ba69-5254006eda59\\\",\\\"health\\\":{\\\"checks\\\":{},\\\"status\\\":\\\"HEALTH_OK\\\",\\\"summary\\\":[{\\\"severity\\\":\\\"HEALTH_WARN\\\",\\\"summary\\\":\\\"'ceph health' JSON format has changed in luminous. If you see this your monitoring system is scraping the wrong fields. 
Disable this with 'mon health preluminous compat warning = false'\\\"}],\\\"overall_status\\\":\\\"HEALTH_WARN\\\"},\\\"election_epoch\\\":3,\\\"quorum\\\":[0],\\\"quorum_names\\\":[\\\"controller-0\\\"],\\\"monmap\\\":{\\\"epoch\\\":1,\\\"fsid\\\":\\\"8fedf068-bd95-11e8-ba69-5254006eda59\\\",\\\"modified\\\":\\\"2018-09-21 12:27:11.445099\\\",\\\"created\\\":\\\"2018-09-21 12:27:11.445099\\\",\\\"features\\\":{\\\"persistent\\\":[\\\"kraken\\\",\\\"luminous\\\"],\\\"optional\\\":[]},\\\"mons\\\":[{\\\"rank\\\":0,\\\"name\\\":\\\"controller-0\\\",\\\"addr\\\":\\\"172.17.3.16:6789/0\\\",\\\"public_addr\\\":\\\"172.17.3.16:6789/0\\\"}]},\\\"osdmap\\\":{\\\"osdmap\\\":{\\\"epoch\\\":17,\\\"num_osds\\\":5,\\\"num_up_osds\\\":5,\\\"num_in_osds\\\":5,\\\"full\\\":false,\\\"nearfull\\\":false,\\\"num_remapped_pgs\\\":0}},\\\"pgmap\\\":{\\\"pgs_by_state\\\":[{\\\"state_name\\\":\\\"active+clean\\\",\\\"count\\\":160}],\\\"num_pgs\\\":160,\\\"num_pools\\\":5,\\\"num_objects\\\":0,\\\"data_bytes\\\":0,\\\"bytes_used\\\":564191232,\\\"bytes_avail\\\":55749480448,\\\"bytes_total\\\":56313671680},\\\"fsmap\\\":{\\\"epoch\\\":1,\\\"by_rank\\\":[]},\\\"mgrmap\\\":{\\\"epoch\\\":7,\\\"active_gid\\\":4104,\\\"active_name\\\":\\\"controller-0\\\",\\\"active_addr\\\":\\\"172.17.3.16:6800/79\\\",\\\"available\\\":true,\\\"standbys\\\":[],\\\"modules\\\":[\\\"status\\\"],\\\"available_modules\\\":[\\\"balancer\\\",\\\"dashboard\\\",\\\"influx\\\",\\\"localpool\\\",\\\"prometheus\\\",\\\"restful\\\",\\\"selftest\\\",\\\"status\\\",\\\"zabbix\\\"],\\\"services\\\":{}},\\\"servicemap\\\":{\\\"epoch\\\":1,\\\"modified\\\":\\\"0.000000\\\",\\\"services\\\":{}}}\"]}", "", "TASK [ceph-defaults : check if /var/lib/mistral/overcloud/ceph-ansible/fetch_dir directory exists] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:47", "Friday 21 September 2018 08:29:52 -0400 (0:00:00.600) 0:03:34.851 ****** ", "ok: [compute-0 -> localhost] => {\"changed\": false, \"stat\": {\"exists\": false}}", "", "TASK [ceph-defaults : set_fact ceph_current_status rc 1] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:57", "Friday 21 September 2018 08:29:52 -0400 (0:00:00.214) 0:03:35.066 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : create a local fetch directory if it does not exist] *****", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:64", "Friday 21 September 2018 08:29:53 -0400 (0:00:00.055) 0:03:35.121 ****** ", "ok: [compute-0 -> localhost] => {\"changed\": false, \"gid\": 42430, \"group\": \"mistral\", \"mode\": \"0755\", \"owner\": \"mistral\", \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir\", \"size\": 80, \"state\": \"directory\", \"uid\": 42430}", "", "TASK [ceph-defaults : set_fact ceph_current_status (convert to json)] **********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:74", "Friday 21 September 2018 08:29:53 -0400 (0:00:00.186) 0:03:35.308 ****** ", "ok: [compute-0] => {\"ansible_facts\": {\"ceph_current_status\": {\"election_epoch\": 3, \"fsid\": \"8fedf068-bd95-11e8-ba69-5254006eda59\", \"fsmap\": {\"by_rank\": [], \"epoch\": 1}, \"health\": {\"checks\": {}, \"overall_status\": \"HEALTH_WARN\", \"status\": \"HEALTH_OK\", \"summary\": [{\"severity\": \"HEALTH_WARN\", \"summary\": \"'ceph health' JSON format has changed in luminous. 
If you see this your monitoring system is scraping the wrong fields. Disable this with 'mon health preluminous compat warning = false'\"}]}, \"mgrmap\": {\"active_addr\": \"172.17.3.16:6800/79\", \"active_gid\": 4104, \"active_name\": \"controller-0\", \"available\": true, \"available_modules\": [\"balancer\", \"dashboard\", \"influx\", \"localpool\", \"prometheus\", \"restful\", \"selftest\", \"status\", \"zabbix\"], \"epoch\": 7, \"modules\": [\"status\"], \"services\": {}, \"standbys\": []}, \"monmap\": {\"created\": \"2018-09-21 12:27:11.445099\", \"epoch\": 1, \"features\": {\"optional\": [], \"persistent\": [\"kraken\", \"luminous\"]}, \"fsid\": \"8fedf068-bd95-11e8-ba69-5254006eda59\", \"modified\": \"2018-09-21 12:27:11.445099\", \"mons\": [{\"addr\": \"172.17.3.16:6789/0\", \"name\": \"controller-0\", \"public_addr\": \"172.17.3.16:6789/0\", \"rank\": 0}]}, \"osdmap\": {\"osdmap\": {\"epoch\": 17, \"full\": false, \"nearfull\": false, \"num_in_osds\": 5, \"num_osds\": 5, \"num_remapped_pgs\": 0, \"num_up_osds\": 5}}, \"pgmap\": {\"bytes_avail\": 55749480448, \"bytes_total\": 56313671680, \"bytes_used\": 564191232, \"data_bytes\": 0, \"num_objects\": 0, \"num_pgs\": 160, \"num_pools\": 5, \"pgs_by_state\": [{\"count\": 160, \"state_name\": \"active+clean\"}]}, \"quorum\": [0], \"quorum_names\": [\"controller-0\"], \"servicemap\": {\"epoch\": 1, \"modified\": \"0.000000\", \"services\": {}}}}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact fsid from ceph_current_status] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:81", "Friday 21 September 2018 08:29:53 -0400 (0:00:00.083) 0:03:35.392 ****** ", "ok: [compute-0] => {\"ansible_facts\": {\"fsid\": \"8fedf068-bd95-11e8-ba69-5254006eda59\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact ceph_release ceph_stable_release] ***************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:88", "Friday 21 September 2018 08:29:53 -0400 (0:00:00.078) 0:03:35.470 ****** ", "ok: [compute-0] => {\"ansible_facts\": {\"ceph_release\": \"dummy\"}, \"changed\": false}", "", "TASK [ceph-defaults : generate cluster fsid] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:92", "Friday 21 September 2018 08:29:53 -0400 (0:00:00.210) 0:03:35.680 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : reuse cluster fsid when cluster is already running] ******", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:103", "Friday 21 September 2018 08:29:53 -0400 (0:00:00.049) 0:03:35.730 ****** ", "ok: [compute-0 -> localhost] => {\"changed\": false, \"cmd\": \"echo 8fedf068-bd95-11e8-ba69-5254006eda59 | tee /var/lib/mistral/overcloud/ceph-ansible/fetch_dir/ceph_cluster_uuid.conf\", \"rc\": 0, \"stdout\": \"skipped, since /var/lib/mistral/overcloud/ceph-ansible/fetch_dir/ceph_cluster_uuid.conf exists\", \"stdout_lines\": [\"skipped, since /var/lib/mistral/overcloud/ceph-ansible/fetch_dir/ceph_cluster_uuid.conf exists\"]}", "", "TASK [ceph-defaults : read cluster fsid if it already exists] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:112", "Friday 21 September 2018 08:29:53 -0400 (0:00:00.220) 0:03:35.951 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact fsid] 
*******************************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:124", "Friday 21 September 2018 08:29:53 -0400 (0:00:00.043) 0:03:35.995 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact mds_name ansible_hostname] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:130", "Friday 21 September 2018 08:29:53 -0400 (0:00:00.044) 0:03:36.039 ****** ", "ok: [compute-0] => {\"ansible_facts\": {\"mds_name\": \"compute-0\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact mds_name ansible_fqdn] **************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:136", "Friday 21 September 2018 08:29:54 -0400 (0:00:00.217) 0:03:36.256 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact rbd_client_directory_owner ceph] ****************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:142", "Friday 21 September 2018 08:29:54 -0400 (0:00:00.047) 0:03:36.304 ****** ", "ok: [compute-0] => {\"ansible_facts\": {\"rbd_client_directory_owner\": \"ceph\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact rbd_client_directory_group rbd_client_directory_group] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:149", "Friday 21 September 2018 08:29:54 -0400 (0:00:00.222) 0:03:36.527 ****** ", "ok: [compute-0] => {\"ansible_facts\": {\"rbd_client_directory_group\": \"ceph\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact rbd_client_directory_mode 0770] *****************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:156", "Friday 21 September 2018 08:29:54 -0400 (0:00:00.215) 0:03:36.742 ****** ", "ok: [compute-0] => {\"ansible_facts\": {\"rbd_client_directory_mode\": \"0770\"}, \"changed\": false}", "", "TASK [ceph-defaults : resolve device link(s)] **********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:163", "Friday 21 September 2018 08:29:54 -0400 (0:00:00.202) 0:03:36.945 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact build devices from resolved symlinks] ***********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:173", "Friday 21 September 2018 08:29:54 -0400 (0:00:00.054) 0:03:36.999 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact build final devices list] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:182", "Friday 21 September 2018 08:29:54 -0400 (0:00:00.057) 0:03:37.057 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for debian based system - non container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:190", "Friday 21 September 2018 08:29:54 -0400 (0:00:00.048) 0:03:37.106 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for red hat based system - non container] ***", "task path: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:197", "Friday 21 September 2018 08:29:55 -0400 (0:00:00.047) 0:03:37.153 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for debian based system - container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:204", "Friday 21 September 2018 08:29:55 -0400 (0:00:00.047) 0:03:37.201 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for red hat based system - container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:211", "Friday 21 September 2018 08:29:55 -0400 (0:00:00.047) 0:03:37.248 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for red hat] ***************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:218", "Friday 21 September 2018 08:29:55 -0400 (0:00:00.048) 0:03:37.296 ****** ", "ok: [compute-0] => {\"ansible_facts\": {\"ceph_uid\": 167}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact rgw_hostname - fqdn] ****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:225", "Friday 21 September 2018 08:29:55 -0400 (0:00:00.200) 0:03:37.497 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact rgw_hostname - no fqdn] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:235", "Friday 21 September 2018 08:29:55 -0400 (0:00:00.046) 0:03:37.543 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_directories] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:2", "Friday 21 September 2018 08:29:55 -0400 (0:00:00.048) 0:03:37.591 ****** ", "ok: [compute-0] => {\"ansible_facts\": {\"ceph_directories\": [\"/etc/ceph\", \"/var/lib/ceph/\", \"/var/lib/ceph/mon\", \"/var/lib/ceph/osd\", \"/var/lib/ceph/mds\", \"/var/lib/ceph/tmp\", \"/var/lib/ceph/radosgw\", \"/var/lib/ceph/bootstrap-rgw\", \"/var/lib/ceph/bootstrap-mds\", \"/var/lib/ceph/bootstrap-osd\", \"/var/lib/ceph/bootstrap-rbd\", \"/var/run/ceph\"]}, \"changed\": false}", "", "TASK [ceph-defaults : create ceph initial directories] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:18", "Friday 21 September 2018 08:29:55 -0400 (0:00:00.188) 0:03:37.780 ****** ", "changed: [compute-0] => (item=/etc/ceph) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [compute-0] => (item=/var/lib/ceph/) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [compute-0] => (item=/var/lib/ceph/mon) => {\"changed\": true, \"gid\": 167, 
\"group\": \"167\", \"item\": \"/var/lib/ceph/mon\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mon\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [compute-0] => (item=/var/lib/ceph/osd) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/osd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [compute-0] => (item=/var/lib/ceph/mds) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/mds\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [compute-0] => (item=/var/lib/ceph/tmp) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/tmp\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/tmp\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [compute-0] => (item=/var/lib/ceph/radosgw) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/radosgw\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/radosgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [compute-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [compute-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [compute-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [compute-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [compute-0] => (item=/var/run/ceph) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/run/ceph\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/run/ceph\", \"secontext\": \"unconfined_u:object_r:var_run_t:s0\", \"size\": 40, \"state\": \"directory\", \"uid\": 167}", "", "TASK [ceph-docker-common : fail if systemd is not present] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml:2", "Friday 21 September 2018 08:29:57 -0400 (0:00:02.232) 0:03:40.013 ****** ", "skipping: [compute-0] => 
{\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : make sure monitor_interface, monitor_address or monitor_address_block is defined] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:2", "Friday 21 September 2018 08:29:57 -0400 (0:00:00.052) 0:03:40.065 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : make sure radosgw_interface, radosgw_address or radosgw_address_block is defined] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:11", "Friday 21 September 2018 08:29:58 -0400 (0:00:00.052) 0:03:40.118 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : warning deprecation for fqdn configuration] *********", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:20", "Friday 21 September 2018 08:29:58 -0400 (0:00:00.056) 0:03:40.174 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : remove ceph udev rules] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml:2", "Friday 21 September 2018 08:29:58 -0400 (0:00:00.052) 0:03:40.226 ****** ", "ok: [compute-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) => {\"changed\": false, \"item\": \"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"path\": \"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"state\": \"absent\"}", "ok: [compute-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) => {\"changed\": false, \"item\": \"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"path\": \"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"state\": \"absent\"}", "", "TASK [ceph-docker-common : set_fact monitor_name ansible_hostname] *************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:14", "Friday 21 September 2018 08:29:58 -0400 (0:00:00.439) 0:03:40.666 ****** ", "ok: [compute-0] => {\"ansible_facts\": {\"monitor_name\": \"compute-0\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact monitor_name ansible_fqdn] *****************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:20", "Friday 21 September 2018 08:29:58 -0400 (0:00:00.095) 0:03:40.761 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : get docker version] *********************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:26", "Friday 21 September 2018 08:29:58 -0400 (0:00:00.055) 0:03:40.817 ****** ", "ok: [compute-0] => {\"changed\": false, \"cmd\": [\"docker\", \"--version\"], \"delta\": \"0:00:00.029330\", \"end\": \"2018-09-21 12:29:58.943968\", \"rc\": 0, \"start\": \"2018-09-21 12:29:58.914638\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Docker version 1.13.1, build 6e3bb8e/1.13.1\", \"stdout_lines\": [\"Docker version 1.13.1, build 6e3bb8e/1.13.1\"]}", "", "TASK [ceph-docker-common : set_fact ceph_docker_version ceph_docker_version.stdout.split] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:32", "Friday 21 September 2018 08:29:58 -0400 (0:00:00.279) 0:03:41.097 
****** ", "ok: [compute-0] => {\"ansible_facts\": {\"ceph_docker_version\": \"1.13.1,\"}, \"changed\": false}", "", "TASK [ceph-docker-common : check if a cluster is already running] **************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:42", "Friday 21 September 2018 08:29:59 -0400 (0:00:00.085) 0:03:41.183 ****** ", "ok: [compute-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mon-compute-0\"], \"delta\": \"0:00:00.025713\", \"end\": \"2018-09-21 12:29:59.296573\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-09-21 12:29:59.270860\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-docker-common : set_fact ceph_config_keys] **************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:2", "Friday 21 September 2018 08:29:59 -0400 (0:00:00.265) 0:03:41.448 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact tmp_ceph_mgr_keys add mgr keys to config and keys paths] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:13", "Friday 21 September 2018 08:29:59 -0400 (0:00:00.054) 0:03:41.503 ****** ", "skipping: [compute-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mgr_keys convert mgr keys to an array] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:20", "Friday 21 September 2018 08:29:59 -0400 (0:00:00.061) 0:03:41.565 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_config_keys merge mgr keys to config and keys paths] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:25", "Friday 21 September 2018 08:29:59 -0400 (0:00:00.065) 0:03:41.630 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : stat for ceph config and keys] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:30", "Friday 21 September 2018 08:29:59 -0400 (0:00:00.059) 0:03:41.690 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : fail if we find existing cluster files] *************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml:5", "Friday 21 September 2018 08:29:59 -0400 (0:00:00.053) 0:03:41.743 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : check ntp installation on atomic] *******************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml:2", "Friday 21 September 2018 08:29:59 -0400 (0:00:00.058) 0:03:41.802 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : start the ntp service] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml:6", "Friday 21 September 2018 08:29:59 -0400 (0:00:00.051) 0:03:41.854 ****** ", 
"skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : check ntp installation on redhat or suse] ***********", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:2", "Friday 21 September 2018 08:29:59 -0400 (0:00:00.055) 0:03:41.909 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : install ntp on redhat or suse] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:13", "Friday 21 September 2018 08:29:59 -0400 (0:00:00.067) 0:03:41.976 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : start the ntp service] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml:7", "Friday 21 September 2018 08:29:59 -0400 (0:00:00.062) 0:03:42.038 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : check ntp installation on debian] *******************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:2", "Friday 21 September 2018 08:29:59 -0400 (0:00:00.058) 0:03:42.097 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : install ntp on debian] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:11", "Friday 21 September 2018 08:30:00 -0400 (0:00:00.057) 0:03:42.155 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : start the ntp service] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml:7", "Friday 21 September 2018 08:30:00 -0400 (0:00:00.054) 0:03:42.210 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph mon container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:3", "Friday 21 September 2018 08:30:00 -0400 (0:00:00.050) 0:03:42.260 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph osd container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:12", "Friday 21 September 2018 08:30:00 -0400 (0:00:00.054) 0:03:42.314 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph mds container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:21", "Friday 21 September 2018 08:30:00 -0400 (0:00:00.053) 0:03:42.368 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph rgw container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:30", "Friday 21 September 2018 08:30:00 -0400 (0:00:00.051) 
0:03:42.420 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph mgr container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:39", "Friday 21 September 2018 08:30:00 -0400 (0:00:00.051) 0:03:42.472 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph rbd mirror container] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:48", "Friday 21 September 2018 08:30:00 -0400 (0:00:00.054) 0:03:42.526 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph nfs container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:57", "Friday 21 September 2018 08:30:00 -0400 (0:00:00.051) 0:03:42.577 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph mon container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:67", "Friday 21 September 2018 08:30:00 -0400 (0:00:00.052) 0:03:42.630 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph osd container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:76", "Friday 21 September 2018 08:30:00 -0400 (0:00:00.056) 0:03:42.686 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph rgw container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:85", "Friday 21 September 2018 08:30:00 -0400 (0:00:00.050) 0:03:42.737 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph mds container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:94", "Friday 21 September 2018 08:30:00 -0400 (0:00:00.048) 0:03:42.785 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph mgr container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:103", "Friday 21 September 2018 08:30:00 -0400 (0:00:00.049) 0:03:42.835 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph rbd mirror container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:112", "Friday 21 September 2018 08:30:00 -0400 (0:00:00.050) 0:03:42.885 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph nfs container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:121", "Friday 21 September 2018 08:30:00 -0400 (0:00:00.049) 
0:03:42.935 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mon_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:130", "Friday 21 September 2018 08:30:00 -0400 (0:00:00.059) 0:03:42.994 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_osd_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:137", "Friday 21 September 2018 08:30:00 -0400 (0:00:00.050) 0:03:43.045 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mds_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:144", "Friday 21 September 2018 08:30:00 -0400 (0:00:00.051) 0:03:43.096 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rgw_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:151", "Friday 21 September 2018 08:30:01 -0400 (0:00:00.052) 0:03:43.149 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mgr_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:158", "Friday 21 September 2018 08:30:01 -0400 (0:00:00.050) 0:03:43.199 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:165", "Friday 21 September 2018 08:30:01 -0400 (0:00:00.050) 0:03:43.250 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_nfs_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:172", "Friday 21 September 2018 08:30:01 -0400 (0:00:00.057) 0:03:43.307 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : pulling 192.168.24.1:8787/rhceph:3-12 image] ********", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:179", "Friday 21 September 2018 08:30:01 -0400 (0:00:00.052) 0:03:43.360 ****** ", "ok: [compute-0] => {\"attempts\": 1, \"changed\": false, \"cmd\": [\"timeout\", \"300s\", \"docker\", \"pull\", \"192.168.24.1:8787/rhceph:3-12\"], \"delta\": \"0:00:13.506012\", \"end\": \"2018-09-21 12:30:14.957705\", \"rc\": 0, \"start\": \"2018-09-21 12:30:01.451693\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Trying to pull repository 192.168.24.1:8787/rhceph ... 
\\n3-12: Pulling from 192.168.24.1:8787/rhceph\\n428a9ca37f0e: Pulling fs layer\\n8115a58d83bd: Pulling fs layer\\n5e409f26eefe: Pulling fs layer\\n8115a58d83bd: Download complete\\n428a9ca37f0e: Verifying Checksum\\n428a9ca37f0e: Download complete\\n5e409f26eefe: Verifying Checksum\\n5e409f26eefe: Download complete\\n428a9ca37f0e: Pull complete\\n8115a58d83bd: Pull complete\\n5e409f26eefe: Pull complete\\nDigest: sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\\nStatus: Downloaded newer image for 192.168.24.1:8787/rhceph:3-12\", \"stdout_lines\": [\"Trying to pull repository 192.168.24.1:8787/rhceph ... \", \"3-12: Pulling from 192.168.24.1:8787/rhceph\", \"428a9ca37f0e: Pulling fs layer\", \"8115a58d83bd: Pulling fs layer\", \"5e409f26eefe: Pulling fs layer\", \"8115a58d83bd: Download complete\", \"428a9ca37f0e: Verifying Checksum\", \"428a9ca37f0e: Download complete\", \"5e409f26eefe: Verifying Checksum\", \"5e409f26eefe: Download complete\", \"428a9ca37f0e: Pull complete\", \"8115a58d83bd: Pull complete\", \"5e409f26eefe: Pull complete\", \"Digest: sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\", \"Status: Downloaded newer image for 192.168.24.1:8787/rhceph:3-12\"]}", "", "TASK [ceph-docker-common : inspecting 192.168.24.1:8787/rhceph:3-12 image after pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:189", "Friday 21 September 2018 08:30:15 -0400 (0:00:13.756) 0:03:57.116 ****** ", "changed: [compute-0] => {\"changed\": true, \"cmd\": [\"docker\", \"inspect\", \"192.168.24.1:8787/rhceph:3-12\"], \"delta\": \"0:00:00.024215\", \"end\": \"2018-09-21 12:30:15.230189\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-09-21 12:30:15.205974\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"[\\n {\\n \\\"Id\\\": \\\"sha256:fa3b551f095247f90f7ace123cc14519b26abd4c042eb0270ab2452d9636a41f\\\",\\n \\\"RepoTags\\\": [\\n \\\"192.168.24.1:8787/rhceph:3-12\\\"\\n ],\\n \\\"RepoDigests\\\": [\\n \\\"192.168.24.1:8787/rhceph@sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\\\"\\n ],\\n \\\"Parent\\\": \\\"\\\",\\n \\\"Comment\\\": \\\"\\\",\\n \\\"Created\\\": \\\"2018-08-06T22:30:33.81313Z\\\",\\n \\\"Container\\\": \\\"\\\",\\n \\\"ContainerConfig\\\": {\\n \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": [\\n \\\"/bin/sh\\\",\\n \\\"-c\\\",\\n \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z5-b1e8f.repo'\\\"\\n ],\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"sha256:379a20daa6033d04119c4ca45fffe3e50f0cfd517d8712a222b53bea11ee4493\\\",\\n \\\"Volumes\\\": null,\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\\n 
\\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"stable-3.0\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\\n \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.expose-services\\\": \\\"\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"12\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\\n \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\\n \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"DockerVersion\\\": \\\"1.12.6\\\",\\n \\\"Author\\\": \\\"\\\",\\n \\\"Config\\\": {\\n \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": null,\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"9e41cab2948f6e02bef2d4df2d2d21f082f2e9f5b5aadcb04d70201596959834\\\",\\n \\\"Volumes\\\": null,\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"stable-3.0\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\\n \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.expose-services\\\": \\\"\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"12\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\\n \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\\n \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"Architecture\\\": \\\"amd64\\\",\\n \\\"Os\\\": \\\"linux\\\",\\n \\\"Size\\\": 592066185,\\n \\\"VirtualSize\\\": 592066185,\\n \\\"GraphDriver\\\": {\\n \\\"Name\\\": \\\"overlay2\\\",\\n \\\"Data\\\": {\\n \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/d3ab66fd3e2597dbe6fc7283ddb6892de69d901df9f733b3e6c08b44844d82eb/diff:/var/lib/docker/overlay2/fb0ca68008a4f6a2f8fe648a8a5da392d76df2d766b5494fff02e603d0bbd0a8/diff\\\",\\n \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/b7a38e55a0284b5bd8ffe62ac3c56122035f224aa471de3dfe30baf1dd98a92d/merged\\\",\\n \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/b7a38e55a0284b5bd8ffe62ac3c56122035f224aa471de3dfe30baf1dd98a92d/diff\\\",\\n \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/b7a38e55a0284b5bd8ffe62ac3c56122035f224aa471de3dfe30baf1dd98a92d/work\\\"\\n }\\n },\\n \\\"RootFS\\\": {\\n \\\"Type\\\": \\\"layers\\\",\\n \\\"Layers\\\": [\\n \\\"sha256:db195156f4cd9e83cf2a76f1319d5f839cf2552ea1d23c0317931786b1f594cf\\\",\\n \\\"sha256:6e8ca199394f13d2b08b652f8281d3c2f8ad22333737e8ad2ff554f881bcd8a1\\\",\\n \\\"sha256:984d7131485eaebe7b45bb3052fae34a956316f94faf95681b8480a904179cfa\\\"\\n ]\\n }\\n }\\n]\", \"stdout_lines\": [\"[\", \" {\", \" \\\"Id\\\": \\\"sha256:fa3b551f095247f90f7ace123cc14519b26abd4c042eb0270ab2452d9636a41f\\\",\", \" \\\"RepoTags\\\": [\", \" \\\"192.168.24.1:8787/rhceph:3-12\\\"\", \" ],\", \" \\\"RepoDigests\\\": [\", \" \\\"192.168.24.1:8787/rhceph@sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\\\"\", \" ],\", \" \\\"Parent\\\": \\\"\\\",\", \" \\\"Comment\\\": \\\"\\\",\", \" \\\"Created\\\": \\\"2018-08-06T22:30:33.81313Z\\\",\", \" \\\"Container\\\": \\\"\\\",\", \" \\\"ContainerConfig\\\": {\", \" \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": [\", \" \\\"/bin/sh\\\",\", \" \\\"-c\\\",\", \" \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z5-b1e8f.repo'\\\"\", \" ],\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"sha256:379a20daa6033d04119c4ca45fffe3e50f0cfd517d8712a222b53bea11ee4493\\\",\", \" \\\"Volumes\\\": null,\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"stable-3.0\\\",\", \" 
\\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\", \" \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.expose-services\\\": \\\"\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"12\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\", \" \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. Use it as a base to build your own images.\\\",\", \" \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"DockerVersion\\\": \\\"1.12.6\\\",\", \" \\\"Author\\\": \\\"\\\",\", \" \\\"Config\\\": {\", \" \\\"Hostname\\\": \\\"2aee9f5752ab\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": null,\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"9e41cab2948f6e02bef2d4df2d2d21f082f2e9f5b5aadcb04d70201596959834\\\",\", \" \\\"Volumes\\\": null,\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"0f7f19e59769ff8086fdc8b92bbfe34a4738ee56\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"stable-3.0\\\",\", \" 
\\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-08-06T22:27:39.213799\\\",\", \" \\\"com.redhat.build-host\\\": \\\"osbs-cpt-012.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.expose-services\\\": \\\"\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"maintainer\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"12\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-12\\\",\", \" \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. Use it as a base to build your own images.\\\",\", \" \\\"vcs-ref\\\": \\\"ef3644dca4abfb12c35763dd708194bad06c2dc3\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"Architecture\\\": \\\"amd64\\\",\", \" \\\"Os\\\": \\\"linux\\\",\", \" \\\"Size\\\": 592066185,\", \" \\\"VirtualSize\\\": 592066185,\", \" \\\"GraphDriver\\\": {\", \" \\\"Name\\\": \\\"overlay2\\\",\", \" \\\"Data\\\": {\", \" \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/d3ab66fd3e2597dbe6fc7283ddb6892de69d901df9f733b3e6c08b44844d82eb/diff:/var/lib/docker/overlay2/fb0ca68008a4f6a2f8fe648a8a5da392d76df2d766b5494fff02e603d0bbd0a8/diff\\\",\", \" \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/b7a38e55a0284b5bd8ffe62ac3c56122035f224aa471de3dfe30baf1dd98a92d/merged\\\",\", \" \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/b7a38e55a0284b5bd8ffe62ac3c56122035f224aa471de3dfe30baf1dd98a92d/diff\\\",\", \" \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/b7a38e55a0284b5bd8ffe62ac3c56122035f224aa471de3dfe30baf1dd98a92d/work\\\"\", \" }\", \" },\", \" \\\"RootFS\\\": {\", \" \\\"Type\\\": \\\"layers\\\",\", \" \\\"Layers\\\": [\", \" \\\"sha256:db195156f4cd9e83cf2a76f1319d5f839cf2552ea1d23c0317931786b1f594cf\\\",\", \" \\\"sha256:6e8ca199394f13d2b08b652f8281d3c2f8ad22333737e8ad2ff554f881bcd8a1\\\",\", \" \\\"sha256:984d7131485eaebe7b45bb3052fae34a956316f94faf95681b8480a904179cfa\\\"\", \" ]\", \" }\", \" }\", \"]\"]}", "", "TASK [ceph-docker-common : set_fact image_repodigest_after_pulling] ************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:194", "Friday 21 September 2018 08:30:15 -0400 (0:00:00.280) 0:03:57.397 ****** ", "ok: [compute-0] => {\"ansible_facts\": {\"image_repodigest_after_pulling\": 
\"sha256:a26f4c12ef6c33b5d46b23badbf34ff85506bcce06570d6504194bb7453ed44c\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_mon_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:200", "Friday 21 September 2018 08:30:15 -0400 (0:00:00.088) 0:03:57.486 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_osd_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:211", "Friday 21 September 2018 08:30:15 -0400 (0:00:00.052) 0:03:57.539 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mds_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:222", "Friday 21 September 2018 08:30:15 -0400 (0:00:00.049) 0:03:57.589 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rgw_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:233", "Friday 21 September 2018 08:30:15 -0400 (0:00:00.055) 0:03:57.644 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mgr_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:244", "Friday 21 September 2018 08:30:15 -0400 (0:00:00.055) 0:03:57.700 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_updated] *************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:255", "Friday 21 September 2018 08:30:15 -0400 (0:00:00.051) 0:03:57.751 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_nfs_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:266", "Friday 21 September 2018 08:30:15 -0400 (0:00:00.050) 0:03:57.801 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : export local ceph dev image] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:277", "Friday 21 September 2018 08:30:15 -0400 (0:00:00.056) 0:03:57.858 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : copy ceph dev image file] ***************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:285", "Friday 21 September 2018 08:30:15 -0400 (0:00:00.049) 0:03:57.908 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : load ceph dev image] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:292", "Friday 21 September 2018 08:30:15 -0400 (0:00:00.046) 0:03:57.955 ****** ", "skipping: 
[compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : remove tmp ceph dev image file] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:297", "Friday 21 September 2018 08:30:15 -0400 (0:00:00.052) 0:03:58.008 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : get ceph version] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:84", "Friday 21 September 2018 08:30:15 -0400 (0:00:00.048) 0:03:58.056 ****** ", "ok: [compute-0] => {\"changed\": false, \"cmd\": [\"docker\", \"run\", \"--rm\", \"--entrypoint\", \"/usr/bin/ceph\", \"192.168.24.1:8787/rhceph:3-12\", \"--version\"], \"delta\": \"0:00:00.441057\", \"end\": \"2018-09-21 12:30:16.689421\", \"rc\": 0, \"start\": \"2018-09-21 12:30:16.248364\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"ceph version 12.2.4-42.el7cp (f73642baacccbf2a3c254d1fb5f0317b933b28cf) luminous (stable)\", \"stdout_lines\": [\"ceph version 12.2.4-42.el7cp (f73642baacccbf2a3c254d1fb5f0317b933b28cf) luminous (stable)\"]}", "", "TASK [ceph-docker-common : set_fact ceph_version ceph_version.stdout.split] ****", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:90", "Friday 21 September 2018 08:30:16 -0400 (0:00:00.786) 0:03:58.843 ****** ", "ok: [compute-0] => {\"ansible_facts\": {\"ceph_version\": \"12.2.4-42.el7cp\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_release jewel] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:2", "Friday 21 September 2018 08:30:16 -0400 (0:00:00.083) 0:03:58.926 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_release kraken] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:8", "Friday 21 September 2018 08:30:16 -0400 (0:00:00.050) 0:03:58.977 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_release luminous] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:14", "Friday 21 September 2018 08:30:16 -0400 (0:00:00.061) 0:03:59.038 ****** ", "ok: [compute-0] => {\"ansible_facts\": {\"ceph_release\": \"luminous\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_release mimic] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:20", "Friday 21 September 2018 08:30:17 -0400 (0:00:00.207) 0:03:59.246 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_release nautilus] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:26", "Friday 21 September 2018 08:30:17 -0400 (0:00:00.050) 0:03:59.296 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : create bootstrap directories] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml:2", "Friday 21 September 2018 08:30:17 
-0400 (0:00:00.054) 0:03:59.351 ****** ", "changed: [compute-0] => (item=/etc/ceph) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "changed: [compute-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "changed: [compute-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "changed: [compute-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "changed: [compute-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "", "TASK [ceph-config : create ceph conf directory] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:4", "Friday 21 September 2018 08:30:18 -0400 (0:00:00.915) 0:04:00.266 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : generate ceph configuration file: ceph.conf] ***************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:12", "Friday 21 September 2018 08:30:18 -0400 (0:00:00.048) 0:04:00.314 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : create a local fetch directory if it does not exist] *******", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:38", "Friday 21 September 2018 08:30:18 -0400 (0:00:00.052) 0:04:00.367 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : generate cluster uuid] *************************************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:54", "Friday 21 September 2018 08:30:18 -0400 (0:00:00.062) 0:04:00.429 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : read cluster uuid if it already exists] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:64", "Friday 21 September 2018 08:30:18 -0400 (0:00:00.052) 0:04:00.482 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : ensure /etc/ceph exists] 
***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:76", "Friday 21 September 2018 08:30:18 -0400 (0:00:00.049) 0:04:00.532 ****** ", "changed: [compute-0] => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "", "TASK [ceph-config : generate ceph.conf configuration file] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:84", "Friday 21 September 2018 08:30:18 -0400 (0:00:00.362) 0:04:00.895 ****** ", "NOTIFIED HANDLER ceph-defaults : set _mon_handler_called before restart for compute-0", "NOTIFIED HANDLER ceph-defaults : copy mon restart script for compute-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mon daemon(s) - non container for compute-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mon daemon(s) - container for compute-0", "NOTIFIED HANDLER ceph-defaults : set _mon_handler_called after restart for compute-0", "NOTIFIED HANDLER ceph-defaults : set _osd_handler_called before restart for compute-0", "NOTIFIED HANDLER ceph-defaults : copy osd restart script for compute-0", "NOTIFIED HANDLER ceph-defaults : restart ceph osds daemon(s) - non container for compute-0", "NOTIFIED HANDLER ceph-defaults : restart ceph osds daemon(s) - container for compute-0", "NOTIFIED HANDLER ceph-defaults : set _osd_handler_called after restart for compute-0", "NOTIFIED HANDLER ceph-defaults : set _mds_handler_called before restart for compute-0", "NOTIFIED HANDLER ceph-defaults : copy mds restart script for compute-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mds daemon(s) - non container for compute-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mds daemon(s) - container for compute-0", "NOTIFIED HANDLER ceph-defaults : set _mds_handler_called after restart for compute-0", "NOTIFIED HANDLER ceph-defaults : set _rgw_handler_called before restart for compute-0", "NOTIFIED HANDLER ceph-defaults : copy rgw restart script for compute-0", "NOTIFIED HANDLER ceph-defaults : restart ceph rgw daemon(s) - non container for compute-0", "NOTIFIED HANDLER ceph-defaults : restart ceph rgw daemon(s) - container for compute-0", "NOTIFIED HANDLER ceph-defaults : set _rgw_handler_called after restart for compute-0", "NOTIFIED HANDLER ceph-defaults : set _mgr_handler_called before restart for compute-0", "NOTIFIED HANDLER ceph-defaults : copy mgr restart script for compute-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - non container for compute-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - container for compute-0", "NOTIFIED HANDLER ceph-defaults : set _mgr_handler_called after restart for compute-0", "NOTIFIED HANDLER ceph-defaults : set _rbdmirror_handler_called before restart for compute-0", "NOTIFIED HANDLER ceph-defaults : copy rbd mirror restart script for compute-0", "NOTIFIED HANDLER ceph-defaults : restart ceph rbd mirror daemon(s) - non container for compute-0", "NOTIFIED HANDLER ceph-defaults : restart ceph rbd mirror daemon(s) - container for compute-0", "NOTIFIED HANDLER ceph-defaults : set _rbdmirror_handler_called after restart for compute-0", "changed: [compute-0] => {\"changed\": true, \"checksum\": \"47fa113e6b0aba60bb5249f924dcb7ca6e8dca0c\", \"dest\": \"/etc/ceph/ceph.conf\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"c3aeecdba6e11cab925f4842591b2d45\", \"mode\": \"0644\", 
\"owner\": \"root\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 1320, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1537533018.95-127994901059470/source\", \"state\": \"file\", \"uid\": 0}", "", "TASK [ceph-config : set fsid fact when generate_fsid = true] *******************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:102", "Friday 21 September 2018 08:30:21 -0400 (0:00:02.288) 0:04:03.183 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-client : copy ceph admin keyring when non containerized deployment] ***", "task path: /usr/share/ceph-ansible/roles/ceph-client/tasks/pre_requisite.yml:2", "Friday 21 September 2018 08:30:21 -0400 (0:00:00.050) 0:04:03.234 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-client : set_fact keys_tmp - preserve backward compatibility after the introduction of the ceph_keys module] ***", "task path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:2", "Friday 21 September 2018 08:30:21 -0400 (0:00:00.055) 0:04:03.289 ****** ", "skipping: [compute-0] => (item={u'caps': {u'mgr': u'allow *', u'mon': u'profile rbd', u'osd': u'profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics'}, u'mode': u'0600', u'key': u'AQC93KRbAAAAABAA70vXmXELJWdqPtg4IeQHzw==', u'name': u'client.openstack'}) => {\"changed\": false, \"item\": {\"caps\": {\"mgr\": \"allow *\", \"mon\": \"profile rbd\", \"osd\": \"profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics\"}, \"key\": \"AQC93KRbAAAAABAA70vXmXELJWdqPtg4IeQHzw==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, \"skip_reason\": \"Conditional result was False\"}", "skipping: [compute-0] => (item={u'caps': {u'mds': u'allow *', u'osd': u'allow rw', u'mgr': u'allow *', u'mon': u\"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'\"}, u'mode': u'0600', u'key': u'AQC93KRbAAAAABAAh3WQEraYvoGJBcWWeAglYg==', u'name': u'client.manila'}) => {\"changed\": false, \"item\": {\"caps\": {\"mds\": \"allow *\", \"mgr\": \"allow *\", \"mon\": \"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'\", \"osd\": \"allow rw\"}, \"key\": \"AQC93KRbAAAAABAAh3WQEraYvoGJBcWWeAglYg==\", \"mode\": \"0600\", \"name\": \"client.manila\"}, \"skip_reason\": \"Conditional result was False\"}", "skipping: [compute-0] => (item={u'caps': {u'mgr': u'allow *', u'mon': u'allow rw', u'osd': u'allow rwx'}, u'mode': u'0600', u'key': u'AQC93KRbAAAAABAAJK/AdOCua9UOcCGeve+zYA==', u'name': u'client.radosgw'}) => {\"changed\": false, \"item\": {\"caps\": {\"mgr\": \"allow *\", \"mon\": \"allow rw\", \"osd\": \"allow rwx\"}, \"key\": \"AQC93KRbAAAAABAAJK/AdOCua9UOcCGeve+zYA==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-client : set_fact keys - override keys_tmp with keys] ***************", "task path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:9", "Friday 21 September 2018 08:30:21 -0400 (0:00:00.092) 0:04:03.382 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-client : run a dummy container (sleep 300) from 
where we can create pool(s)/key(s)] ***", "task path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:15", "Friday 21 September 2018 08:30:21 -0400 (0:00:00.065) 0:04:03.447 ****** ", "ok: [compute-0] => {\"changed\": false, \"cmd\": [\"docker\", \"run\", \"--rm\", \"-d\", \"-v\", \"/etc/ceph:/etc/ceph:z\", \"--name\", \"ceph-create-keys\", \"--entrypoint=sleep\", \"192.168.24.1:8787/rhceph:3-12\", \"300\"], \"delta\": \"0:00:00.210257\", \"end\": \"2018-09-21 12:30:21.744937\", \"rc\": 0, \"start\": \"2018-09-21 12:30:21.534680\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"94e76e8372b8e6524bd2fa9c447c557a45a925ed060eab206355cac799db4024\", \"stdout_lines\": [\"94e76e8372b8e6524bd2fa9c447c557a45a925ed060eab206355cac799db4024\"]}", "", "TASK [ceph-client : set_fact delegated_node] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:30", "Friday 21 September 2018 08:30:21 -0400 (0:00:00.450) 0:04:03.897 ****** ", "ok: [compute-0] => {\"ansible_facts\": {\"delegated_node\": \"controller-0\"}, \"changed\": false}", "", "TASK [ceph-client : set_fact condition_copy_admin_key] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:34", "Friday 21 September 2018 08:30:21 -0400 (0:00:00.076) 0:04:03.973 ****** ", "ok: [compute-0] => {\"ansible_facts\": {\"condition_copy_admin_key\": true}, \"changed\": false}", "", "TASK [ceph-client : set_fact docker_exec_cmd] **********************************", "task path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:38", "Friday 21 September 2018 08:30:21 -0400 (0:00:00.077) 0:04:04.051 ****** ", "ok: [compute-0] => {\"ansible_facts\": {\"docker_exec_cmd\": \"docker exec ceph-mon-controller-0 \"}, \"changed\": false}", "", "TASK [ceph-client : create cephx key(s)] ***************************************", "task path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:44", "Friday 21 September 2018 08:30:22 -0400 (0:00:00.145) 0:04:04.196 ****** ", "changed: [compute-0 -> 192.168.24.18] => (item={u'caps': {u'mgr': u'allow *', u'mon': u'profile rbd', u'osd': u'profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics'}, u'mode': u'0600', u'key': u'AQC93KRbAAAAABAA70vXmXELJWdqPtg4IeQHzw==', u'name': u'client.openstack'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"auth\", \"import\", \"-i\", \"/etc/ceph/ceph.client.openstack.keyring\"], \"delta\": \"0:00:00.867358\", \"end\": \"2018-09-21 12:30:23.169661\", \"item\": {\"caps\": {\"mgr\": \"allow *\", \"mon\": \"profile rbd\", \"osd\": \"profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics\"}, \"key\": \"AQC93KRbAAAAABAA70vXmXELJWdqPtg4IeQHzw==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, \"rc\": 0, \"start\": \"2018-09-21 12:30:22.302303\", \"stderr\": \"imported keyring\", \"stderr_lines\": [\"imported keyring\"], \"stdout\": \"\", \"stdout_lines\": []}", "changed: [compute-0 -> 192.168.24.18] => (item={u'caps': {u'mds': u'allow *', u'osd': u'allow rw', u'mgr': u'allow *', u'mon': u\"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'\"}, u'mode': u'0600', u'key': u'AQC93KRbAAAAABAAh3WQEraYvoGJBcWWeAglYg==', u'name': 
u'client.manila'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"auth\", \"import\", \"-i\", \"/etc/ceph/ceph.client.manila.keyring\"], \"delta\": \"0:00:00.928620\", \"end\": \"2018-09-21 12:30:24.277324\", \"item\": {\"caps\": {\"mds\": \"allow *\", \"mgr\": \"allow *\", \"mon\": \"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'\", \"osd\": \"allow rw\"}, \"key\": \"AQC93KRbAAAAABAAh3WQEraYvoGJBcWWeAglYg==\", \"mode\": \"0600\", \"name\": \"client.manila\"}, \"rc\": 0, \"start\": \"2018-09-21 12:30:23.348704\", \"stderr\": \"imported keyring\", \"stderr_lines\": [\"imported keyring\"], \"stdout\": \"\", \"stdout_lines\": []}", "changed: [compute-0 -> 192.168.24.18] => (item={u'caps': {u'mgr': u'allow *', u'mon': u'allow rw', u'osd': u'allow rwx'}, u'mode': u'0600', u'key': u'AQC93KRbAAAAABAAJK/AdOCua9UOcCGeve+zYA==', u'name': u'client.radosgw'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"auth\", \"import\", \"-i\", \"/etc/ceph/ceph.client.radosgw.keyring\"], \"delta\": \"0:00:00.912810\", \"end\": \"2018-09-21 12:30:25.374281\", \"item\": {\"caps\": {\"mgr\": \"allow *\", \"mon\": \"allow rw\", \"osd\": \"allow rwx\"}, \"key\": \"AQC93KRbAAAAABAAJK/AdOCua9UOcCGeve+zYA==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}, \"rc\": 0, \"start\": \"2018-09-21 12:30:24.461471\", \"stderr\": \"imported keyring\", \"stderr_lines\": [\"imported keyring\"], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-client : slurp client cephx key(s)] *********************************", "task path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:62", "Friday 21 September 2018 08:30:25 -0400 (0:00:03.362) 0:04:07.559 ****** ", "ok: [compute-0 -> 192.168.24.18] => (item={u'caps': {u'mgr': u'allow *', u'mon': u'profile rbd', u'osd': u'profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics'}, u'mode': u'0600', u'key': u'AQC93KRbAAAAABAA70vXmXELJWdqPtg4IeQHzw==', u'name': u'client.openstack'}) => {\"changed\": false, \"content\": \"W2NsaWVudC5vcGVuc3RhY2tdCglrZXkgPSBBUUM5M0tSYkFBQUFBQkFBNzB2WG1YRUxKV2RxUHRnNEllUUh6dz09CgljYXBzIG1nciA9ICJhbGxvdyAqIgoJY2FwcyBtb24gPSAicHJvZmlsZSByYmQiCgljYXBzIG9zZCA9ICJwcm9maWxlIHJiZCBwb29sPXZvbHVtZXMsIHByb2ZpbGUgcmJkIHBvb2w9YmFja3VwcywgcHJvZmlsZSByYmQgcG9vbD12bXMsIHByb2ZpbGUgcmJkIHBvb2w9aW1hZ2VzLCBwcm9maWxlIHJiZCBwb29sPW1ldHJpY3MiCg==\", \"encoding\": \"base64\", \"item\": {\"caps\": {\"mgr\": \"allow *\", \"mon\": \"profile rbd\", \"osd\": \"profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics\"}, \"key\": \"AQC93KRbAAAAABAA70vXmXELJWdqPtg4IeQHzw==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, \"source\": \"/etc/ceph/ceph.client.openstack.keyring\"}", "ok: [compute-0 -> 192.168.24.18] => (item={u'caps': {u'mds': u'allow *', u'osd': u'allow rw', u'mgr': u'allow *', u'mon': u\"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'\"}, u'mode': u'0600', u'key': u'AQC93KRbAAAAABAAh3WQEraYvoGJBcWWeAglYg==', u'name': u'client.manila'}) => {\"changed\": false, \"content\": 
\"W2NsaWVudC5tYW5pbGFdCglrZXkgPSBBUUM5M0tSYkFBQUFBQkFBaDNXUUVyYVl2b0dKQmNXV2VBZ2xZZz09CgljYXBzIG1kcyA9ICJhbGxvdyAqIgoJY2FwcyBtZ3IgPSAiYWxsb3cgKiIKCWNhcHMgbW9uID0gImFsbG93IHIsIGFsbG93IGNvbW1hbmQgJ2F1dGggZGVsJywgYWxsb3cgY29tbWFuZCAnYXV0aCBjYXBzJywgYWxsb3cgY29tbWFuZCAnYXV0aCBnZXQnLCBhbGxvdyBjb21tYW5kICdhdXRoIGdldC1vci1jcmVhdGUnIgoJY2FwcyBvc2QgPSAiYWxsb3cgcnciCg==\", \"encoding\": \"base64\", \"item\": {\"caps\": {\"mds\": \"allow *\", \"mgr\": \"allow *\", \"mon\": \"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'\", \"osd\": \"allow rw\"}, \"key\": \"AQC93KRbAAAAABAAh3WQEraYvoGJBcWWeAglYg==\", \"mode\": \"0600\", \"name\": \"client.manila\"}, \"source\": \"/etc/ceph/ceph.client.manila.keyring\"}", "ok: [compute-0 -> 192.168.24.18] => (item={u'caps': {u'mgr': u'allow *', u'mon': u'allow rw', u'osd': u'allow rwx'}, u'mode': u'0600', u'key': u'AQC93KRbAAAAABAAJK/AdOCua9UOcCGeve+zYA==', u'name': u'client.radosgw'}) => {\"changed\": false, \"content\": \"W2NsaWVudC5yYWRvc2d3XQoJa2V5ID0gQVFDOTNLUmJBQUFBQUJBQUpLL0FkT0N1YTlVT2NDR2V2ZSt6WUE9PQoJY2FwcyBtZ3IgPSAiYWxsb3cgKiIKCWNhcHMgbW9uID0gImFsbG93IHJ3IgoJY2FwcyBvc2QgPSAiYWxsb3cgcnd4Igo=\", \"encoding\": \"base64\", \"item\": {\"caps\": {\"mgr\": \"allow *\", \"mon\": \"allow rw\", \"osd\": \"allow rwx\"}, \"key\": \"AQC93KRbAAAAABAAJK/AdOCua9UOcCGeve+zYA==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}, \"source\": \"/etc/ceph/ceph.client.radosgw.keyring\"}", "", "TASK [ceph-client : list existing pool(s)] *************************************", "task path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:74", "Friday 21 September 2018 08:30:26 -0400 (0:00:00.603) 0:04:08.162 ****** ", "", "TASK [ceph-client : create ceph pool(s)] ***************************************", "task path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:86", "Friday 21 September 2018 08:30:26 -0400 (0:00:00.048) 0:04:08.211 ****** ", "", "TASK [ceph-client : get client cephx keys] *************************************", "task path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:109", "Friday 21 September 2018 08:30:26 -0400 (0:00:00.045) 0:04:08.257 ****** ", "changed: [compute-0] => (item={'_ansible_parsed': True, 'changed': False, '_ansible_no_log': False, u'encoding': u'base64', '_ansible_item_result': True, u'content': u'W2NsaWVudC5vcGVuc3RhY2tdCglrZXkgPSBBUUM5M0tSYkFBQUFBQkFBNzB2WG1YRUxKV2RxUHRnNEllUUh6dz09CgljYXBzIG1nciA9ICJhbGxvdyAqIgoJY2FwcyBtb24gPSAicHJvZmlsZSByYmQiCgljYXBzIG9zZCA9ICJwcm9maWxlIHJiZCBwb29sPXZvbHVtZXMsIHByb2ZpbGUgcmJkIHBvb2w9YmFja3VwcywgcHJvZmlsZSByYmQgcG9vbD12bXMsIHByb2ZpbGUgcmJkIHBvb2w9aW1hZ2VzLCBwcm9maWxlIHJiZCBwb29sPW1ldHJpY3MiCg==', 'failed': False, u'source': u'/etc/ceph/ceph.client.openstack.keyring', 'item': {u'mode': u'0600', u'name': u'client.openstack', u'key': u'AQC93KRbAAAAABAA70vXmXELJWdqPtg4IeQHzw==', u'caps': {u'mgr': u'allow *', u'mon': u'profile rbd', u'osd': u'profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics'}}, u'invocation': {u'module_args': {u'src': u'/etc/ceph/ceph.client.openstack.keyring'}}, '_ansible_delegated_vars': {'ansible_delegated_host': u'controller-0', 'ansible_host': u'192.168.24.18'}, '_ansible_ignore_errors': None, '_ansible_item_label': {u'name': u'client.openstack', u'mode': u'0600', u'key': u'AQC93KRbAAAAABAA70vXmXELJWdqPtg4IeQHzw==', u'caps': {u'mgr': u'allow *', u'mon': u'profile rbd', 
u'osd': u'profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics'}}}) => {\"changed\": true, \"checksum\": \"40ed8b50cf9c2c93b1fd620a66672adaecbdd5ae\", \"dest\": \"/etc/ceph/ceph.client.openstack.keyring\", \"gid\": 167, \"group\": \"167\", \"item\": {\"changed\": false, \"content\": \"W2NsaWVudC5vcGVuc3RhY2tdCglrZXkgPSBBUUM5M0tSYkFBQUFBQkFBNzB2WG1YRUxKV2RxUHRnNEllUUh6dz09CgljYXBzIG1nciA9ICJhbGxvdyAqIgoJY2FwcyBtb24gPSAicHJvZmlsZSByYmQiCgljYXBzIG9zZCA9ICJwcm9maWxlIHJiZCBwb29sPXZvbHVtZXMsIHByb2ZpbGUgcmJkIHBvb2w9YmFja3VwcywgcHJvZmlsZSByYmQgcG9vbD12bXMsIHByb2ZpbGUgcmJkIHBvb2w9aW1hZ2VzLCBwcm9maWxlIHJiZCBwb29sPW1ldHJpY3MiCg==\", \"encoding\": \"base64\", \"failed\": false, \"invocation\": {\"module_args\": {\"src\": \"/etc/ceph/ceph.client.openstack.keyring\"}}, \"item\": {\"caps\": {\"mgr\": \"allow *\", \"mon\": \"profile rbd\", \"osd\": \"profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics\"}, \"key\": \"AQC93KRbAAAAABAA70vXmXELJWdqPtg4IeQHzw==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, \"source\": \"/etc/ceph/ceph.client.openstack.keyring\"}, \"md5sum\": \"a6757c87664e50e0fa2a4a0c24ffa2db\", \"mode\": \"0600\", \"owner\": \"167\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 253, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1537533026.24-153171000360498/source\", \"state\": \"file\", \"uid\": 167}", "changed: [compute-0] => (item={'_ansible_parsed': True, 'changed': False, '_ansible_no_log': False, u'encoding': u'base64', '_ansible_item_result': True, u'content': u'W2NsaWVudC5tYW5pbGFdCglrZXkgPSBBUUM5M0tSYkFBQUFBQkFBaDNXUUVyYVl2b0dKQmNXV2VBZ2xZZz09CgljYXBzIG1kcyA9ICJhbGxvdyAqIgoJY2FwcyBtZ3IgPSAiYWxsb3cgKiIKCWNhcHMgbW9uID0gImFsbG93IHIsIGFsbG93IGNvbW1hbmQgJ2F1dGggZGVsJywgYWxsb3cgY29tbWFuZCAnYXV0aCBjYXBzJywgYWxsb3cgY29tbWFuZCAnYXV0aCBnZXQnLCBhbGxvdyBjb21tYW5kICdhdXRoIGdldC1vci1jcmVhdGUnIgoJY2FwcyBvc2QgPSAiYWxsb3cgcnciCg==', 'failed': False, u'source': u'/etc/ceph/ceph.client.manila.keyring', 'item': {u'mode': u'0600', u'name': u'client.manila', u'key': u'AQC93KRbAAAAABAAh3WQEraYvoGJBcWWeAglYg==', u'caps': {u'mds': u'allow *', u'osd': u'allow rw', u'mon': u\"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'\", u'mgr': u'allow *'}}, u'invocation': {u'module_args': {u'src': u'/etc/ceph/ceph.client.manila.keyring'}}, '_ansible_delegated_vars': {'ansible_delegated_host': u'controller-0', 'ansible_host': u'192.168.24.18'}, '_ansible_ignore_errors': None, '_ansible_item_label': {u'name': u'client.manila', u'mode': u'0600', u'key': u'AQC93KRbAAAAABAAh3WQEraYvoGJBcWWeAglYg==', u'caps': {u'mds': u'allow *', u'osd': u'allow rw', u'mon': u\"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'\", u'mgr': u'allow *'}}}) => {\"changed\": true, \"checksum\": \"e119bc7d0367829cffba7f254fed5c0f7663e7a7\", \"dest\": \"/etc/ceph/ceph.client.manila.keyring\", \"gid\": 167, \"group\": \"167\", \"item\": {\"changed\": false, \"content\": \"W2NsaWVudC5tYW5pbGFdCglrZXkgPSBBUUM5M0tSYkFBQUFBQkFBaDNXUUVyYVl2b0dKQmNXV2VBZ2xZZz09CgljYXBzIG1kcyA9ICJhbGxvdyAqIgoJY2FwcyBtZ3IgPSAiYWxsb3cgKiIKCWNhcHMgbW9uID0gImFsbG93IHIsIGFsbG93IGNvbW1hbmQgJ2F1dGggZGVsJywgYWxsb3cgY29tbWFuZCAnYXV0aCBjYXBzJywgYWxsb3cgY29tbWFuZCAnYXV0aCBnZXQnLCBhbGxvdyBjb21tYW5kICdhdXRoIGdldC1vci1jcmVhdGUnIgoJY2FwcyBvc2QgPSAiYWxsb3cgcnciCg==\", \"encoding\": 
\"base64\", \"failed\": false, \"invocation\": {\"module_args\": {\"src\": \"/etc/ceph/ceph.client.manila.keyring\"}}, \"item\": {\"caps\": {\"mds\": \"allow *\", \"mgr\": \"allow *\", \"mon\": \"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'\", \"osd\": \"allow rw\"}, \"key\": \"AQC93KRbAAAAABAAh3WQEraYvoGJBcWWeAglYg==\", \"mode\": \"0600\", \"name\": \"client.manila\"}, \"source\": \"/etc/ceph/ceph.client.manila.keyring\"}, \"md5sum\": \"d42eb2e49e090ff13248fba0db5c0a6f\", \"mode\": \"0600\", \"owner\": \"167\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 268, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1537533026.7-38127782555482/source\", \"state\": \"file\", \"uid\": 167}", "changed: [compute-0] => (item={'_ansible_parsed': True, 'changed': False, '_ansible_no_log': False, u'encoding': u'base64', '_ansible_item_result': True, u'content': u'W2NsaWVudC5yYWRvc2d3XQoJa2V5ID0gQVFDOTNLUmJBQUFBQUJBQUpLL0FkT0N1YTlVT2NDR2V2ZSt6WUE9PQoJY2FwcyBtZ3IgPSAiYWxsb3cgKiIKCWNhcHMgbW9uID0gImFsbG93IHJ3IgoJY2FwcyBvc2QgPSAiYWxsb3cgcnd4Igo=', 'failed': False, u'source': u'/etc/ceph/ceph.client.radosgw.keyring', 'item': {u'mode': u'0600', u'name': u'client.radosgw', u'key': u'AQC93KRbAAAAABAAJK/AdOCua9UOcCGeve+zYA==', u'caps': {u'mgr': u'allow *', u'mon': u'allow rw', u'osd': u'allow rwx'}}, u'invocation': {u'module_args': {u'src': u'/etc/ceph/ceph.client.radosgw.keyring'}}, '_ansible_delegated_vars': {'ansible_delegated_host': u'controller-0', 'ansible_host': u'192.168.24.18'}, '_ansible_ignore_errors': None, '_ansible_item_label': {u'name': u'client.radosgw', u'mode': u'0600', u'key': u'AQC93KRbAAAAABAAJK/AdOCua9UOcCGeve+zYA==', u'caps': {u'mgr': u'allow *', u'mon': u'allow rw', u'osd': u'allow rwx'}}}) => {\"changed\": true, \"checksum\": \"32018e3d91a7d0c0ff43f9db5459f66424dd1f38\", \"dest\": \"/etc/ceph/ceph.client.radosgw.keyring\", \"gid\": 167, \"group\": \"167\", \"item\": {\"changed\": false, \"content\": \"W2NsaWVudC5yYWRvc2d3XQoJa2V5ID0gQVFDOTNLUmJBQUFBQUJBQUpLL0FkT0N1YTlVT2NDR2V2ZSt6WUE9PQoJY2FwcyBtZ3IgPSAiYWxsb3cgKiIKCWNhcHMgbW9uID0gImFsbG93IHJ3IgoJY2FwcyBvc2QgPSAiYWxsb3cgcnd4Igo=\", \"encoding\": \"base64\", \"failed\": false, \"invocation\": {\"module_args\": {\"src\": \"/etc/ceph/ceph.client.radosgw.keyring\"}}, \"item\": {\"caps\": {\"mgr\": \"allow *\", \"mon\": \"allow rw\", \"osd\": \"allow rwx\"}, \"key\": \"AQC93KRbAAAAABAAJK/AdOCua9UOcCGeve+zYA==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}, \"source\": \"/etc/ceph/ceph.client.radosgw.keyring\"}, \"md5sum\": \"18e4740c17d5c0f4ef7090358897cb02\", \"mode\": \"0600\", \"owner\": \"167\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 134, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1537533027.17-108306519705611/source\", \"state\": \"file\", \"uid\": 167}", "", "RUNNING HANDLER [ceph-defaults : set _mon_handler_called before restart] *******", "Friday 21 September 2018 08:30:27 -0400 (0:00:01.491) 0:04:09.749 ****** ", "ok: [compute-0] => {\"ansible_facts\": {\"_mon_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy mon restart script] **********************", "Friday 21 September 2018 08:30:27 -0400 (0:00:00.174) 0:04:09.923 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mon daemon(s) - non container] ***", "Friday 21 September 2018 08:30:27 -0400 (0:00:00.051) 0:04:09.975 ****** 
", "skipping: [compute-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mon daemon(s) - container] *******", "Friday 21 September 2018 08:30:27 -0400 (0:00:00.085) 0:04:10.060 ****** ", "skipping: [compute-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _mon_handler_called after restart] ********", "Friday 21 September 2018 08:30:28 -0400 (0:00:00.083) 0:04:10.144 ****** ", "ok: [compute-0] => {\"ansible_facts\": {\"_mon_handler_called\": false}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : set _osd_handler_called before restart] *******", "Friday 21 September 2018 08:30:28 -0400 (0:00:00.178) 0:04:10.323 ****** ", "ok: [compute-0] => {\"ansible_facts\": {\"_osd_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy osd restart script] **********************", "Friday 21 September 2018 08:30:28 -0400 (0:00:00.187) 0:04:10.510 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph osds daemon(s) - non container] ***", "Friday 21 September 2018 08:30:28 -0400 (0:00:00.044) 0:04:10.555 ****** ", "skipping: [compute-0] => (item=ceph-0) => {\"changed\": false, \"item\": \"ceph-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph osds daemon(s) - container] ******", "Friday 21 September 2018 08:30:28 -0400 (0:00:00.086) 0:04:10.642 ****** ", "skipping: [compute-0] => (item=ceph-0) => {\"changed\": false, \"item\": \"ceph-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _osd_handler_called after restart] ********", "Friday 21 September 2018 08:30:28 -0400 (0:00:00.091) 0:04:10.734 ****** ", "ok: [compute-0] => {\"ansible_facts\": {\"_osd_handler_called\": false}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : set _mds_handler_called before restart] *******", "Friday 21 September 2018 08:30:28 -0400 (0:00:00.185) 0:04:10.919 ****** ", "ok: [compute-0] => {\"ansible_facts\": {\"_mds_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy mds restart script] **********************", "Friday 21 September 2018 08:30:28 -0400 (0:00:00.160) 0:04:11.080 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mds daemon(s) - non container] ***", "Friday 21 September 2018 08:30:29 -0400 (0:00:00.048) 0:04:11.128 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mds daemon(s) - container] *******", "Friday 21 September 2018 08:30:29 -0400 (0:00:00.059) 0:04:11.188 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _mds_handler_called after restart] ********", "Friday 21 September 2018 08:30:29 -0400 (0:00:00.060) 0:04:11.248 ****** ", "ok: [compute-0] => {\"ansible_facts\": {\"_mds_handler_called\": false}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : set _rgw_handler_called before restart] *******", "Friday 21 September 
2018 08:30:29 -0400 (0:00:00.076) 0:04:11.325 ****** ", "ok: [compute-0] => {\"ansible_facts\": {\"_rgw_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy rgw restart script] **********************", "Friday 21 September 2018 08:30:29 -0400 (0:00:00.082) 0:04:11.408 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph rgw daemon(s) - non container] ***", "Friday 21 September 2018 08:30:29 -0400 (0:00:00.049) 0:04:11.457 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph rgw daemon(s) - container] *******", "Friday 21 September 2018 08:30:29 -0400 (0:00:00.056) 0:04:11.514 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _rgw_handler_called after restart] ********", "Friday 21 September 2018 08:30:29 -0400 (0:00:00.055) 0:04:11.569 ****** ", "ok: [compute-0] => {\"ansible_facts\": {\"_rgw_handler_called\": false}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : set _rbdmirror_handler_called before restart] ***", "Friday 21 September 2018 08:30:29 -0400 (0:00:00.075) 0:04:11.645 ****** ", "ok: [compute-0] => {\"ansible_facts\": {\"_rbdmirror_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy rbd mirror restart script] ***************", "Friday 21 September 2018 08:30:29 -0400 (0:00:00.077) 0:04:11.722 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph rbd mirror daemon(s) - non container] ***", "Friday 21 September 2018 08:30:29 -0400 (0:00:00.047) 0:04:11.770 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph rbd mirror daemon(s) - container] ***", "Friday 21 September 2018 08:30:29 -0400 (0:00:00.059) 0:04:11.829 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _rbdmirror_handler_called after restart] ***", "Friday 21 September 2018 08:30:29 -0400 (0:00:00.060) 0:04:11.890 ****** ", "ok: [compute-0] => {\"ansible_facts\": {\"_rbdmirror_handler_called\": false}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : set _mgr_handler_called before restart] *******", "Friday 21 September 2018 08:30:29 -0400 (0:00:00.079) 0:04:11.969 ****** ", "ok: [compute-0] => {\"ansible_facts\": {\"_mgr_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy mgr restart script] **********************", "Friday 21 September 2018 08:30:29 -0400 (0:00:00.079) 0:04:12.048 ****** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - non container] ***", "Friday 21 September 2018 08:30:29 -0400 (0:00:00.044) 0:04:12.093 ****** ", "skipping: [compute-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - container] *******", "Friday 21 September 2018 08:30:30 -0400 (0:00:00.088) 0:04:12.181 ****** ", 
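
[Annotation] The key-distribution pattern visible in the tasks above: ceph-ansible delegates to the first monitor (controller-0), imports each pre-generated client keyring inside the running mon container, slurps the resulting keyring files back as base64, and writes them to /etc/ceph on the client node with mode 0600 and owner 167 (the ceph user). A minimal sketch of the equivalent manual steps, using only the names and paths shown in this log (illustrative, not a supported procedure):

    # Import a pre-generated keyring on the mon, as the "create cephx key(s)" task does
    docker exec ceph-mon-controller-0 \
      ceph --cluster ceph auth import -i /etc/ceph/ceph.client.openstack.keyring
    # Confirm the entity and its caps were registered
    docker exec ceph-mon-controller-0 ceph auth get client.openstack

The NOTIFIED HANDLER / RUNNING HANDLER pairs surrounding this are ceph-ansible's restart machinery: every daemon handler is notified because ceph.conf changed, but each actual restart task skips on compute-0, since no mon/osd/mds/rgw/mgr/rbd-mirror daemon runs on a client-only node.
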
"skipping: [compute-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _mgr_handler_called after restart] ********", "Friday 21 September 2018 08:30:30 -0400 (0:00:00.094) 0:04:12.276 ****** ", "ok: [compute-0] => {\"ansible_facts\": {\"_mgr_handler_called\": false}, \"changed\": false}", "META: ran handlers", "", "TASK [set ceph client install 'Complete'] **************************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:325", "Friday 21 September 2018 08:30:30 -0400 (0:00:00.103) 0:04:12.380 ****** ", "ok: [compute-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_client\": {\"end\": \"20180921083030Z\", \"status\": \"Complete\"}}, \"per_host\": false}, \"changed\": false}", "META: ran handlers", "", "PLAY RECAP *********************************************************************", "ceph-0 : ok=88 changed=19 unreachable=0 failed=0 ", "compute-0 : ok=55 changed=7 unreachable=0 failed=0 ", "controller-0 : ok=121 changed=22 unreachable=0 failed=0 ", "", "", "INSTALLER STATUS ***************************************************************", "Install Ceph Monitor : Complete (0:01:02)", "Install Ceph Manager : Complete (0:00:25)", "Install Ceph OSD : Complete (0:01:49)", "Install Ceph Client : Complete (0:00:40)", "", "Friday 21 September 2018 08:30:30 -0400 (0:00:00.074) 0:04:12.454 ****** ", "=============================================================================== "]} > > >TASK [set ceph-ansible group vars mgrs] **************************************** >Friday 21 September 2018 08:30:30 -0400 (0:04:16.645) 0:13:53.258 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [generate ceph-ansible group vars mgrs] *********************************** >Friday 21 September 2018 08:30:30 -0400 (0:00:00.038) 0:13:53.296 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [set ceph-ansible group vars mons] **************************************** >Friday 21 September 2018 08:30:30 -0400 (0:00:00.034) 0:13:53.331 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [generate ceph-ansible group vars mons] *********************************** >Friday 21 September 2018 08:30:30 -0400 (0:00:00.034) 0:13:53.366 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [set_fact] **************************************************************** >Friday 21 September 2018 08:30:30 -0400 (0:00:00.030) 0:13:53.396 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Create temp file for prepare parameter] ********************************** >Friday 21 September 2018 08:30:30 -0400 (0:00:00.032) 0:13:53.429 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Create temp file for role data] ****************************************** >Friday 21 September 2018 08:30:30 -0400 (0:00:00.031) 0:13:53.460 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Write ContainerImagePrepare parameter file] ****************************** >Friday 21 September 2018 08:30:31 -0400 (0:00:00.032) 0:13:53.493 ****** >skipping: [undercloud] => {"changed": false, 
"skip_reason": "Conditional result was False"} > >TASK [Write role data file] **************************************************** >Friday 21 September 2018 08:30:31 -0400 (0:00:00.040) 0:13:53.533 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Run tripleo-container-image-prepare] ************************************* >Friday 21 September 2018 08:30:31 -0400 (0:00:00.042) 0:13:53.576 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Delete param file] ******************************************************* >Friday 21 September 2018 08:30:31 -0400 (0:00:00.038) 0:13:53.614 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Delete role file] ******************************************************** >Friday 21 September 2018 08:30:31 -0400 (0:00:00.036) 0:13:53.651 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [set ceph-ansible group vars clients] ************************************* >Friday 21 September 2018 08:30:31 -0400 (0:00:00.035) 0:13:53.687 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [generate ceph-ansible group vars clients] ******************************** >Friday 21 September 2018 08:30:31 -0400 (0:00:00.036) 0:13:53.723 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [set ceph-ansible group vars osds] **************************************** >Friday 21 September 2018 08:30:31 -0400 (0:00:00.045) 0:13:53.768 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [generate ceph-ansible group vars osds] *********************************** >Friday 21 September 2018 08:30:31 -0400 (0:00:00.036) 0:13:53.805 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >PLAY [Overcloud deploy step tasks for 2] *************************************** > >PLAY [Overcloud common deploy step tasks 2] ************************************ > >TASK [Create /var/lib/tripleo-config directory] ******************************** >Friday 21 September 2018 08:30:31 -0400 (0:00:00.065) 0:13:53.871 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Write the puppet step_config manifest] *********************************** >Friday 21 September 2018 08:30:31 -0400 (0:00:00.110) 0:13:53.981 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Create /var/lib/docker-puppet] ******************************************* >Friday 21 September 2018 08:30:31 -0400 (0:00:00.111) 0:13:54.092 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Write 
docker-puppet.json file] ******************************************* >Friday 21 September 2018 08:30:31 -0400 (0:00:00.119) 0:13:54.212 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Create /var/lib/docker-config-scripts] *********************************** >Friday 21 September 2018 08:30:31 -0400 (0:00:00.102) 0:13:54.314 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Clean old /var/lib/docker-container-startup-configs.json file] *********** >Friday 21 September 2018 08:30:31 -0400 (0:00:00.104) 0:13:54.419 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Write docker config scripts] ********************************************* >Friday 21 September 2018 08:30:32 -0400 (0:00:00.111) 0:13:54.530 ****** >skipping: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nexport OS_PROJECT_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_domain_name)\nexport OS_USER_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken user_domain_name)\nexport OS_PROJECT_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_name)\nexport OS_USERNAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken username)\nexport OS_PASSWORD=$(crudini --get /etc/nova/nova.conf keystone_authtoken password)\nexport OS_AUTH_URL=$(crudini --get /etc/nova/nova.conf keystone_authtoken auth_url)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho "(cellv2) Running cell_v2 host discovery"\ntimeout=600\nloop_wait=30\ndeclare -A discoverable_hosts\nfor host in $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e \'/^nil$/d\' | tr "," " "); do discoverable_hosts[$host]=1; done\ntimeout_at=$(( $(date +"%s") + ${timeout} ))\necho "(cellv2) Waiting ${timeout} seconds for hosts to register"\nfinished=0\nwhile : ; do\n for host in $(openstack -q compute service list -c \'Host\' -c \'Zone\' -f value | awk \'$2 != "internal" { print $1 }\'); do\n if (( discoverable_hosts[$host] == 1 )); then\n echo "(cellv2) compute node $host has registered"\n unset discoverable_hosts[$host]\n fi\n done\n finished=1\n for host in "${!discoverable_hosts[@]}"; do\n if (( ${discoverable_hosts[$host]} == 1 )); then\n echo "(cellv2) compute node $host has not registered"\n finished=0\n fi\n done\n remaining=$(( $timeout_at - $(date +"%s") ))\n if (( $finished == 1 )); then\n echo "(cellv2) All nodes registered"\n break\n elif (( $remaining <= 0 )); then\n echo "(cellv2) WARNING: timeout waiting for nodes to register, running host discovery regardless"\n echo "(cellv2) Expected host list:" $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e \'/^nil$/d\' | sort -u | tr \',\' \' \')\n echo "(cellv2) Detected host list:" $(openstack -q compute service list -c \'Host\' -c \'Zone\' -f value | awk \'$2 != "internal" { print $1 }\' | sort -u 
| tr \'\\n\', \' \')\n break\n else\n echo "(cellv2) Waiting ${remaining} seconds for hosts to register"\n sleep $loop_wait\n fi\ndone\necho "(cellv2) Running host discovery..."\nsu nova -s /bin/bash -c "/usr/bin/nova-manage cell_v2 discover_hosts --by-service --verbose"\n', 'mode': u'0700'}, 'key': u'nova_api_discover_hosts.sh'}) => {"changed": false, "item": {"key": "nova_api_discover_hosts.sh", "value": {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_domain_name)\nexport OS_USER_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken user_domain_name)\nexport OS_PROJECT_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_name)\nexport OS_USERNAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken username)\nexport OS_PASSWORD=$(crudini --get /etc/nova/nova.conf keystone_authtoken password)\nexport OS_AUTH_URL=$(crudini --get /etc/nova/nova.conf keystone_authtoken auth_url)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho \"(cellv2) Running cell_v2 host discovery\"\ntimeout=600\nloop_wait=30\ndeclare -A discoverable_hosts\nfor host in $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e '/^nil$/d' | tr \",\" \" \"); do discoverable_hosts[$host]=1; done\ntimeout_at=$(( $(date +\"%s\") + ${timeout} ))\necho \"(cellv2) Waiting ${timeout} seconds for hosts to register\"\nfinished=0\nwhile : ; do\n for host in $(openstack -q compute service list -c 'Host' -c 'Zone' -f value | awk '$2 != \"internal\" { print $1 }'); do\n if (( discoverable_hosts[$host] == 1 )); then\n echo \"(cellv2) compute node $host has registered\"\n unset discoverable_hosts[$host]\n fi\n done\n finished=1\n for host in \"${!discoverable_hosts[@]}\"; do\n if (( ${discoverable_hosts[$host]} == 1 )); then\n echo \"(cellv2) compute node $host has not registered\"\n finished=0\n fi\n done\n remaining=$(( $timeout_at - $(date +\"%s\") ))\n if (( $finished == 1 )); then\n echo \"(cellv2) All nodes registered\"\n break\n elif (( $remaining <= 0 )); then\n echo \"(cellv2) WARNING: timeout waiting for nodes to register, running host discovery regardless\"\n echo \"(cellv2) Expected host list:\" $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e '/^nil$/d' | sort -u | tr ',' ' ')\n echo \"(cellv2) Detected host list:\" $(openstack -q compute service list -c 'Host' -c 'Zone' -f value | awk '$2 != \"internal\" { print $1 }' | sort -u | tr '\\n', ' ')\n break\n else\n echo \"(cellv2) Waiting ${remaining} seconds for hosts to register\"\n sleep $loop_wait\n fi\ndone\necho \"(cellv2) Running host discovery...\"\nsu nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 discover_hosts --by-service --verbose\"\n", "mode": "0700"}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho "Check if secret already 
exists"\nsecret_href=$(openstack secret list --name swift_root_secret_uuid)\nrc=$?\nif [[ $rc != 0 ]]; then\n echo "Failed to check secrets, check if Barbican in enabled and responding properly"\n exit $rc;\nfi\nif [ -z "$secret_href" ]; then\n echo "Create new secret"\n order_href=$(openstack secret order create --name swift_root_secret_uuid --payload-content-type="application/octet-stream" --algorithm aes --bit-length 256 --mode ctr key -f value -c "Order href")\nfi\n', 'mode': u'0700'}, 'key': u'create_swift_secret.sh'}) => {"changed": false, "item": {"key": "create_swift_secret.sh", "value": {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho \"Check if secret already exists\"\nsecret_href=$(openstack secret list --name swift_root_secret_uuid)\nrc=$?\nif [[ $rc != 0 ]]; then\n echo \"Failed to check secrets, check if Barbican in enabled and responding properly\"\n exit $rc;\nfi\nif [ -z \"$secret_href\" ]; then\n echo \"Create new secret\"\n order_href=$(openstack secret order create --name swift_root_secret_uuid --payload-content-type=\"application/octet-stream\" --algorithm aes --bit-length 256 --mode ctr key -f value -c \"Order href\")\nfi\n", "mode": "0700"}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n', 'mode': u'0755'}, 'key': u'neutron_ovs_agent_launcher.sh'}) => {"changed": false, "item": {"key": "neutron_ovs_agent_launcher.sh", "value": {"content": "#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n", "mode": "0755"}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get 
/etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\necho "retrieve key_id"\nloop_wait=2\nfor i in {0..5}; do\n #TODO update uuid from mistral here too\n secret_href=$(openstack secret list --name swift_root_secret_uuid)\n if [ "$secret_href" ]; then\n echo "set key_id in keymaster.conf"\n secret_href=$(openstack secret list --name swift_root_secret_uuid -f value -c "Secret href")\n crudini --set /etc/swift/keymaster.conf kms_keymaster key_id ${secret_href##*/}\n exit 0\n else\n echo "no key, wait for $loop_wait and check again"\n sleep $loop_wait\n ((loop_wait++))\n fi\ndone\necho "Failed to set secret in keymaster.conf, check if Barbican is enabled and responding properly"\nexit 1\n', 'mode': u'0700'}, 'key': u'set_swift_keymaster_key_id.sh'}) => {"changed": false, "item": {"key": "set_swift_keymaster_key_id.sh", "value": {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\necho \"retrieve key_id\"\nloop_wait=2\nfor i in {0..5}; do\n #TODO update uuid from mistral here too\n secret_href=$(openstack secret list --name swift_root_secret_uuid)\n if [ \"$secret_href\" ]; then\n echo \"set key_id in keymaster.conf\"\n secret_href=$(openstack secret list --name swift_root_secret_uuid -f value -c \"Secret href\")\n crudini --set /etc/swift/keymaster.conf kms_keymaster key_id ${secret_href##*/}\n exit 0\n else\n echo \"no key, wait for $loop_wait and check again\"\n sleep $loop_wait\n ((loop_wait++))\n fi\ndone\necho \"Failed to set secret in keymaster.conf, check if Barbican is enabled and responding properly\"\nexit 1\n", "mode": "0700"}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nset -eux\nSTEP=$1\nTAGS=$2\nCONFIG=$3\nEXTRA_ARGS=${4:-\'\'}\nif [ -d /tmp/puppet-etc ]; then\n # ignore copy failures as these may be the same file depending on docker mounts\n cp -a /tmp/puppet-etc/* /etc/puppet || true\nfi\necho "{\\"step\\": ${STEP}}" > /etc/puppet/hieradata/docker.json\nexport FACTER_uuid=docker\nset +e\npuppet apply $EXTRA_ARGS \\\n --verbose \\\n --detailed-exitcodes \\\n --summarize \\\n --color=false \\\n --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules \\\n --tags $TAGS \\\n -e "${CONFIG}"\nrc=$?\nset -e\nset +ux\nif [ $rc -eq 2 -o $rc -eq 0 ]; then\n exit 0\nfi\nexit $rc\n', 'mode': u'0700'}, 'key': u'docker_puppet_apply.sh'}) => {"changed": false, "item": {"key": "docker_puppet_apply.sh", "value": {"content": "#!/bin/bash\nset 
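The set_swift_keymaster_key_id.sh script recorded above is the consumer side of the Swift at-rest-encryption setup: it retries the Barbican lookup up to six times with a linearly growing sleep, then writes the trailing UUID of the secret href into the kms_keymaster section of /etc/swift/keymaster.conf. The retry skeleton on its own (the fetch helper below simply wraps the same openstack call the script uses):

    #!/bin/bash
    # Bounded retry with linear backoff, as in set_swift_keymaster_key_id.sh.
    fetch_secret_href() {
        openstack secret list --name swift_root_secret_uuid -f value -c "Secret href"
    }
    loop_wait=2
    for i in {0..5}; do
        secret_href=$(fetch_secret_href)
        if [ -n "$secret_href" ]; then
            # key_id is the last path component (the UUID) of the secret href
            crudini --set /etc/swift/keymaster.conf kms_keymaster key_id "${secret_href##*/}"
            exit 0
        fi
        echo "no key yet, waiting ${loop_wait}s before retrying"
        sleep "$loop_wait"
        (( loop_wait++ ))
    done
    echo "Failed to set key_id; check that Barbican is enabled and responding" >&2
    exit 1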
-eux\nSTEP=$1\nTAGS=$2\nCONFIG=$3\nEXTRA_ARGS=${4:-''}\nif [ -d /tmp/puppet-etc ]; then\n # ignore copy failures as these may be the same file depending on docker mounts\n cp -a /tmp/puppet-etc/* /etc/puppet || true\nfi\necho \"{\\\"step\\\": ${STEP}}\" > /etc/puppet/hieradata/docker.json\nexport FACTER_uuid=docker\nset +e\npuppet apply $EXTRA_ARGS \\\n --verbose \\\n --detailed-exitcodes \\\n --summarize \\\n --color=false \\\n --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules \\\n --tags $TAGS \\\n -e \"${CONFIG}\"\nrc=$?\nset -e\nset +ux\nif [ $rc -eq 2 -o $rc -eq 0 ]; then\n exit 0\nfi\nexit $rc\n", "mode": "0700"}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nDEFID=$(nova-manage cell_v2 list_cells | sed -e \'1,3d\' -e \'$d\' | awk -F \' *| *\' \'$2 == "default" {print $4}\')\nif [ "$DEFID" ]; then\n echo "(cellv2) Updating default cell_v2 cell $DEFID"\n su nova -s /bin/bash -c "/usr/bin/nova-manage cell_v2 update_cell --cell_uuid $DEFID --name=default"\nelse\n echo "(cellv2) Creating default cell_v2 cell"\n su nova -s /bin/bash -c "/usr/bin/nova-manage cell_v2 create_cell --name=default"\nfi\n', 'mode': u'0700'}, 'key': u'nova_api_ensure_default_cell.sh'}) => {"changed": false, "item": {"key": "nova_api_ensure_default_cell.sh", "value": {"content": "#!/bin/bash\nDEFID=$(nova-manage cell_v2 list_cells | sed -e '1,3d' -e '$d' | awk -F ' *| *' '$2 == \"default\" {print $4}')\nif [ \"$DEFID\" ]; then\n echo \"(cellv2) Updating default cell_v2 cell $DEFID\"\n su nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 update_cell --cell_uuid $DEFID --name=default\"\nelse\n echo \"(cellv2) Creating default cell_v2 cell\"\n su nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 create_cell --name=default\"\nfi\n", "mode": "0700"}}, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item={'value': {'content': u'#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n', 'mode': u'0755'}, 'key': u'neutron_ovs_agent_launcher.sh'}) => {"changed": false, "item": {"key": "neutron_ovs_agent_launcher.sh", "value": {"content": "#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n", "mode": "0755"}}, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item={'value': {'content': u'#!/usr/bin/env python\n#\n# Copyright 2018 Red Hat Inc.\n#\n# Licensed under the Apache 
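docker_puppet_apply.sh above is a thin wrapper around `puppet apply --detailed-exitcodes`, which reports 0 for a successful run with no changes and 2 for a successful run that applied changes; the wrapper collapses both into exit 0 and propagates anything else as a failure. The exit-code handling in isolation (CONFIG stands in for the manifest string the real script receives as its third argument):

    #!/bin/bash
    # With --detailed-exitcodes, puppet apply exits 0 (no changes) or
    # 2 (changes applied) on success; treat both as success.
    set +e
    puppet apply --detailed-exitcodes -e "${CONFIG}"
    rc=$?
    set -e
    if [ $rc -eq 2 ] || [ $rc -eq 0 ]; then
        exit 0
    fi
    exit $rc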
License, Version 2.0 (the "License"); you may\n# not use this file except in compliance with the License. You may obtain\n# a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT\n# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the\n# License for the specific language governing permissions and limitations\n# under the License.\nfrom __future__ import print_function\nimport logging\nimport os\nimport pwd\nimport stat\nimport sys\n\nlogging.basicConfig(stream=sys.stdout, level=logging.DEBUG)\nLOG = logging.getLogger(\'nova_statedir\')\n\n\nclass PathManager(object):\n """Helper class to manipulate ownership of a given path"""\n def __init__(self, path):\n self.path = path\n self._update()\n\n def _update(self):\n statinfo = os.stat(self.path)\n self.is_dir = stat.S_ISDIR(statinfo.st_mode)\n self.uid = statinfo.st_uid\n self.gid = statinfo.st_gid\n\n def __str__(self):\n return "uid: {} gid: {} path: {}{}".format(\n self.uid,\n self.gid,\n self.path,\n \'/\' if self.is_dir else \'\'\n )\n\n def has_owner(self, uid, gid):\n return self.uid == uid and self.gid == gid\n\n def has_either(self, uid, gid):\n return self.uid == uid or self.gid == gid\n\n def chown(self, uid, gid):\n target_uid = -1\n target_gid = -1\n if self.uid != uid:\n target_uid = uid\n if self.gid != gid:\n target_gid = gid\n if (target_uid, target_gid) != (-1, -1):\n LOG.info(\'Changing ownership of %s from %d:%d to %d:%d\',\n self.path,\n self.uid,\n self.gid,\n self.uid if target_uid == -1 else target_uid,\n self.gid if target_gid == -1 else target_gid)\n os.chown(self.path, target_uid, target_gid)\n self._update()\n else:\n LOG.info(\'Ownership of %s already %d:%d\',\n self.path,\n uid,\n gid)\n\n\nclass NovaStatedirOwnershipManager(object):\n """Class to manipulate the ownership of the nova statedir (/var/lib/nova).\n\n The nova uid/gid differ on the host and container images. An upgrade\n that switches from host systemd services to docker requires a change in\n ownership. Previously this was a naive recursive chown, however this\n causes issues if nova instance are shared via an NFS mount: any open\n filehandles in qemu/libvirt fail with an I/O error (LP1778465).\n\n Instead the upgrade/FFU ansible tasks now lay down a marker file when\n stopping and disabling the host systemd services. We use this file to\n determine the host nova uid/gid. We then walk the tree and update any\n files that have the host uid/gid to the docker nova uid/gid. As files\n owned by root/qemu etc... are ignored this avoids the issues with open\n filehandles. The marker is removed once the tree has been walked.\n\n For subsequent runs, or for a new deployment, we simply ensure that the\n docker nova user/group owns all directories. 
This is required as the\n directories are created with root ownership in host_prep_tasks (the\n docker nova uid/gid is not known in this context).\n """\n def __init__(self, statedir, upgrade_marker=\'upgrade_marker\',\n nova_user=\'nova\'):\n self.statedir = statedir\n self.nova_user = nova_user\n\n self.upgrade_marker_path = os.path.join(statedir, upgrade_marker)\n self.upgrade = os.path.exists(self.upgrade_marker_path)\n\n self.target_uid, self.target_gid = self._get_nova_ids()\n self.previous_uid, self.previous_gid = self._get_previous_nova_ids()\n self.id_change = (self.target_uid, self.target_gid) != \\\n (self.previous_uid, self.previous_gid)\n\n def _get_nova_ids(self):\n nova_uid, nova_gid = pwd.getpwnam(self.nova_user)[2:4]\n return nova_uid, nova_gid\n\n def _get_previous_nova_ids(self):\n if self.upgrade:\n statinfo = os.stat(self.upgrade_marker_path)\n return statinfo.st_uid, statinfo.st_gid\n else:\n return self._get_nova_ids()\n\n def _walk(self, top):\n for f in os.listdir(top):\n pathname = os.path.join(top, f)\n\n if pathname == self.upgrade_marker_path:\n continue\n\n pathinfo = PathManager(pathname)\n LOG.info("Checking %s", pathinfo)\n if pathinfo.is_dir:\n # Always chown the directories\n pathinfo.chown(self.target_uid, self.target_gid)\n self._walk(pathname)\n elif self.id_change:\n # Only chown files if it\'s an upgrade and the file is owned by\n # the host nova uid/gid\n pathinfo.chown(\n self.target_uid if pathinfo.uid == self.previous_uid\n else pathinfo.uid,\n self.target_gid if pathinfo.gid == self.previous_gid\n else pathinfo.gid\n )\n\n def run(self):\n LOG.info(\'Applying nova statedir ownership\')\n LOG.info(\'Target ownership for %s: %d:%d\',\n self.statedir,\n self.target_uid,\n self.target_gid)\n\n pathinfo = PathManager(self.statedir)\n LOG.info("Checking %s", pathinfo)\n pathinfo.chown(self.target_uid, self.target_gid)\n\n self._walk(self.statedir)\n\n if self.upgrade:\n LOG.info(\'Removing upgrade_marker %s\',\n self.upgrade_marker_path)\n os.unlink(self.upgrade_marker_path)\n\n LOG.info(\'Nova statedir ownership complete\')\n\nif __name__ == \'__main__\':\n NovaStatedirOwnershipManager(\'/var/lib/nova\').run()\n', 'mode': u'0700'}, 'key': u'nova_statedir_ownership.py'}) => {"changed": false, "item": {"key": "nova_statedir_ownership.py", "value": {"content": "#!/usr/bin/env python\n#\n# Copyright 2018 Red Hat Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"); you may\n# not use this file except in compliance with the License. You may obtain\n# a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT\n# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the\n# License for the specific language governing permissions and limitations\n# under the License.\nfrom __future__ import print_function\nimport logging\nimport os\nimport pwd\nimport stat\nimport sys\n\nlogging.basicConfig(stream=sys.stdout, level=logging.DEBUG)\nLOG = logging.getLogger('nova_statedir')\n\n\nclass PathManager(object):\n \"\"\"Helper class to manipulate ownership of a given path\"\"\"\n def __init__(self, path):\n self.path = path\n self._update()\n\n def _update(self):\n statinfo = os.stat(self.path)\n self.is_dir = stat.S_ISDIR(statinfo.st_mode)\n self.uid = statinfo.st_uid\n self.gid = statinfo.st_gid\n\n def __str__(self):\n return \"uid: {} gid: {} path: {}{}\".format(\n self.uid,\n self.gid,\n self.path,\n '/' if self.is_dir else ''\n )\n\n def has_owner(self, uid, gid):\n return self.uid == uid and self.gid == gid\n\n def has_either(self, uid, gid):\n return self.uid == uid or self.gid == gid\n\n def chown(self, uid, gid):\n target_uid = -1\n target_gid = -1\n if self.uid != uid:\n target_uid = uid\n if self.gid != gid:\n target_gid = gid\n if (target_uid, target_gid) != (-1, -1):\n LOG.info('Changing ownership of %s from %d:%d to %d:%d',\n self.path,\n self.uid,\n self.gid,\n self.uid if target_uid == -1 else target_uid,\n self.gid if target_gid == -1 else target_gid)\n os.chown(self.path, target_uid, target_gid)\n self._update()\n else:\n LOG.info('Ownership of %s already %d:%d',\n self.path,\n uid,\n gid)\n\n\nclass NovaStatedirOwnershipManager(object):\n \"\"\"Class to manipulate the ownership of the nova statedir (/var/lib/nova).\n\n The nova uid/gid differ on the host and container images. An upgrade\n that switches from host systemd services to docker requires a change in\n ownership. Previously this was a naive recursive chown, however this\n causes issues if nova instance are shared via an NFS mount: any open\n filehandles in qemu/libvirt fail with an I/O error (LP1778465).\n\n Instead the upgrade/FFU ansible tasks now lay down a marker file when\n stopping and disabling the host systemd services. We use this file to\n determine the host nova uid/gid. We then walk the tree and update any\n files that have the host uid/gid to the docker nova uid/gid. As files\n owned by root/qemu etc... are ignored this avoids the issues with open\n filehandles. The marker is removed once the tree has been walked.\n\n For subsequent runs, or for a new deployment, we simply ensure that the\n docker nova user/group owns all directories. 
This is required as the\n directories are created with root ownership in host_prep_tasks (the\n docker nova uid/gid is not known in this context).\n \"\"\"\n def __init__(self, statedir, upgrade_marker='upgrade_marker',\n nova_user='nova'):\n self.statedir = statedir\n self.nova_user = nova_user\n\n self.upgrade_marker_path = os.path.join(statedir, upgrade_marker)\n self.upgrade = os.path.exists(self.upgrade_marker_path)\n\n self.target_uid, self.target_gid = self._get_nova_ids()\n self.previous_uid, self.previous_gid = self._get_previous_nova_ids()\n self.id_change = (self.target_uid, self.target_gid) != \\\n (self.previous_uid, self.previous_gid)\n\n def _get_nova_ids(self):\n nova_uid, nova_gid = pwd.getpwnam(self.nova_user)[2:4]\n return nova_uid, nova_gid\n\n def _get_previous_nova_ids(self):\n if self.upgrade:\n statinfo = os.stat(self.upgrade_marker_path)\n return statinfo.st_uid, statinfo.st_gid\n else:\n return self._get_nova_ids()\n\n def _walk(self, top):\n for f in os.listdir(top):\n pathname = os.path.join(top, f)\n\n if pathname == self.upgrade_marker_path:\n continue\n\n pathinfo = PathManager(pathname)\n LOG.info(\"Checking %s\", pathinfo)\n if pathinfo.is_dir:\n # Always chown the directories\n pathinfo.chown(self.target_uid, self.target_gid)\n self._walk(pathname)\n elif self.id_change:\n # Only chown files if it's an upgrade and the file is owned by\n # the host nova uid/gid\n pathinfo.chown(\n self.target_uid if pathinfo.uid == self.previous_uid\n else pathinfo.uid,\n self.target_gid if pathinfo.gid == self.previous_gid\n else pathinfo.gid\n )\n\n def run(self):\n LOG.info('Applying nova statedir ownership')\n LOG.info('Target ownership for %s: %d:%d',\n self.statedir,\n self.target_uid,\n self.target_gid)\n\n pathinfo = PathManager(self.statedir)\n LOG.info(\"Checking %s\", pathinfo)\n pathinfo.chown(self.target_uid, self.target_gid)\n\n self._walk(self.statedir)\n\n if self.upgrade:\n LOG.info('Removing upgrade_marker %s',\n self.upgrade_marker_path)\n os.unlink(self.upgrade_marker_path)\n\n LOG.info('Nova statedir ownership complete')\n\nif __name__ == '__main__':\n NovaStatedirOwnershipManager('/var/lib/nova').run()\n", "mode": "0700"}}, "skip_reason": "Conditional result was False"} > >TASK [Set docker_config_default fact] ****************************************** >Friday 21 September 2018 08:30:32 -0400 (0:00:00.156) 0:13:54.686 ****** >skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >skipping: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for 
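The nova_statedir_ownership.py script above (recorded twice, once in each serialization) addresses the fact that the nova uid/gid differ between the host and the container image. As its docstring explains, on upgrade a marker file left behind when the host systemd services were stopped records the old host uid/gid; the script then re-owns directories unconditionally but re-owns regular files only if they still carry that old uid/gid, leaving files owned by root or qemu untouched so open NFS filehandles survive (LP1778465), and finally removes the marker. Assuming the old and new ids are already known, the same selective re-own can be sketched with find (a simplification, not the deployed mechanism, which also skips the marker file during its walk):

    #!/bin/bash
    # Selective re-own sketch: directories always move to the container's
    # nova uid/gid; regular files only if still owned by the old host ids.
    # OLD_UID/OLD_GID/NEW_UID/NEW_GID are assumed inputs; the real script
    # derives them from the marker file and pwd.getpwnam('nova').
    statedir=/var/lib/nova
    find "$statedir" -type d -exec chown "${NEW_UID}:${NEW_GID}" {} +
    find "$statedir" -type f -uid "$OLD_UID" -exec chown "$NEW_UID" {} +
    find "$statedir" -type f -gid "$OLD_GID" -exec chgrp "$NEW_GID" {} +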
this result", "changed": false} >skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} > >TASK [Set docker_startup_configs_with_default fact] **************************** >Friday 21 September 2018 08:30:32 -0400 (0:00:00.190) 0:13:54.877 ****** >skipping: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} > >TASK [Write docker-container-startup-configs] ********************************** >Friday 21 September 2018 08:30:32 -0400 (0:00:00.187) 0:13:55.064 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Write per-step docker-container-startup-configs] ************************* >Friday 21 September 2018 08:30:32 -0400 (0:00:00.117) 0:13:55.181 ****** >skipping: 
[compute-0] => (item={'value': {}, 'key': u'step_1'}) => {"changed": false, "item": {"key": "step_1", "value": {}}, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item={'value': {'neutron_ovs_bridge': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-20.1', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'puppet', u'apply', u'--modulepath', u'/etc/puppet/modules:/usr/share/openstack-puppet/modules', u'--tags', u'file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config', u'-v', u'-e', u'include neutron::agents::ml2::ovs'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/etc/puppet:/etc/puppet:ro', u'/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro', u'/var/run/openvswitch/:/var/run/openvswitch/'], 'net': u'host', 'detach': False, 'privileged': True}, 'nova_statedir_owner': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-09-20.1', 'command': u'/docker-config-scripts/nova_statedir_ownership.py', 'user': u'root', 'volumes': [u'/var/lib/nova:/var/lib/nova:shared', u'/var/lib/docker-config-scripts/:/docker-config-scripts/'], 'detach': False, 'privileged': False}, 'iscsid': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', u'/dev/:/dev/', u'/run/:/run/', u'/sys:/sys', u'/lib/modules:/lib/modules:ro', u'/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_libvirt': {'start_order': 1, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-09-20.1', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/nova_libvirt.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/lib/modules:/lib/modules:ro', u'/dev:/dev', u'/run:/run', u'/sys/fs/cgroup:/sys/fs/cgroup', u'/var/lib/nova:/var/lib/nova:shared', u'/etc/libvirt:/etc/libvirt', u'/var/run/libvirt:/var/run/libvirt', u'/var/lib/libvirt:/var/lib/libvirt', u'/var/log/containers/libvirt:/var/log/libvirt', u'/var/log/libvirt/qemu:/var/log/libvirt/qemu:ro', u'/var/lib/vhost_sockets:/var/lib/vhost_sockets', u'/sys/fs/selinux:/sys/fs/selinux'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_virtlogd': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-09-20.1', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/dev:/dev', u'/run:/run', u'/sys/fs/cgroup:/sys/fs/cgroup', u'/var/lib/nova:/var/lib/nova:shared', u'/var/run/libvirt:/var/run/libvirt', u'/var/lib/libvirt:/var/lib/libvirt', u'/etc/libvirt/qemu:/etc/libvirt/qemu:ro', u'/var/log/libvirt/qemu:/var/log/libvirt/qemu'], 'net': u'host', 'privileged': True, 'restart': u'always'}}, 'key': u'step_3'}) => {"changed": false, "item": {"key": "step_3", "value": {"iscsid": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-20.1", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro", "/dev/:/dev/", "/run/:/run/", "/sys:/sys", "/lib/modules:/lib/modules:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro"]}, "neutron_ovs_bridge": {"command": ["puppet", "apply", "--modulepath", "/etc/puppet/modules:/usr/share/openstack-puppet/modules", "--tags", "file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config", "-v", "-e", "include 
neutron::agents::ml2::ovs"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-20.1", "net": "host", "pid": "host", "privileged": true, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/etc/puppet:/etc/puppet:ro", "/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro", "/var/run/openvswitch/:/var/run/openvswitch/"]}, "nova_libvirt": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-09-20.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova_libvirt.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/lib/modules:/lib/modules:ro", "/dev:/dev", "/run:/run", "/sys/fs/cgroup:/sys/fs/cgroup", "/var/lib/nova:/var/lib/nova:shared", "/etc/libvirt:/etc/libvirt", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt", "/var/log/containers/libvirt:/var/log/libvirt", "/var/log/libvirt/qemu:/var/log/libvirt/qemu:ro", "/var/lib/vhost_sockets:/var/lib/vhost_sockets", "/sys/fs/selinux:/sys/fs/selinux"]}, "nova_statedir_owner": {"command": "/docker-config-scripts/nova_statedir_ownership.py", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-09-20.1", "privileged": false, "user": "root", "volumes": ["/var/lib/nova:/var/lib/nova:shared", "/var/lib/docker-config-scripts/:/docker-config-scripts/"]}, "nova_virtlogd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-09-20.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 0, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/dev:/dev", "/run:/run", "/sys/fs/cgroup:/sys/fs/cgroup", "/var/lib/nova:/var/lib/nova:shared", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt", "/etc/libvirt/qemu:/etc/libvirt/qemu:ro", "/var/log/libvirt/qemu:/var/log/libvirt/qemu"]}}}, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item={'value': {}, 'key': u'step_1'}) => {"changed": false, "item": {"key": "step_1", "value": {}}, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item={'value': {}, 'key': u'step_3'}) => {"changed": false, "item": {"key": "step_3", "value": {}}, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item={'value': {}, 'key': u'step_2'}) => {"changed": false, "item": {"key": "step_2", "value": {}}, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item={'value': {}, 'key': u'step_2'}) => {"changed": false, "item": {"key": "step_2", "value": {}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'cinder_volume_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-20.1' '192.168.24.1:8787/rhosp14/openstack-cinder-volume:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'mysql_image_tag': {'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1' '192.168.24.1:8787/rhosp14/openstack-mariadb:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'mysql_data_ownership': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1', 'command': [u'chown', u'-R', u'mysql:', u'/var/lib/mysql'], 'user': u'root', 'volumes': [u'/var/lib/mysql:/var/lib/mysql'], 'net': u'host', 'detach': False}, 'redis_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-redis:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-redis:2018-09-20.1' '192.168.24.1:8787/rhosp14/openstack-redis:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'mysql_bootstrap': {'start_order': 1, 'image': 
u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS', u'KOLLA_BOOTSTRAP=True', u'DB_MAX_TIMEOUT=60', u'DB_CLUSTERCHECK_PASSWORD=01uMEtrcy1XQLgnZ0spBcEeFG', u'DB_ROOT_PASSWORD=VmByi3iDWE'], 'command': [u'bash', u'-ec', u'if [ -e /var/lib/mysql/mysql ]; then exit 0; fi\necho -e "\\n[mysqld]\\nwsrep_provider=none" >> /etc/my.cnf\nkolla_set_configs\nsudo -u mysql -E kolla_extend_start\nmysqld_safe --skip-networking --wsrep-on=OFF &\ntimeout ${DB_MAX_TIMEOUT} /bin/bash -c \'until mysqladmin -uroot -p"${DB_ROOT_PASSWORD}" ping 2>/dev/null; do sleep 1; done\'\nmysql -uroot -p"${DB_ROOT_PASSWORD}" -e "CREATE USER \'clustercheck\'@\'localhost\' IDENTIFIED BY \'${DB_CLUSTERCHECK_PASSWORD}\';"\nmysql -uroot -p"${DB_ROOT_PASSWORD}" -e "GRANT PROCESS ON *.* TO \'clustercheck\'@\'localhost\' WITH GRANT OPTION;"\ntimeout ${DB_MAX_TIMEOUT} mysqladmin -uroot -p"${DB_ROOT_PASSWORD}" shutdown'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/mysql.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro', u'/var/lib/mysql:/var/lib/mysql'], 'net': u'host', 'detach': False}, 'haproxy_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-20.1' '192.168.24.1:8787/rhosp14/openstack-haproxy:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'rabbitmq_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-20.1' '192.168.24.1:8787/rhosp14/openstack-rabbitmq:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'cinder_backup_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-20.1' '192.168.24.1:8787/rhosp14/openstack-cinder-backup:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'rabbitmq_bootstrap': {'start_order': 0, 'image': 
u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS', u'KOLLA_BOOTSTRAP=True', u'RABBITMQ_CLUSTER_COOKIE=bo2CgGlbFlVu6tTAeUPw'], 'volumes': [u'/var/lib/kolla/config_files/rabbitmq.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro', u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/var/lib/rabbitmq:/var/lib/rabbitmq'], 'net': u'host', 'privileged': False}, 'memcached': {'start_order': 0, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-memcached:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'source /etc/sysconfig/memcached; /usr/bin/memcached -p ${PORT} -u ${USER} -m ${CACHESIZE} -c ${MAXCONN} $OPTIONS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}}, 'key': u'step_1'}) => {"changed": false, "item": {"key": "step_1", "value": {"cinder_backup_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-20.1' '192.168.24.1:8787/rhosp14/openstack-cinder-backup:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-20.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "cinder_volume_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-20.1' '192.168.24.1:8787/rhosp14/openstack-cinder-volume:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-20.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "haproxy_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-20.1' '192.168.24.1:8787/rhosp14/openstack-haproxy:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-20.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "memcached": {"command": ["/bin/bash", "-c", "source /etc/sysconfig/memcached; /usr/bin/memcached -p ${PORT} -u ${USER} -m ${CACHESIZE} -c ${MAXCONN} $OPTIONS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": 
"192.168.24.1:8787/rhosp14/openstack-memcached:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "start_order": 0, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro"]}, "mysql_bootstrap": {"command": ["bash", "-ec", "if [ -e /var/lib/mysql/mysql ]; then exit 0; fi\necho -e \"\\n[mysqld]\\nwsrep_provider=none\" >> /etc/my.cnf\nkolla_set_configs\nsudo -u mysql -E kolla_extend_start\nmysqld_safe --skip-networking --wsrep-on=OFF &\ntimeout ${DB_MAX_TIMEOUT} /bin/bash -c 'until mysqladmin -uroot -p\"${DB_ROOT_PASSWORD}\" ping 2>/dev/null; do sleep 1; done'\nmysql -uroot -p\"${DB_ROOT_PASSWORD}\" -e \"CREATE USER 'clustercheck'@'localhost' IDENTIFIED BY '${DB_CLUSTERCHECK_PASSWORD}';\"\nmysql -uroot -p\"${DB_ROOT_PASSWORD}\" -e \"GRANT PROCESS ON *.* TO 'clustercheck'@'localhost' WITH GRANT OPTION;\"\ntimeout ${DB_MAX_TIMEOUT} mysqladmin -uroot -p\"${DB_ROOT_PASSWORD}\" shutdown"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "KOLLA_BOOTSTRAP=True", "DB_MAX_TIMEOUT=60", "DB_CLUSTERCHECK_PASSWORD=01uMEtrcy1XQLgnZ0spBcEeFG", "DB_ROOT_PASSWORD=VmByi3iDWE"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/mysql.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro", "/var/lib/mysql:/var/lib/mysql"]}, "mysql_data_ownership": {"command": ["chown", "-R", "mysql:", "/var/lib/mysql"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1", "net": "host", "start_order": 0, "user": "root", "volumes": ["/var/lib/mysql:/var/lib/mysql"]}, "mysql_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1' '192.168.24.1:8787/rhosp14/openstack-mariadb:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "rabbitmq_bootstrap": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "KOLLA_BOOTSTRAP=True", "RABBITMQ_CLUSTER_COOKIE=bo2CgGlbFlVu6tTAeUPw"], "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-20.1", "net": 
"host", "privileged": false, "start_order": 0, "volumes": ["/var/lib/kolla/config_files/rabbitmq.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro", "/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/var/lib/rabbitmq:/var/lib/rabbitmq"]}, "rabbitmq_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-20.1' '192.168.24.1:8787/rhosp14/openstack-rabbitmq:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-20.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "redis_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-redis:2018-09-20.1' '192.168.24.1:8787/rhosp14/openstack-redis:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-09-20.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}}}, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item={'value': {}, 'key': u'step_5'}) => {"changed": false, "item": {"key": "step_5", "value": {}}, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item={'value': {}, 'key': u'step_5'}) => {"changed": false, "item": {"key": "step_5", "value": {}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'nova_placement': {'start_order': 1, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-placement:/var/log/httpd', u'/var/lib/kolla/config_files/nova_placement.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_placement/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'restart': u'always'}, 'swift_rsync_fix': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'sed -i "/pid file/d" /var/lib/kolla/config_files/src/etc/rsyncd.conf'], 'user': u'root', 'volumes': [u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:rw'], 'net': u'host', 'detach': False}, 'nova_db_sync': {'start_order': 3, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1', 'command': u"/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage db 
sync'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro'], 'net': u'host', 'detach': False}, 'heat_engine_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-09-20.1', 'command': u"/usr/bin/bootstrap_host_exec heat_engine su heat -s /bin/bash -c 'heat-manage db_sync'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/lib/config-data/heat/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/heat/etc/heat/:/etc/heat/:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'swift_copy_rings': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-20.1', 'detach': False, 'command': [u'/bin/bash', u'-c', u'cp -v -a -t /etc/swift /swift_ringbuilder/etc/swift/*.gz /swift_ringbuilder/etc/swift/*.builder /swift_ringbuilder/etc/swift/backups'], 'user': u'root', 'volumes': [u'/var/lib/config-data/puppet-generated/swift/etc/swift:/etc/swift:rw', u'/var/lib/config-data/swift_ringbuilder:/swift_ringbuilder:ro']}, 'nova_api_ensure_default_cell': {'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1', 'command': u'/usr/bin/bootstrap_host_exec nova_api /nova_api_ensure_default_cell.sh', 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/docker-config-scripts/nova_api_ensure_default_cell.sh:/nova_api_ensure_default_cell.sh:ro'], 
'net': u'host', 'detach': False}, 'keystone_cron': {'start_order': 4, 'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'/bin/bash', u'-c', u'/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd', u'/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'panko_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-panko-api:2018-09-20.1', 'command': u"/usr/bin/bootstrap_host_exec panko_api su panko -s /bin/bash -c '/usr/bin/panko-dbsync '", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/panko:/var/log/panko', u'/var/log/containers/httpd/panko-api:/var/log/httpd', u'/var/lib/config-data/panko/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/panko/etc/panko:/etc/panko:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'cinder_backup_init_logs': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'user': u'root', 'volumes': [u'/var/log/containers/cinder:/var/log/cinder'], 'privileged': False}, 'nova_api_db_sync': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1', 'command': u"/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage api_db sync'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro'], 'net': u'host', 
'detach': False}, 'iscsid': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', u'/dev/:/dev/', u'/run/:/run/', u'/sys:/sys', u'/lib/modules:/lib/modules:ro', u'/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'keystone_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-20.1', 'environment': [u'KOLLA_BOOTSTRAP=True', u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'/usr/bin/bootstrap_host_exec', u'keystone', u'/usr/local/bin/kolla_start'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd', u'/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'detach': False, 'privileged': False}, 'ceilometer_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'start_order': 0, 'volumes': [u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'user': u'root'}, 'keystone': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd', u'/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 
'privileged': False, 'restart': u'always'}, 'aodh_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-09-20.1', 'command': u'/usr/bin/bootstrap_host_exec aodh_api su aodh -s /bin/bash -c /usr/bin/aodh-dbsync', 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/aodh/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/aodh/etc/aodh/:/etc/aodh/:ro', u'/var/log/containers/aodh:/var/log/aodh', u'/var/log/containers/httpd/aodh-api:/var/log/httpd'], 'net': u'host', 'detach': False, 'privileged': False}, 'cinder_volume_init_logs': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'user': u'root', 'volumes': [u'/var/log/containers/cinder:/var/log/cinder'], 'privileged': False}, 'neutron_ovs_bridge': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-20.1', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'puppet', u'apply', u'--modulepath', u'/etc/puppet/modules:/usr/share/openstack-puppet/modules', u'--tags', u'file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config', u'-v', u'-e', u'include neutron::agents::ml2::ovs'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/etc/puppet:/etc/puppet:ro', u'/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro', u'/var/run/openvswitch/:/var/run/openvswitch/'], 'net': u'host', 'detach': False, 'privileged': True}, 'cinder_api_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-20.1', 'command': [u'/usr/bin/bootstrap_host_exec', u'cinder_api', u"su cinder -s /bin/bash -c 'cinder-manage db sync --bump-versions'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', 
u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/cinder/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro', u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd'], 'net': u'host', 'detach': False, 'privileged': False}, 'nova_api_map_cell0': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1', 'command': u"/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage cell_v2 map_cell0'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro'], 'net': u'host', 'detach': False}, 'glance_api_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-glance-api:2018-09-20.1', 'environment': [u'KOLLA_BOOTSTRAP=True', u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': u"/usr/bin/bootstrap_host_exec glance_api su glance -s /bin/bash -c '/usr/local/bin/kolla_start'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/glance:/var/log/glance', u'/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/glance:/var/lib/glance:slave'], 'net': u'host', 'detach': False, 'privileged': False}, 'neutron_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-20.1', 'command': [u'/usr/bin/bootstrap_host_exec', u'neutron_api', u'neutron-db-manage', u'upgrade', u'heads'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/log/containers/httpd/neutron-api:/var/log/httpd', 
u'/var/lib/config-data/neutron/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/neutron/etc/neutron:/etc/neutron:ro', u'/var/lib/config-data/neutron/usr/share/neutron:/usr/share/neutron:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'sahara_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-09-20.1', 'command': u"/usr/bin/bootstrap_host_exec sahara_api su sahara -s /bin/bash -c 'sahara-db-manage --config-file /etc/sahara/sahara.conf upgrade head'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/sahara/etc/sahara/:/etc/sahara/:ro', u'/lib/modules:/lib/modules:ro', u'/var/lib/sahara:/var/lib/sahara', u'/var/log/containers/sahara:/var/log/sahara'], 'net': u'host', 'detach': False, 'privileged': False}, 'keystone_bootstrap': {'action': u'exec', 'start_order': 3, 'command': [u'keystone', u'/usr/bin/bootstrap_host_exec', u'keystone', u'keystone-manage', u'bootstrap', u'--bootstrap-password', u'wIdMrXYZVQy05wYJArw8Vja2H'], 'user': u'root'}, 'horizon': {'image': u'192.168.24.1:8787/rhosp14/openstack-horizon:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS', u'ENABLE_IRONIC=yes', u'ENABLE_MANILA=yes', u'ENABLE_HEAT=yes', u'ENABLE_MISTRAL=yes', u'ENABLE_OCTAVIA=yes', u'ENABLE_SAHARA=yes', u'ENABLE_CLOUDKITTY=no', u'ENABLE_FREEZER=no', u'ENABLE_FWAAS=no', u'ENABLE_KARBOR=no', u'ENABLE_DESIGNATE=no', u'ENABLE_MAGNUM=no', u'ENABLE_MURANO=no', u'ENABLE_NEUTRON_LBAAS=no', u'ENABLE_SEARCHLIGHT=no', u'ENABLE_SENLIN=no', u'ENABLE_SOLUM=no', u'ENABLE_TACKER=no', u'ENABLE_TROVE=no', u'ENABLE_WATCHER=no', u'ENABLE_ZAQAR=no', u'ENABLE_ZUN=no'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/horizon/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/horizon:/var/log/horizon', u'/var/log/containers/httpd/horizon:/var/log/httpd', u'/var/www/:/var/www/:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_setup_srv': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-20.1', 'command': [u'chown', u'-R', u'swift:', u'/srv/node'], 'user': u'root', 'volumes': [u'/srv/node:/srv/node']}}, 'key': u'step_3'}) => {"changed": false, "item": {"key": "step_3", "value": {"aodh_db_sync": {"command": "/usr/bin/bootstrap_host_exec aodh_api su aodh -s /bin/bash -c /usr/bin/aodh-dbsync", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-09-20.1", 
"net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/aodh/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/aodh/etc/aodh/:/etc/aodh/:ro", "/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd"]}, "ceilometer_init_log": {"command": ["/bin/bash", "-c", "chown -R ceilometer:ceilometer /var/log/ceilometer"], "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-09-20.1", "start_order": 0, "user": "root", "volumes": ["/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_api_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_api", "su cinder -s /bin/bash -c 'cinder-manage db sync --bump-versions'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-20.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/cinder/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_backup_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-20.1", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "cinder_volume_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-20.1", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "glance_api_db_sync": {"command": "/usr/bin/bootstrap_host_exec glance_api su glance -s /bin/bash -c '/usr/local/bin/kolla_start'", "detach": false, "environment": ["KOLLA_BOOTSTRAP=True", "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-glance-api:2018-09-20.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", 
"/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/glance:/var/log/glance", "/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/glance:/var/lib/glance:slave"]}, "heat_engine_db_sync": {"command": "/usr/bin/bootstrap_host_exec heat_engine su heat -s /bin/bash -c 'heat-manage db_sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-09-20.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/lib/config-data/heat/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/heat/etc/heat/:/etc/heat/:ro"]}, "horizon": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "ENABLE_IRONIC=yes", "ENABLE_MANILA=yes", "ENABLE_HEAT=yes", "ENABLE_MISTRAL=yes", "ENABLE_OCTAVIA=yes", "ENABLE_SAHARA=yes", "ENABLE_CLOUDKITTY=no", "ENABLE_FREEZER=no", "ENABLE_FWAAS=no", "ENABLE_KARBOR=no", "ENABLE_DESIGNATE=no", "ENABLE_MAGNUM=no", "ENABLE_MURANO=no", "ENABLE_NEUTRON_LBAAS=no", "ENABLE_SEARCHLIGHT=no", "ENABLE_SENLIN=no", "ENABLE_SOLUM=no", "ENABLE_TACKER=no", "ENABLE_TROVE=no", "ENABLE_WATCHER=no", "ENABLE_ZAQAR=no", "ENABLE_ZUN=no"], "image": "192.168.24.1:8787/rhosp14/openstack-horizon:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/horizon/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/horizon:/var/log/horizon", "/var/log/containers/httpd/horizon:/var/log/httpd", "/var/www/:/var/www/:ro", "", ""]}, "iscsid": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-20.1", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", 
"/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro", "/dev/:/dev/", "/run/:/run/", "/sys:/sys", "/lib/modules:/lib/modules:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro"]}, "keystone": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro", "", ""]}, "keystone_bootstrap": {"action": "exec", "command": ["keystone", "/usr/bin/bootstrap_host_exec", "keystone", "keystone-manage", "bootstrap", "--bootstrap-password", "wIdMrXYZVQy05wYJArw8Vja2H"], "start_order": 3, "user": "root"}, "keystone_cron": {"command": ["/bin/bash", "-c", "/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n"], "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "start_order": 4, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro"]}, "keystone_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "keystone", "/usr/local/bin/kolla_start"], "detach": false, "environment": ["KOLLA_BOOTSTRAP=True", "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-20.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", 
"/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro", "", ""]}, "neutron_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "neutron_api", "neutron-db-manage", "upgrade", "heads"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-20.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd", "/var/lib/config-data/neutron/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/neutron/etc/neutron:/etc/neutron:ro", "/var/lib/config-data/neutron/usr/share/neutron:/usr/share/neutron:ro"]}, "neutron_ovs_bridge": {"command": ["puppet", "apply", "--modulepath", "/etc/puppet/modules:/usr/share/openstack-puppet/modules", "--tags", "file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config", "-v", "-e", "include neutron::agents::ml2::ovs"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-20.1", "net": "host", "pid": "host", "privileged": true, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/etc/puppet:/etc/puppet:ro", "/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro", "/var/run/openvswitch/:/var/run/openvswitch/"]}, "nova_api_db_sync": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage api_db sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", 
"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_api_ensure_default_cell": {"command": "/usr/bin/bootstrap_host_exec nova_api /nova_api_ensure_default_cell.sh", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/docker-config-scripts/nova_api_ensure_default_cell.sh:/nova_api_ensure_default_cell.sh:ro"]}, "nova_api_map_cell0": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage cell_v2 map_cell0'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_db_sync": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage db sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1", "net": "host", "start_order": 3, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", 
"/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_placement": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-09-20.1", "net": "host", "restart": "always", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-placement:/var/log/httpd", "/var/lib/kolla/config_files/nova_placement.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_placement/:/var/lib/kolla/config_files/src:ro", "", ""]}, "panko_db_sync": {"command": "/usr/bin/bootstrap_host_exec panko_api su panko -s /bin/bash -c '/usr/bin/panko-dbsync '", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-09-20.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd", "/var/lib/config-data/panko/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/panko/etc/panko:/etc/panko:ro"]}, "sahara_db_sync": {"command": "/usr/bin/bootstrap_host_exec sahara_api su sahara -s /bin/bash -c 'sahara-db-manage --config-file /etc/sahara/sahara.conf upgrade head'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-09-20.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/sahara/etc/sahara/:/etc/sahara/:ro", "/lib/modules:/lib/modules:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "swift_copy_rings": {"command": ["/bin/bash", "-c", "cp -v -a -t /etc/swift /swift_ringbuilder/etc/swift/*.gz /swift_ringbuilder/etc/swift/*.builder /swift_ringbuilder/etc/swift/backups"], "detach": false, "image": 
"192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-20.1", "user": "root", "volumes": ["/var/lib/config-data/puppet-generated/swift/etc/swift:/etc/swift:rw", "/var/lib/config-data/swift_ringbuilder:/swift_ringbuilder:ro"]}, "swift_rsync_fix": {"command": ["/bin/bash", "-c", "sed -i \"/pid file/d\" /var/lib/kolla/config_files/src/etc/rsyncd.conf"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-20.1", "net": "host", "user": "root", "volumes": ["/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:rw"]}, "swift_setup_srv": {"command": ["chown", "-R", "swift:", "/srv/node"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-20.1", "user": "root", "volumes": ["/srv/node:/srv/node"]}}}, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item={'value': {'ceilometer_agent_compute': {'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-compute:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro', u'/var/run/libvirt:/var/run/libvirt:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_libvirt_init_secret': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u"/usr/bin/virsh secret-define --file /etc/nova/secret.xml && /usr/bin/virsh secret-set-value --secret '8fedf068-bd95-11e8-ba69-5254006eda59' --base64 'AQC93KRbAAAAABAA70vXmXELJWdqPtg4IeQHzw=='"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova:ro', u'/etc/libvirt:/etc/libvirt', u'/var/run/libvirt:/var/run/libvirt', u'/var/lib/libvirt:/var/lib/libvirt'], 'detach': False, 'privileged': False}, 'neutron_ovs_agent': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-09-20.1', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_migration_target': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/etc/ssh/:/host-ssh/:ro', u'/run:/run', u'/var/lib/nova:/var/lib/nova:shared'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_compute': {'ipc': u'host', 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'nova', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/dev:/dev', u'/lib/modules:/lib/modules:ro', u'/run:/run', u'/var/lib/nova:/var/lib/nova:shared', u'/var/lib/libvirt:/var/lib/libvirt', u'/sys/class/net:/sys/class/net', u'/sys/bus/pci:/sys/bus/pci'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'logrotate_crond': {'image': u'192.168.24.1:8787/rhosp14/openstack-cron:2018-09-20.1', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', 
u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers:/var/log/containers'], 'net': u'none', 'privileged': True, 'restart': u'always'}}, 'key': u'step_4'}) => {"changed": false, "item": {"key": "step_4", "value": {"ceilometer_agent_compute": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-compute:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/run/libvirt:/var/run/libvirt:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-09-20.1", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}, "neutron_ovs_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-09-20.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", 
"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch"]}, "nova_compute": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-09-20.1", "ipc": "host", "net": "host", "privileged": true, "restart": "always", "ulimit": ["nofile=1024"], "user": "nova", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/dev:/dev", "/lib/modules:/lib/modules:ro", "/run:/run", "/var/lib/nova:/var/lib/nova:shared", "/var/lib/libvirt:/var/lib/libvirt", "/sys/class/net:/sys/class/net", "/sys/bus/pci:/sys/bus/pci"]}, "nova_libvirt_init_secret": {"command": ["/bin/bash", "-c", "/usr/bin/virsh secret-define --file /etc/nova/secret.xml && /usr/bin/virsh secret-set-value --secret '8fedf068-bd95-11e8-ba69-5254006eda59' --base64 'AQC93KRbAAAAABAA70vXmXELJWdqPtg4IeQHzw=='"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-09-20.1", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova:ro", "/etc/libvirt:/etc/libvirt", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt"]}, "nova_migration_target": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-09-20.1", "net": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/ssh/:/host-ssh/:ro", "/run:/run", "/var/lib/nova:/var/lib/nova:shared"]}}}, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item={'value': {'logrotate_crond': {'image': u'192.168.24.1:8787/rhosp14/openstack-cron:2018-09-20.1', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers:/var/log/containers'], 'net': u'none', 'privileged': True, 'restart': u'always'}}, 'key': u'step_4'}) => {"changed": false, "item": {"key": "step_4", "value": {"logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-09-20.1", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}}}, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item={'value': {}, 'key': u'step_6'}) => {"changed": false, "item": {"key": "step_6", "value": {}}, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item={'value': {}, 'key': u'step_6'}) => {"changed": false, "item": {"key": "step_6", "value": {}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'gnocchi_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'chown -R gnocchi:gnocchi /var/log/gnocchi'], 'user': u'root', 'volumes': [u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/var/log/containers/httpd/gnocchi-api:/var/log/httpd']}, 'mysql_init_bundle': {'start_order': 1, 'image': 
u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1537531337'], 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,galera_ready,mysql_database,mysql_grant,mysql_user', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::mysql_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/mysql:/var/lib/mysql:rw'], 'net': u'host', 'detach': False}, 'gnocchi_init_lib': {'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'chown -R gnocchi:gnocchi /var/lib/gnocchi'], 'user': u'root', 'volumes': [u'/var/lib/gnocchi:/var/lib/gnocchi']}, 'cinder_api_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'privileged': False, 'volumes': [u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd'], 'user': u'root'}, 'create_dnsmasq_wrapper': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-09-20.1', 'pid': u'host', 'command': [u'/docker_puppet_apply.sh', u'4', u'file', u'include ::tripleo::profile::base::neutron::dhcp_agent_wrappers'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron'], 'net': u'host', 'detach': False}, 'panko_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-panko-api:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'chown -R panko:panko /var/log/panko'], 'user': u'root', 'volumes': [u'/var/log/containers/panko:/var/log/panko', u'/var/log/containers/httpd/panko-api:/var/log/httpd']}, 'redis_init_bundle': {'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-redis:2018-09-20.1', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1537531337'], 'config_volume': u'redis_init_bundle', 'command': [u'/docker_puppet_apply.sh', u'2', 
u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::redis_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False}, 'cinder_scheduler_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'privileged': False, 'volumes': [u'/var/log/containers/cinder:/var/log/cinder'], 'user': u'root'}, 'glance_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-glance-api:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'chown -R glance:glance /var/log/glance'], 'privileged': False, 'volumes': [u'/var/log/containers/glance:/var/log/glance'], 'user': u'root'}, 'clustercheck': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/clustercheck/:/var/lib/kolla/config_files/src:ro', u'/var/lib/mysql:/var/lib/mysql'], 'net': u'host', 'restart': u'always'}, 'haproxy_init_bundle': {'start_order': 3, 'image': u'192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-20.1', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1537531337'], 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,tripleo::firewall::rule,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ip,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation', u'include ::tripleo::profile::base::pacemaker; include ::tripleo::profile::pacemaker::haproxy_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro', u'/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro', u'/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro', u'/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro', u'/etc/sysconfig:/etc/sysconfig:rw', u'/usr/libexec/iptables:/usr/libexec/iptables:ro', u'/usr/libexec/initscripts/legacy-actions:/usr/libexec/initscripts/legacy-actions:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False, 'privileged': True}, 'neutron_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'chown -R neutron:neutron /var/log/neutron'], 'privileged': False, 'volumes': [u'/var/log/containers/neutron:/var/log/neutron', u'/var/log/containers/httpd/neutron-api:/var/log/httpd'], 'user': u'root'}, 'mysql_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1', 'config_volume': u'mysql', 'command': [u'/usr/bin/bootstrap_host_exec', u'mysql', u'if /usr/sbin/pcs resource show galera-bundle; then /usr/sbin/pcs resource restart --wait=600 galera-bundle; echo "galera-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'rabbitmq_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-20.1', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1537531337'], 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,rabbitmq_policy,rabbitmq_user,rabbitmq_ready', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::rabbitmq_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', 
u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/bin/true:/bin/epmd'], 'net': u'host', 'detach': False}, 'nova_api_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'chown -R nova:nova /var/log/nova'], 'privileged': False, 'volumes': [u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd'], 'user': u'root'}, 'haproxy_restart_bundle': {'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-20.1', 'config_volume': u'haproxy', 'command': [u'/usr/bin/bootstrap_host_exec', u'haproxy', u'if /usr/sbin/pcs resource show haproxy-bundle; then /usr/sbin/pcs resource restart --wait=600 haproxy-bundle; echo "haproxy-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/haproxy/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'create_keepalived_wrapper': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-09-20.1', 'pid': u'host', 'command': [u'/docker_puppet_apply.sh', u'4', u'file', u'include ::tripleo::profile::base::neutron::l3_agent_wrappers'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron'], 'net': u'host', 'detach': False}, 'rabbitmq_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-20.1', 'config_volume': u'rabbitmq', 'command': [u'/usr/bin/bootstrap_host_exec', u'rabbitmq', u'if /usr/sbin/pcs resource show rabbitmq-bundle; then /usr/sbin/pcs resource restart --wait=600 rabbitmq-bundle; echo "rabbitmq-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'horizon_fix_perms': {'image': u'192.168.24.1:8787/rhosp14/openstack-horizon:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'touch /var/log/horizon/horizon.log && chown -R apache:apache /var/log/horizon && chmod -R a+rx /etc/openstack-dashboard'], 'user': u'root', 'volumes': [u'/var/log/containers/horizon:/var/log/horizon', u'/var/log/containers/httpd/horizon:/var/log/httpd', u'/var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard:/etc/openstack-dashboard']}, 'aodh_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'chown -R aodh:aodh /var/log/aodh'], 'user': u'root', 'volumes': [u'/var/log/containers/aodh:/var/log/aodh', u'/var/log/containers/httpd/aodh-api:/var/log/httpd']}, 'nova_metadata_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'chown -R nova:nova /var/log/nova'], 'privileged': False, 'volumes': [u'/var/log/containers/nova:/var/log/nova'], 'user': u'root'}, 'redis_restart_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-redis:2018-09-20.1', 'config_volume': u'redis', 'command': [u'/usr/bin/bootstrap_host_exec', u'redis', u'if /usr/sbin/pcs resource show redis-bundle; then /usr/sbin/pcs resource restart --wait=600 redis-bundle; echo "redis-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/redis/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'heat_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'chown -R heat:heat /var/log/heat'], 'user': u'root', 'volumes': [u'/var/log/containers/heat:/var/log/heat']}, 'nova_placement_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'chown -R nova:nova /var/log/nova'], 'start_order': 1, 'volumes': [u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-placement:/var/log/httpd'], 'user': u'root'}, 'keystone_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'chown -R keystone:keystone /var/log/keystone'], 'start_order': 1, 'volumes': [u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd'], 'user': u'root'}}, 'key': u'step_2'}) => {"changed": false, "item": {"key": "step_2", "value": {"aodh_init_log": {"command": ["/bin/bash", "-c", "chown -R aodh:aodh /var/log/aodh"], "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-09-20.1", "user": "root", "volumes": ["/var/log/containers/aodh:/var/log/aodh", 
"/var/log/containers/httpd/aodh-api:/var/log/httpd"]}, "cinder_api_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-20.1", "privileged": false, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_scheduler_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-09-20.1", "privileged": false, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "clustercheck": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1", "net": "host", "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/clustercheck/:/var/lib/kolla/config_files/src:ro", "/var/lib/mysql:/var/lib/mysql"]}, "create_dnsmasq_wrapper": {"command": ["/docker_puppet_apply.sh", "4", "file", "include ::tripleo::profile::base::neutron::dhcp_agent_wrappers"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-09-20.1", "net": "host", "pid": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron"]}, "create_keepalived_wrapper": {"command": ["/docker_puppet_apply.sh", "4", "file", "include ::tripleo::profile::base::neutron::l3_agent_wrappers"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-09-20.1", "net": "host", "pid": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", 
"/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron"]}, "glance_init_logs": {"command": ["/bin/bash", "-c", "chown -R glance:glance /var/log/glance"], "image": "192.168.24.1:8787/rhosp14/openstack-glance-api:2018-09-20.1", "privileged": false, "user": "root", "volumes": ["/var/log/containers/glance:/var/log/glance"]}, "gnocchi_init_lib": {"command": ["/bin/bash", "-c", "chown -R gnocchi:gnocchi /var/lib/gnocchi"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-20.1", "user": "root", "volumes": ["/var/lib/gnocchi:/var/lib/gnocchi"]}, "gnocchi_init_log": {"command": ["/bin/bash", "-c", "chown -R gnocchi:gnocchi /var/log/gnocchi"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-20.1", "user": "root", "volumes": ["/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd"]}, "haproxy_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,tripleo::firewall::rule,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ip,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation", "include ::tripleo::profile::base::pacemaker; include ::tripleo::profile::pacemaker::haproxy_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1537531337"], "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-20.1", "net": "host", "privileged": true, "start_order": 3, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro", "/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro", "/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro", "/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro", "/etc/sysconfig:/etc/sysconfig:rw", "/usr/libexec/iptables:/usr/libexec/iptables:ro", "/usr/libexec/initscripts/legacy-actions:/usr/libexec/initscripts/legacy-actions:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "haproxy_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "haproxy", "if /usr/sbin/pcs resource show haproxy-bundle; then /usr/sbin/pcs resource restart --wait=600 haproxy-bundle; echo \"haproxy-bundle restart invoked\"; fi"], "config_volume": "haproxy", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-20.1", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", 
"/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/haproxy/:/var/lib/kolla/config_files/src:ro"]}, "heat_init_log": {"command": ["/bin/bash", "-c", "chown -R heat:heat /var/log/heat"], "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-09-20.1", "user": "root", "volumes": ["/var/log/containers/heat:/var/log/heat"]}, "horizon_fix_perms": {"command": ["/bin/bash", "-c", "touch /var/log/horizon/horizon.log && chown -R apache:apache /var/log/horizon && chmod -R a+rx /etc/openstack-dashboard"], "image": "192.168.24.1:8787/rhosp14/openstack-horizon:2018-09-20.1", "user": "root", "volumes": ["/var/log/containers/horizon:/var/log/horizon", "/var/log/containers/httpd/horizon:/var/log/httpd", "/var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard:/etc/openstack-dashboard"]}, "keystone_init_log": {"command": ["/bin/bash", "-c", "chown -R keystone:keystone /var/log/keystone"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-20.1", "start_order": 1, "user": "root", "volumes": ["/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd"]}, "mysql_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,galera_ready,mysql_database,mysql_grant,mysql_user", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::mysql_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1537531337"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/mysql:/var/lib/mysql:rw"]}, "mysql_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "mysql", "if /usr/sbin/pcs resource show galera-bundle; then /usr/sbin/pcs resource restart --wait=600 galera-bundle; echo \"galera-bundle restart invoked\"; fi"], "config_volume": "mysql", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", 
"/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro"]}, "neutron_init_logs": {"command": ["/bin/bash", "-c", "chown -R neutron:neutron /var/log/neutron"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-20.1", "privileged": false, "user": "root", "volumes": ["/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd"]}, "nova_api_init_logs": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1", "privileged": false, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd"]}, "nova_metadata_init_log": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1", "privileged": false, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova"]}, "nova_placement_init_log": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-09-20.1", "start_order": 1, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-placement:/var/log/httpd"]}, "panko_init_log": {"command": ["/bin/bash", "-c", "chown -R panko:panko /var/log/panko"], "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-09-20.1", "user": "root", "volumes": ["/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd"]}, "rabbitmq_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,rabbitmq_policy,rabbitmq_user,rabbitmq_ready", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::rabbitmq_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1537531337"], "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-20.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/bin/true:/bin/epmd"]}, "rabbitmq_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "rabbitmq", "if /usr/sbin/pcs resource show rabbitmq-bundle; then /usr/sbin/pcs resource restart --wait=600 rabbitmq-bundle; echo \"rabbitmq-bundle restart invoked\"; fi"], "config_volume": "rabbitmq", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-20.1", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", 
"/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro"]}, "redis_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::redis_bundle", "--debug"], "config_volume": "redis_init_bundle", "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1537531337"], "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-09-20.1", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "redis_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "redis", "if /usr/sbin/pcs resource show redis-bundle; then /usr/sbin/pcs resource restart --wait=600 redis-bundle; echo \"redis-bundle restart invoked\"; fi"], "config_volume": "redis", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-09-20.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/redis/:/var/lib/kolla/config_files/src:ro"]}}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'cinder_volume_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-20.1', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1537531337'], 'command': [u'/docker_puppet_apply.sh', u'5', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::volume_bundle', 
u'--debug --verbose'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False}, 'gnocchi_api': {'start_order': 1, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/gnocchi:/var/lib/gnocchi', u'/var/lib/kolla/config_files/gnocchi_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/var/log/containers/httpd/gnocchi-api:/var/log/httpd', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'gnocchi_statsd': {'start_order': 1, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-statsd:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/gnocchi_statsd.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/gnocchi:/var/lib/gnocchi'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'cinder_backup_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-20.1', 'config_volume': u'cinder', 'command': [u'/usr/bin/bootstrap_host_exec', u'cinder_backup', u'if /usr/sbin/pcs resource show openstack-cinder-backup; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-backup; echo "openstack-cinder-backup restart invoked"; fi'], 'user': 
u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'gnocchi_metricd': {'start_order': 1, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-metricd:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/gnocchi_metricd.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/gnocchi:/var/lib/gnocchi'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_api_discover_hosts': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1537531337'], 'command': u'/usr/bin/bootstrap_host_exec nova_api /nova_api_discover_hosts.sh', 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/docker-config-scripts/nova_api_discover_hosts.sh:/nova_api_discover_hosts.sh:ro'], 'net': u'host', 'detach': False}, 'ceilometer_gnocchi_upgrade': {'start_order': 99, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-20.1', 'command': [u'/usr/bin/bootstrap_host_exec', u'ceilometer_agent_central', u"su ceilometer -s /bin/bash -c 'for n in {1..10}; do 
/usr/bin/ceilometer-upgrade --skip-metering-database && exit 0 || sleep 30; done; exit 1'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/ceilometer/etc/ceilometer/:/etc/ceilometer/:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'detach': False, 'privileged': False}, 'cinder_volume_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-20.1', 'config_volume': u'cinder', 'command': [u'/usr/bin/bootstrap_host_exec', u'cinder_volume', u'if /usr/sbin/pcs resource show openstack-cinder-volume; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-volume; echo "openstack-cinder-volume restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'cinder_backup_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-20.1', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1537531337'], 'command': [u'/docker_puppet_apply.sh', u'5', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::backup_bundle', u'--debug --verbose'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False}, 'gnocchi_db_sync': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', 
u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/gnocchi_db_sync.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/lib/gnocchi:/var/lib/gnocchi', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/var/log/containers/httpd/gnocchi-api:/var/log/httpd', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro'], 'net': u'host', 'detach': False, 'privileged': False}}, 'key': u'step_5'}) => {"changed": false, "item": {"key": "step_5", "value": {"ceilometer_gnocchi_upgrade": {"command": ["/usr/bin/bootstrap_host_exec", "ceilometer_agent_central", "su ceilometer -s /bin/bash -c 'for n in {1..10}; do /usr/bin/ceilometer-upgrade --skip-metering-database && exit 0 || sleep 30; done; exit 1'"], "detach": false, "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-20.1", "net": "host", "privileged": false, "start_order": 99, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/ceilometer/etc/ceilometer/:/etc/ceilometer/:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_backup_init_bundle": {"command": ["/docker_puppet_apply.sh", "5", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::backup_bundle", "--debug --verbose"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1537531337"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-20.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "cinder_backup_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_backup", "if /usr/sbin/pcs resource show openstack-cinder-backup; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-backup; echo 
\"openstack-cinder-backup restart invoked\"; fi"], "config_volume": "cinder", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-20.1", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro"]}, "cinder_volume_init_bundle": {"command": ["/docker_puppet_apply.sh", "5", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::volume_bundle", "--debug --verbose"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1537531337"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-20.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "cinder_volume_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_volume", "if /usr/sbin/pcs resource show openstack-cinder-volume; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-volume; echo \"openstack-cinder-volume restart invoked\"; fi"], "config_volume": "cinder", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-20.1", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro"]}, "gnocchi_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", 
"start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/gnocchi:/var/lib/gnocchi", "/var/lib/kolla/config_files/gnocchi_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "", ""]}, "gnocchi_db_sync": {"detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-20.1", "net": "host", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_db_sync.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/lib/gnocchi:/var/lib/gnocchi", "/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro"]}, "gnocchi_metricd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-metricd:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_metricd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/gnocchi:/var/lib/gnocchi"]}, "gnocchi_statsd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-statsd:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", 
"/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_statsd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/gnocchi:/var/lib/gnocchi"]}, "nova_api_discover_hosts": {"command": "/usr/bin/bootstrap_host_exec nova_api /nova_api_discover_hosts.sh", "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1537531337"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/docker-config-scripts/nova_api_discover_hosts.sh:/nova_api_discover_hosts.sh:ro"]}}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'swift_container_updater': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_updater.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'aodh_evaluator': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-evaluator:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_evaluator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_scheduler': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-scheduler:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro', u'/run:/run'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_object_server': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'cinder_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_proxy': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/run:/run', u'/srv/node:/srv/node', u'/dev:/dev'], 'net': u'host', 'restart': u'always'}, 'neutron_dhcp': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-09-20.1', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron', u'/run/netns:/run/netns:shared', u'/var/lib/openstack:/var/lib/openstack', u'/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', u'/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'heat_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/log/containers/heat:/var/log/heat', u'/var/log/containers/httpd/heat-api:/var/log/httpd', u'/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_object_auditor': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_auditor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'neutron_metadata_agent': {'start_order': 10, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-metadata-agent:2018-09-20.1', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/var/lib/neutron:/var/lib/neutron'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'ceilometer_agent_central': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/ceilometer_agent_central.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'privileged': False, 'restart': 
u'always'}, 'keystone_refresh': {'action': u'exec', 'start_order': 1, 'command': [u'keystone', u'pkill', u'--signal', u'USR1', u'httpd'], 'user': u'root'}, 'swift_account_replicator': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_replicator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'aodh_notifier': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-notifier:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_notifier.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_api_cron': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_consoleauth': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-consoleauth:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', 
u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_consoleauth.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'glance_api': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-glance-api:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/glance:/var/log/glance', u'/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/glance:/var/lib/glance:slave'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_account_reaper': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_reaper.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'ceilometer_agent_notification': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', 
u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/ceilometer_agent_notification.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro', u'/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src-panko:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_vnc_proxy': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-novncproxy:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_rsync': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_rsync.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_api': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': 
True, 'restart': u'always'}, 'aodh_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh', u'/var/log/containers/httpd/aodh-api:/var/log/httpd', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_metadata': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'nova', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'heat_engine': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_container_server': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'swift_object_replicator': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_replicator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'neutron_l3_agent': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-09-20.1', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_l3_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron', u'/run/netns:/run/netns:shared', u'/var/lib/openstack:/var/lib/openstack', u'/var/lib/neutron/keepalived_wrapper:/usr/local/bin/keepalived:ro', u'/var/lib/neutron/l3_haproxy_wrapper:/usr/local/bin/haproxy:ro', u'/var/lib/neutron/dibbler_wrapper:/usr/local/bin/dibbler_client:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'cinder_scheduler': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/cinder:/var/log/cinder'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_conductor': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-conductor:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_account_server': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'sahara_api': {'image': u'192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/sahara-api.json:/var/lib/kolla/config_files/config.json', 
u'/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/var/lib/sahara:/var/lib/sahara', u'/var/log/containers/sahara:/var/log/sahara'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'sahara_engine': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-sahara-engine:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/sahara-engine.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro', u'/var/lib/sahara:/var/lib/sahara', u'/var/log/containers/sahara:/var/log/sahara'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'neutron_ovs_agent': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-09-20.1', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'cinder_api_cron': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/cinder:/var/log/cinder', 
u'/var/log/containers/httpd/cinder-api:/var/log/httpd'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_account_auditor': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_auditor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'swift_container_replicator': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_replicator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'swift_object_updater': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_updater.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'swift_object_expirer': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_expirer.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'heat_api_cron': {'image': u'192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/log/containers/httpd/heat-api:/var/log/httpd', u'/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_container_auditor': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_auditor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'panko_api': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-panko-api:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/panko:/var/log/panko', u'/var/log/containers/httpd/panko-api:/var/log/httpd', 
u'/var/lib/kolla/config_files/panko_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'aodh_listener': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-listener:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_listener.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'neutron_api': {'start_order': 0, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/log/containers/httpd/neutron-api:/var/log/httpd', u'/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'heat_api_cfn': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/log/containers/httpd/heat-api-cfn:/var/log/httpd', u'/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat_api_cfn/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'logrotate_crond': {'image': 
u'192.168.24.1:8787/rhosp14/openstack-cron:2018-09-20.1', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers:/var/log/containers'], 'net': u'none', 'privileged': True, 'restart': u'always'}}, 'key': u'step_4'}) => {"changed": false, "item": {"key": "step_4", "value": {"aodh_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd", "", ""]}, "aodh_evaluator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-evaluator:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_evaluator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "aodh_listener": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-listener:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_listener.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "aodh_notifier": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-notifier:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_notifier.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "ceilometer_agent_central": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_central.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "ceilometer_agent_notification": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_notification.json:/var/lib/kolla/config_files/config.json:ro", 
"/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src-panko:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd", "", ""]}, "cinder_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_scheduler": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder"]}, "glance_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-glance-api:2018-09-20.1", "net": "host", 
"privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/glance:/var/log/glance", "/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/glance:/var/lib/glance:slave"]}, "heat_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api:/var/log/httpd", "/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro", "", ""]}, "heat_api_cfn": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api-cfn:/var/log/httpd", "/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api_cfn/:/var/lib/kolla/config_files/src:ro", "", ""]}, "heat_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api:/var/log/httpd", "/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro"]}, "heat_engine": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat/:/var/lib/kolla/config_files/src:ro"]}, "keystone_refresh": {"action": "exec", "command": ["keystone", "pkill", "--signal", "USR1", "httpd"], "start_order": 1, "user": "root"}, "logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-09-20.1", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}, "neutron_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "start_order": 0, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd", 
"/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro"]}, "neutron_dhcp": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-09-20.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron", "/run/netns:/run/netns:shared", "/var/lib/openstack:/var/lib/openstack", "/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro", "/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro"]}, "neutron_l3_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-09-20.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_l3_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron", "/run/netns:/run/netns:shared", "/var/lib/openstack:/var/lib/openstack", "/var/lib/neutron/keepalived_wrapper:/usr/local/bin/keepalived:ro", "/var/lib/neutron/l3_haproxy_wrapper:/usr/local/bin/haproxy:ro", "/var/lib/neutron/dibbler_wrapper:/usr/local/bin/dibbler_client:ro"]}, "neutron_metadata_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-metadata-agent:2018-09-20.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/var/lib/neutron:/var/lib/neutron"]}, "neutron_ovs_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-09-20.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch"]}, "nova_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro", "", ""]}, "nova_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", 
"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_conductor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-conductor:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_consoleauth": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-consoleauth:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_consoleauth.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_metadata": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "user": "nova", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_scheduler": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": 
{"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-scheduler:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro", "/run:/run"]}, "nova_vnc_proxy": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-novncproxy:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "panko_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd", "/var/lib/kolla/config_files/panko_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src:ro", "", ""]}, "sahara_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/sahara-api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "sahara_engine": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-sahara-engine:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/sahara-engine.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "swift_account_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-20.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_reaper": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-20.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_reaper.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, 
"swift_account_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-20.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-20.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-09-20.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-09-20.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-09-20.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_updater": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-09-20.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_updater.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-20.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, 
"swift_object_expirer": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-20.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_expirer.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-20.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-20.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_updater": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-20.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_updater.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_proxy": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-20.1", "net": "host", "restart": "always", "start_order": 2, "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/run:/run", "/srv/node:/srv/node", "/dev:/dev"]}, "swift_rsync": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-20.1", "net": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_rsync.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev"]}}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {}, 'key': u'step_6'}) => {"changed": false, "item": {"key": "step_6", "value": {}}, "skip_reason": "Conditional result was False"} > >TASK [Create /var/lib/kolla/config_files directory] **************************** >Friday 21 September 2018 08:30:33 -0400 (0:00:00.768) 0:13:55.950 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Write kolla config json files] ******************************************* >Friday 21 September 2018 08:30:33 -0400 (0:00:00.108) 0:13:56.058 ****** >skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -s -n'}, 'key': 
u'/var/lib/kolla/config_files/logrotate-crond.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/logrotate-crond.json", "value": {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/sbin/iscsid -f'}, 'key': u'/var/lib/kolla/config_files/iscsid.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/iscsid.json", "value": {"command": "/usr/sbin/iscsid -f", "config_files": [{"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -s -n'}, 'key': u'/var/lib/kolla/config_files/logrotate-crond.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/logrotate-crond.json", "value": {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/sbin/libvirtd', 'permissions': [{'owner': u'nova:nova', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/nova_libvirt.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_libvirt.json", "value": {"command": "/usr/sbin/libvirtd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "nova:nova", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ssh/', 'owner': u'root', 'perm': u'0600', 'source': u'/host-ssh/ssh_host_*_key'}], 'command': u'/usr/sbin/sshd -D -p 2022'}, 'key': u'/var/lib/kolla/config_files/nova-migration-target.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova-migration-target.json", "value": {"command": "/usr/sbin/sshd -D -p 2022", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ssh/", "owner": "root", "perm": "0600", "source": "/host-ssh/ssh_host_*_key"}]}}, 
"skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/virtlogd --config /etc/libvirt/virtlogd.conf'}, 'key': u'/var/lib/kolla/config_files/nova_virtlogd.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_virtlogd.json", "value": {"command": "/usr/sbin/virtlogd --config /etc/libvirt/virtlogd.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/neutron_ovs_agent_launcher.sh', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/neutron_ovs_agent.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_ovs_agent.json", "value": {"command": "/neutron_ovs_agent_launcher.sh", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/nova-compute ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}, {'owner': u'nova:nova', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/nova_compute.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_compute.json", "value": {"command": "/usr/bin/nova-compute ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}, {"owner": "nova:nova", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /var/log/ceilometer/compute.log'}, 'key': u'/var/lib/kolla/config_files/ceilometer_agent_compute.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/ceilometer_agent_compute.json", "value": {"command": "/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /var/log/ceilometer/compute.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": 
"/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -s -n'}, 'key': u'/var/lib/kolla/config_files/logrotate-crond.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/logrotate-crond.json", "value": {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': u'/var/lib/kolla/config_files/keystone.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/keystone.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}, {'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/bin/cinder-backup --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/lib/cinder', 'recurse': True}, {'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/cinder_backup.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/cinder_backup.json", "value": {"command": "/usr/bin/cinder-backup --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/lib/cinder", "recurse": true}, {"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': u'/var/lib/kolla/config_files/swift_proxy_tls_proxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_proxy_tls_proxy.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': 
u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-auditor /etc/swift/account-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_account_auditor.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_account_auditor.json", "value": {"command": "/usr/bin/swift-account-auditor /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-replicator /etc/swift/account-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_account_replicator.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_account_replicator.json", "value": {"command": "/usr/bin/swift-account-replicator /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/aodh-notifier', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/aodh_notifier.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/aodh_notifier.json", "value": {"command": "/usr/bin/aodh-notifier", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-scheduler ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/nova_scheduler.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_scheduler.json", "value": {"command": "/usr/bin/nova-scheduler ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/heat_api_cron.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/heat_api_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': 
[{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-dhcp-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/dhcp_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-dhcp-agent --log-file=/var/log/neutron/dhcp-agent.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}, {'owner': u'neutron:neutron', 'path': u'/var/lib/neutron', 'recurse': True}, {'owner': u'neutron:neutron', 'path': u'/etc/pki/tls/certs/neutron.crt'}, {'owner': u'neutron:neutron', 'path': u'/etc/pki/tls/private/neutron.key'}]}, 'key': u'/var/lib/kolla/config_files/neutron_dhcp.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_dhcp.json", "value": {"command": "/usr/bin/neutron-dhcp-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/dhcp_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-dhcp-agent --log-file=/var/log/neutron/dhcp-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/etc/pki/tls/certs/neutron.crt"}, {"owner": "neutron:neutron", "path": "/etc/pki/tls/private/neutron.key"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg', 'permissions': [{'owner': u'haproxy:haproxy', 'path': u'/var/lib/haproxy', 'recurse': True}, {'owner': u'haproxy:haproxy', 'path': u'/etc/pki/tls/certs/haproxy/*', 'optional': True, 'perm': u'0600'}, {'owner': u'haproxy:haproxy', 'path': u'/etc/pki/tls/private/haproxy/*', 'optional': True, 'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/haproxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/haproxy.json", "value": {"command": "/usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg", "config_files": [{"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "haproxy:haproxy", "path": "/var/lib/haproxy", "recurse": true}, {"optional": true, "owner": "haproxy:haproxy", "path": "/etc/pki/tls/certs/haproxy/*", "perm": "0600"}, {"optional": true, "owner": "haproxy:haproxy", "path": "/etc/pki/tls/private/haproxy/*", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': 
{'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/nova_api_cron.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_api_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/bootstrap_host_exec gnocchi_api /usr/bin/gnocchi-upgrade --sacks-number=128', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/gnocchi_db_sync.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/gnocchi_db_sync.json", "value": {"command": "/usr/bin/bootstrap_host_exec gnocchi_api /usr/bin/gnocchi-upgrade --sacks-number=128", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-reaper /etc/swift/account-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_account_reaper.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_account_reaper.json", "value": {"command": "/usr/bin/swift-account-reaper /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/sahara-engine --config-file /etc/sahara/sahara.conf', 'permissions': [{'owner': u'sahara:sahara', 'path': u'/var/lib/sahara', 'recurse': True}, {'owner': u'sahara:sahara', 'path': u'/var/log/sahara', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/sahara-engine.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/sahara-engine.json", "value": {"command": "/usr/bin/sahara-engine --config-file /etc/sahara/sahara.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "sahara:sahara", "path": "/var/lib/sahara", "recurse": true}, 
{"owner": "sahara:sahara", "path": "/var/log/sahara", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/libqb/force-filesystem-sockets', 'owner': u'root', 'perm': u'0644', 'source': u'/dev/null'}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/pacemaker_remoted', 'permissions': [{'owner': u'redis:redis', 'path': u'/var/run/redis', 'recurse': True}, {'owner': u'redis:redis', 'path': u'/var/lib/redis', 'recurse': True}, {'owner': u'redis:redis', 'path': u'/var/log/redis', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/redis.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/redis.json", "value": {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "redis:redis", "path": "/var/run/redis", "recurse": true}, {"owner": "redis:redis", "path": "/var/lib/redis", "recurse": true}, {"owner": "redis:redis", "path": "/var/log/redis", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-novncproxy --web /usr/share/novnc/ ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}, {'owner': u'root:nova', 'path': u'/etc/pki/tls/private/novnc_proxy.key'}]}, 'key': u'/var/lib/kolla/config_files/nova_vnc_proxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_vnc_proxy.json", "value": {"command": "/usr/bin/nova-novncproxy --web /usr/share/novnc/ ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}, {"owner": "root:nova", "path": "/etc/pki/tls/private/novnc_proxy.key"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/glance-api --config-file /usr/share/glance/glance-api-dist.conf --config-file /etc/glance/glance-api.conf', 'permissions': [{'owner': u'glance:glance', 'path': u'/var/lib/glance', 'recurse': True}, {'owner': u'glance:glance', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/glance_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/glance_api.json", "value": {"command": "/usr/bin/glance-api --config-file /usr/share/glance/glance-api-dist.conf --config-file /etc/glance/glance-api.conf", "config_files": [{"dest": "/", "merge": true, 
"preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "glance:glance", "path": "/var/lib/glance", "recurse": true}, {"owner": "glance:glance", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-auditor /etc/swift/container-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_container_auditor.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_container_auditor.json", "value": {"command": "/usr/bin/swift-container-auditor /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-panko/*', 'preserve_properties': True}], 'command': u'/usr/bin/ceilometer-agent-notification --logfile /var/log/ceilometer/agent-notification.log', 'permissions': [{'owner': u'root:ceilometer', 'path': u'/etc/panko', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/ceilometer_agent_notification.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/ceilometer_agent_notification.json", "value": {"command": "/usr/bin/ceilometer-agent-notification --logfile /var/log/ceilometer/agent-notification.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-panko/*"}], "permissions": [{"owner": "root:ceilometer", "path": "/etc/panko", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-expirer /etc/swift/object-expirer.conf'}, 'key': u'/var/lib/kolla/config_files/swift_object_expirer.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_object_expirer.json", "value": {"command": "/usr/bin/swift-object-expirer /etc/swift/object-expirer.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/ceilometer-polling --polling-namespaces central --logfile /var/log/ceilometer/central.log'}, 'key': u'/var/lib/kolla/config_files/ceilometer_agent_central.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/ceilometer_agent_central.json", "value": {"command": "/usr/bin/ceilometer-polling --polling-namespaces central --logfile /var/log/ceilometer/central.log", "config_files": [{"dest": "/", "merge": true, 
"preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/heat_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/heat_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/rsync --daemon --no-detach --config=/etc/rsyncd.conf'}, 'key': u'/var/lib/kolla/config_files/swift_rsync.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_rsync.json", "value": {"command": "/usr/bin/rsync --daemon --no-detach --config=/etc/rsyncd.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-server /etc/swift/account-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_account_server.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_account_server.json", "value": {"command": "/usr/bin/swift-account-server /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/cinder_api_cron.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/cinder_api_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-proxy-server /etc/swift/proxy-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_proxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_proxy.json", "value": {"command": "/usr/bin/swift-proxy-server /etc/swift/proxy-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, 
"skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-updater /etc/swift/container-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_container_updater.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_container_updater.json", "value": {"command": "/usr/bin/swift-container-updater /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/xinetd -dontfork'}, 'key': u'/var/lib/kolla/config_files/clustercheck.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/clustercheck.json", "value": {"command": "/usr/sbin/xinetd -dontfork", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/libqb/force-filesystem-sockets', 'owner': u'root', 'perm': u'0644', 'source': u'/dev/null'}, {'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/sbin/pacemaker_remoted', 'permissions': [{'owner': u'mysql:mysql', 'path': u'/var/log/mysql', 'recurse': True}, {'owner': u'mysql:mysql', 'path': u'/etc/pki/tls/certs/mysql.crt', 'optional': True, 'perm': u'0600'}, {'owner': u'mysql:mysql', 'path': u'/etc/pki/tls/private/mysql.key', 'optional': True, 'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/mysql.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/mysql.json", "value": {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "mysql:mysql", "path": "/var/log/mysql", "recurse": true}, {"optional": true, "owner": "mysql:mysql", "path": "/etc/pki/tls/certs/mysql.crt", "perm": "0600"}, {"optional": true, "owner": "mysql:mysql", "path": "/etc/pki/tls/private/mysql.key", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/nova_placement.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_placement.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": 
"/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/sahara-api --config-file /etc/sahara/sahara.conf', 'permissions': [{'owner': u'sahara:sahara', 'path': u'/var/lib/sahara', 'recurse': True}, {'owner': u'sahara:sahara', 'path': u'/var/log/sahara', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/sahara-api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/sahara-api.json", "value": {"command": "/usr/bin/sahara-api --config-file /etc/sahara/sahara.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "sahara:sahara", "path": "/var/lib/sahara", "recurse": true}, {"owner": "sahara:sahara", "path": "/var/log/sahara", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/aodh_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/aodh_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'keystone:keystone', 'path': u'/var/log/keystone', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/keystone_cron.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/keystone_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "keystone:keystone", "path": "/var/log/keystone", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': u'/var/lib/kolla/config_files/neutron_server_tls_proxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_server_tls_proxy.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-replicator /etc/swift/object-server.conf'}, 'key': 
u'/var/lib/kolla/config_files/swift_object_replicator.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_object_replicator.json", "value": {"command": "/usr/bin/swift-object-replicator /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-conductor ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/nova_conductor.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_conductor.json", "value": {"command": "/usr/bin/nova-conductor ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/heat_api_cfn.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/heat_api_cfn.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-api-metadata ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/nova_metadata.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_metadata.json", "value": {"command": "/usr/bin/nova-api-metadata ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/neutron_ovs_agent_launcher.sh', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/neutron_ovs_agent.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_ovs_agent.json", "value": {"command": "/neutron_ovs_agent_launcher.sh", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': 
u'/etc/libqb/force-filesystem-sockets', 'owner': u'root', 'perm': u'0644', 'source': u'/dev/null'}, {'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/sbin/pacemaker_remoted', 'permissions': [{'owner': u'rabbitmq:rabbitmq', 'path': u'/var/lib/rabbitmq', 'recurse': True}, {'owner': u'rabbitmq:rabbitmq', 'path': u'/var/log/rabbitmq', 'recurse': True}, {'owner': u'rabbitmq:rabbitmq', 'path': u'/etc/pki/tls/certs/rabbitmq.crt', 'optional': True, 'perm': u'0600'}, {'owner': u'rabbitmq:rabbitmq', 'path': u'/etc/pki/tls/private/rabbitmq.key', 'optional': True, 'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/rabbitmq.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/rabbitmq.json", "value": {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "rabbitmq:rabbitmq", "path": "/var/lib/rabbitmq", "recurse": true}, {"owner": "rabbitmq:rabbitmq", "path": "/var/log/rabbitmq", "recurse": true}, {"optional": true, "owner": "rabbitmq:rabbitmq", "path": "/etc/pki/tls/certs/rabbitmq.crt", "perm": "0600"}, {"optional": true, "owner": "rabbitmq:rabbitmq", "path": "/etc/pki/tls/private/rabbitmq.key", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-consoleauth ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/nova_consoleauth.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_consoleauth.json", "value": {"command": "/usr/bin/nova-consoleauth ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-updater /etc/swift/object-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_object_updater.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_object_updater.json", "value": {"command": "/usr/bin/swift-object-updater /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/server --config-file /etc/neutron/neutron.conf --config-file 
/etc/neutron/plugin.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-server --log-file=/var/log/neutron/server.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/neutron_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_api.json", "value": {"command": "/usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-server --log-file=/var/log/neutron/server.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/cinder-scheduler --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/cinder_scheduler.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/cinder_scheduler.json", "value": {"command": "/usr/bin/cinder-scheduler --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/gnocchi-metricd', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/gnocchi_metricd.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/gnocchi_metricd.json", "value": {"command": "/usr/bin/gnocchi-metricd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-metadata-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/metadata_agent.ini --config-dir /etc/neutron/conf.d/common 
--config-dir /etc/neutron/conf.d/neutron-metadata-agent --log-file=/var/log/neutron/metadata-agent.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}, {'owner': u'neutron:neutron', 'path': u'/var/lib/neutron', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/neutron_metadata_agent.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_metadata_agent.json", "value": {"command": "/usr/bin/neutron-metadata-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/metadata_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-metadata-agent --log-file=/var/log/neutron/metadata-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-replicator /etc/swift/container-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_container_replicator.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_container_replicator.json", "value": {"command": "/usr/bin/swift-container-replicator /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/heat-engine --config-file /usr/share/heat/heat-dist.conf --config-file /etc/heat/heat.conf ', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/heat_engine.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/heat_engine.json", "value": {"command": "/usr/bin/heat-engine --config-file /usr/share/heat/heat-dist.conf --config-file /etc/heat/heat.conf ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/nova_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 
'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-server /etc/swift/object-server.conf', 'permissions': [{'owner': u'swift:swift', 'path': u'/var/cache/swift', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/swift_object_server.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_object_server.json", "value": {"command": "/usr/bin/swift-object-server /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "swift:swift", "path": "/var/cache/swift", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'stunnel /etc/stunnel/stunnel.conf', 'permissions': [{'owner': u'root:root', 'path': u'/etc/pki/tls/certs/redis.crt', 'optional': True, 'perm': u'0600'}, {'owner': u'root:root', 'path': u'/etc/pki/tls/private/redis.key', 'optional': True, 'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/redis_tls_proxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/redis_tls_proxy.json", "value": {"command": "stunnel /etc/stunnel/stunnel.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"optional": true, "owner": "root:root", "path": "/etc/pki/tls/certs/redis.crt", "perm": "0600"}, {"optional": true, "owner": "root:root", "path": "/etc/pki/tls/private/redis.key", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/gnocchi_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/gnocchi_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': 
True}]}, 'key': u'/var/lib/kolla/config_files/cinder_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/cinder_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}, {'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/cinder_volume.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/cinder_volume.json", "value": {"command": "/usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'panko:panko', 'path': u'/var/log/panko', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/panko_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/panko_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "panko:panko", "path": "/var/log/panko", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-auditor /etc/swift/object-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_object_auditor.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_object_auditor.json", "value": {"command": "/usr/bin/swift-object-auditor /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-l3-agent --config-file 
/usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/l3_agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/l3_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-l3-agent --log-file=/var/log/neutron/l3-agent.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}, {'owner': u'neutron:neutron', 'path': u'/var/lib/neutron', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/neutron_l3_agent.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_l3_agent.json", "value": {"command": "/usr/bin/neutron-l3-agent --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/l3_agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/l3_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-l3-agent --log-file=/var/log/neutron/l3-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/aodh-listener', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/aodh_listener.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/aodh_listener.json", "value": {"command": "/usr/bin/aodh-listener", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-server /etc/swift/container-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_container_server.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_container_server.json", "value": {"command": "/usr/bin/swift-container-server /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/aodh-evaluator', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/aodh_evaluator.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/aodh_evaluator.json", "value": {"command": "/usr/bin/aodh-evaluator", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': 
[{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': u'/var/lib/kolla/config_files/glance_api_tls_proxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/glance_api_tls_proxy.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/sbin/iscsid -f'}, 'key': u'/var/lib/kolla/config_files/iscsid.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/iscsid.json", "value": {"command": "/usr/sbin/iscsid -f", "config_files": [{"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/gnocchi-statsd', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/gnocchi_statsd.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/gnocchi_statsd.json", "value": {"command": "/usr/bin/gnocchi-statsd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'apache:apache', 'path': u'/var/log/horizon/', 'recurse': True}, {'owner': u'apache:apache', 'path': u'/etc/openstack-dashboard/', 'recurse': True}, {'owner': u'apache:apache', 'path': u'/usr/share/openstack-dashboard/openstack_dashboard/local/', 'recurse': False}, {'owner': u'apache:apache', 'path': u'/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.d/', 'recurse': False}]}, 'key': u'/var/lib/kolla/config_files/horizon.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/horizon.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "apache:apache", "path": "/var/log/horizon/", "recurse": true}, {"owner": "apache:apache", "path": "/etc/openstack-dashboard/", "recurse": true}, {"owner": "apache:apache", "path": 
"/usr/share/openstack-dashboard/openstack_dashboard/local/", "recurse": false}, {"owner": "apache:apache", "path": "/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.d/", "recurse": false}]}}, "skip_reason": "Conditional result was False"} > >TASK [Clean /var/lib/docker-puppet/docker-puppet-tasks*.json files] ************ >Friday 21 September 2018 08:30:34 -0400 (0:00:00.687) 0:13:56.746 ****** > [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > >TASK [Write docker-puppet-tasks json files] ************************************ >Friday 21 September 2018 08:30:34 -0400 (0:00:00.096) 0:13:56.843 ****** >skipping: [controller-0] => (item={'value': [{'puppet_tags': u'keystone_config,keystone_domain_config,keystone_endpoint,keystone_identity_provider,keystone_paste_ini,keystone_role,keystone_service,keystone_tenant,keystone_user,keystone_user_role,keystone_domain', 'config_volume': u'keystone_init_tasks', 'step_config': u'include ::tripleo::profile::base::keystone', 'config_image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-20.1'}], 'key': u'step_3'}) => {"changed": false, "item": {"key": "step_3", "value": [{"config_image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-20.1", "config_volume": "keystone_init_tasks", "puppet_tags": "keystone_config,keystone_domain_config,keystone_endpoint,keystone_identity_provider,keystone_paste_ini,keystone_role,keystone_service,keystone_tenant,keystone_user,keystone_user_role,keystone_domain", "step_config": "include ::tripleo::profile::base::keystone"}]}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': [{'puppet_tags': u'cinder_config,cinder_type,file,concat,file_line', 'config_volume': u'cinder_init_tasks', 'step_config': u'include ::tripleo::profile::base::cinder::api', 'config_image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-20.1', 'volumes': [u'/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro']}], 'key': u'step_4'}) => {"changed": false, "item": {"key": "step_4", "value": [{"config_image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-20.1", "config_volume": "cinder_init_tasks", "puppet_tags": "cinder_config,cinder_type,file,concat,file_line", "step_config": "include ::tripleo::profile::base::cinder::api", "volumes": ["/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro"]}]}, "skip_reason": "Conditional result was False"} > >TASK [Set host puppet debugging fact string] *********************************** >Friday 21 September 2018 08:30:34 -0400 (0:00:00.101) 0:13:56.944 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Write the config_step hieradata] ***************************************** >Friday 21 September 2018 08:30:34 -0400 (0:00:00.105) 0:13:57.050 ****** >changed: [controller-0] => {"changed": true, "checksum": "f17091ee142621a3c8290c8c96b5b52d67b3a864", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "0c07a8d2f57375a6b7ce729be89e77fb", "mode": "0600", "owner": "root", "secontext": 
"system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537533034.61-153179258734839/source", "state": "file", "uid": 0} >changed: [compute-0] => {"changed": true, "checksum": "f17091ee142621a3c8290c8c96b5b52d67b3a864", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "0c07a8d2f57375a6b7ce729be89e77fb", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537533034.64-165664421485611/source", "state": "file", "uid": 0} >changed: [ceph-0] => {"changed": true, "checksum": "f17091ee142621a3c8290c8c96b5b52d67b3a864", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "0c07a8d2f57375a6b7ce729be89e77fb", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537533034.68-202586753269782/source", "state": "file", "uid": 0} > >TASK [Run puppet host configuration for step 2] ******************************** >Friday 21 September 2018 08:30:35 -0400 (0:00:00.662) 0:13:57.713 ****** >changed: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} >changed: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} >changed: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} > >TASK [Debug output for task which failed: Run puppet host configuration for step 2] *** >Friday 21 September 2018 08:30:50 -0400 (0:00:15.702) 0:14:13.415 ****** >ok: [controller-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 3.28 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_Controller2]/ensure: created", > "Notice: /Stage[main]/Pacemaker::Resource_defaults/Pcmk_resource_default[resource-stickiness]/ensure: created", > "Notice: /Stage[main]/Timezone/File[/etc/localtime]/seltype: seltype changed 'locale_t' to 'etc_t'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: Applied catalog in 4.19 seconds", > "Changes:", > " Total: 4", > "Events:", > " Success: 4", > "Resources:", > " Corrective change: 1", > " Total: 216", > " Out of sync: 4", > " Changed: 4", > "Time:", > " Concat fragment: 0.00", > " Concat file: 0.00", > " File line: 0.00", > " Schedule: 0.00", > " Cron: 0.00", > " Anchor: 0.00", > " User: 0.00", > " Package manifest: 0.00", > " Sysctl runtime: 0.00", > " Sysctl: 0.01", > " Augeas: 0.02", > " Firewall: 0.02", > " File: 0.15", > " Service: 0.24", > " Package: 0.37", > " Pcmk property: 0.40", > " Exec: 0.85", > " Pcmk resource default: 1.17", > " Last run: 1537533050", > " Config retrieval: 3.85", > " Total: 7.08", > " Filebucket: 0.00", > "Version:", > " Config: 1537533042", > " Puppet: 4.8.2", > "Warning: 
Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > "Warning: tag is a metaparam; this value will inherit to all contained resources in the tripleo::firewall::rule definition", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 148]:" > ] >} >ok: [compute-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for compute-0.localdomain in environment production in 1.89 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_Compute2]/ensure: created", > "Notice: /Stage[main]/Timezone/File[/etc/localtime]/seltype: seltype changed 'locale_t' to 'etc_t'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: Applied catalog in 1.28 seconds", > "Changes:", > " Total: 3", > "Events:", > " Success: 3", > "Resources:", > " Corrective change: 1", > " Total: 140", > " Out of sync: 3", > " Changed: 3", > "Time:", > " Concat file: 0.00", > " Schedule: 0.00", > " Anchor: 0.00", > " Cron: 0.00", > " Package manifest: 0.00", > " Sysctl runtime: 0.00", > " Sysctl: 0.00", > " Firewall: 0.01", > " Augeas: 0.02", > " File: 0.04", > " Service: 0.16", > " Exec: 0.19", > " Package: 0.24", > " Last run: 1537533045", > " Config retrieval: 2.23", > " Total: 2.90", > " Filebucket: 0.00", > " Concat fragment: 0.00", > "Version:", > " Config: 1537533041", > " Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > "Warning: tag is a metaparam; this value will inherit to all contained resources in the tripleo::firewall::rule definition", > " with Stdlib::Compat::Hash. 
There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 148]:" > ] >} >ok: [ceph-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for ceph-0.localdomain in environment production in 2.00 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_CephStorage2]/ensure: created", > "Notice: /Stage[main]/Timezone/File[/etc/localtime]/seltype: seltype changed 'locale_t' to 'etc_t'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: Applied catalog in 1.34 seconds", > "Changes:", > " Total: 3", > "Events:", > " Success: 3", > "Resources:", > " Corrective change: 1", > " Total: 134", > " Out of sync: 3", > " Changed: 3", > "Time:", > " Concat fragment: 0.00", > " Concat file: 0.00", > " Schedule: 0.00", > " Anchor: 0.00", > " Cron: 0.00", > " Package manifest: 0.00", > " Sysctl runtime: 0.01", > " Sysctl: 0.01", > " Firewall: 0.01", > " Augeas: 0.02", > " File: 0.05", > " Service: 0.11", > " Exec: 0.21", > " Package: 0.24", > " Last run: 1537533045", > " Config retrieval: 2.29", > " Total: 2.93", > " Filebucket: 0.00", > "Version:", > " Config: 1537533042", > " Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > "Warning: tag is a metaparam; this value will inherit to all contained resources in the tripleo::firewall::rule definition", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 148]:" > ] >} > >TASK [Run docker-puppet tasks (generate config) during step 2] ***************** >Friday 21 September 2018 08:30:51 -0400 (0:00:00.146) 0:14:13.562 ****** >skipping: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} > >TASK [Debug output for task which failed: Run docker-puppet tasks (generate config) during step 2] *** >Friday 21 September 2018 08:30:51 -0400 (0:00:00.105) 0:14:13.667 ****** >skipping: [controller-0] => {} >skipping: [compute-0] => {} >skipping: [ceph-0] => {} > >TASK [Start containers for step 2] ********************************************* >Friday 21 September 2018 08:30:51 -0400 (0:00:00.103) 0:14:13.771 ****** >ok: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >ok: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >ok: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} > > >TASK [Debug output for task which failed: Start containers for step 2] ********* >Friday 21 September 2018 08:38:09 -0400 (0:07:18.226) 0:21:31.998 ****** >ok: [compute-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [] >} >ok: [ceph-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [] >} >ok: [controller-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-cinder-scheduler ... ", > "2018-09-20.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-cinder-scheduler", > "378837c0e24a: Already exists", > "e17262bc2341: Already exists", > "86a0e618a180: Already exists", > "dfa58d50e0a3: Already exists", > "020a12d8eacf: Already exists", > "3c49c66881c2: Pulling fs layer", > "3c49c66881c2: Verifying Checksum", > "3c49c66881c2: Download complete", > "3c49c66881c2: Pull complete", > "Digest: sha256:53993bd91a465703cc222a3bb30d52d8dd1a843f5f105c853b6720f7bdce1c8a", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-09-20.1", > "", > "stderr: ", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-heat-engine ... 
", > "2018-09-20.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-heat-engine", > "c59832cf029f: Already exists", > "a0613717215c: Pulling fs layer", > "a0613717215c: Verifying Checksum", > "a0613717215c: Download complete", > "a0613717215c: Pull complete", > "Digest: sha256:293253d6e6c55b0c5431aaf31d454d0e6d236f77028d81ff2ac5a4d2cdc20a08", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-09-20.1", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent ... ", > "2018-09-20.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent", > "763394b9c1e7: Already exists", > "c3947f38fa77: Pulling fs layer", > "c3947f38fa77: Download complete", > "c3947f38fa77: Pull complete", > "Digest: sha256:0091a360ec97a19404c03294539880fae09f058e4ed5ea372e748db6bdaa0c28", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-09-20.1", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent ... ", > "2018-09-20.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent", > "9a7bf3184d65: Pulling fs layer", > "9a7bf3184d65: Verifying Checksum", > "9a7bf3184d65: Download complete", > "9a7bf3184d65: Pull complete", > "Digest: sha256:aea04d3162af3104729442ac490ccfb664a6b2bf7720b5569888dfcff331a578", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-09-20.1", > "stdout: 97cf643a1f47fa77bdb1d374ed57b16c95e9d526d59cf07aa801bc56e679e311", > "stdout: ", > "stderr: Error: unable to find resource 'galera-bundle'", > "stdout: 617077184a3c1d51b32273ba7c40b11779089fcae43526792c1aff93388ff92a", > "stdout: f70af478d8bb83527b327829ce3ebb0facd99bca1fa9a7363769e547dd92901d", > "stdout: 2f8af77938e58b0506d1f330b5a0ef13a8ce9b495a1db340a98edf4800aa976e", > "stdout: Skipping execution since this is not the bootstrap node for this service.", > "stdout: da130ae2c478946509538f3f4fbc615942144e86b0735e05fe7b7fe99b4ac965", > "stdout: c8e6a3afbbfbda6c71f178162de3b4090a09d9c9f2baad4ab1663617861eb47f", > "stdout: 9414c1d57cbfd739a1d67e518f534f085476254223e7ef409842e4b604ed2e0f", > "stdout: 3b098e9fbeaeb624b300cd365726a1615c386f4c0a0eea7375fc7bfafa5e3e5e", > "stdout: 405da0e883d3275979c8d9d3615a126b9dafec7360c65f6e06af5a08862d7a55", > "stdout: 32c3911d8455e61d0dfb6aff7925454dca2c650005954f612345300966b7803f", > "stdout: b74ce987e1d0ec3dd61a9f8fffcb756f92f393a25fc8ca0492055788c4071371", > "stdout: 3ade04171bd4545c319f3c409d93650e9d1abe5684bad02edb81ac363c7b3660", > "stdout: Debug: Runtime environment: puppet_version=4.8.2, ruby_version=2.0.0, run_mode=user, default_encoding=US-ASCII", > "Debug: Evicting cache entry for environment 'production'", > "Debug: Caching environment 'production' (ttl = 0 sec)", > "Debug: Loading external facts from /etc/puppet/modules/openstacklib/facts.d", > "Debug: Loading external facts from /var/lib/puppet/facts.d", > "Info: Loading facts", > "Debug: Loading facts from /etc/puppet/modules/stdlib/lib/facter/pe_version.rb", > "Debug: Loading facts from /etc/puppet/modules/stdlib/lib/facter/root_home.rb", > "Debug: Loading facts from /etc/puppet/modules/stdlib/lib/facter/facter_dot_d.rb", > "Debug: Loading facts from /etc/puppet/modules/stdlib/lib/facter/puppet_settings.rb", > "Debug: Loading facts from /etc/puppet/modules/stdlib/lib/facter/service_provider.rb", > "Debug: Loading facts from /etc/puppet/modules/stdlib/lib/facter/package_provider.rb", > "Debug: Loading facts from 
/etc/puppet/modules/redis/lib/facter/redis_server_version.rb", > "Debug: Loading facts from /etc/puppet/modules/ipaclient/lib/facter/ipa_facts.rb", > "Debug: Loading facts from /etc/puppet/modules/ipaclient/lib/facter/sssd_facts.rb", > "Debug: Loading facts from /etc/puppet/modules/rabbitmq/lib/facter/rabbitmq_version.rb", > "Debug: Loading facts from /etc/puppet/modules/rabbitmq/lib/facter/rabbitmq_nodename.rb", > "Debug: Loading facts from /etc/puppet/modules/rabbitmq/lib/facter/erl_ssl_path.rb", > "Debug: Loading facts from /etc/puppet/modules/elasticsearch/lib/facter/es_facts.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandraminorversion.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandracmsmaxheapsize.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandrarelease.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandrapatchversion.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandraheapnewsize.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandramajorversion.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandramaxheapsize.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandracmsheapnewsize.rb", > "Debug: Loading facts from /etc/puppet/modules/mysql/lib/facter/mysqld_version.rb", > "Debug: Loading facts from /etc/puppet/modules/mysql/lib/facter/mysql_server_id.rb", > "Debug: Loading facts from /etc/puppet/modules/mysql/lib/facter/mysql_version.rb", > "Debug: Loading facts from /etc/puppet/modules/vcsrepo/lib/facter/vcsrepo_svn_ver.rb", > "Debug: Loading facts from /etc/puppet/modules/pacemaker/lib/facter/pacemaker_node_name.rb", > "Debug: Loading facts from /etc/puppet/modules/java/lib/facter/java_major_version.rb", > "Debug: Loading facts from /etc/puppet/modules/java/lib/facter/java_patch_level.rb", > "Debug: Loading facts from /etc/puppet/modules/java/lib/facter/java_version.rb", > "Debug: Loading facts from /etc/puppet/modules/java/lib/facter/java_default_home.rb", > "Debug: Loading facts from /etc/puppet/modules/java/lib/facter/java_libjvm_path.rb", > "Debug: Loading facts from /etc/puppet/modules/ssh/lib/facter/ssh_client_version.rb", > "Debug: Loading facts from /etc/puppet/modules/ssh/lib/facter/ssh_server_version.rb", > "Debug: Loading facts from /etc/puppet/modules/staging/lib/facter/staging_http_get.rb", > "Debug: Loading facts from /etc/puppet/modules/staging/lib/facter/staging_windir.rb", > "Debug: Loading facts from /etc/puppet/modules/haproxy/lib/facter/haproxy_version.rb", > "Debug: Loading facts from /etc/puppet/modules/sensu/lib/facter/sensu_version.rb", > "Debug: Loading facts from /etc/puppet/modules/vswitch/lib/facter/pci_address.rb", > "Debug: Loading facts from /etc/puppet/modules/vswitch/lib/facter/ovs.rb", > "Debug: Loading facts from /etc/puppet/modules/vswitch/lib/facter/ovs_uuid.rb", > "Debug: Loading facts from /etc/puppet/modules/firewall/lib/facter/iptables_version.rb", > "Debug: Loading facts from /etc/puppet/modules/firewall/lib/facter/iptables_persistent_version.rb", > "Debug: Loading facts from /etc/puppet/modules/firewall/lib/facter/ip6tables_version.rb", > "Debug: Loading facts from /etc/puppet/modules/git/lib/facter/git_exec_path.rb", > "Debug: Loading facts from /etc/puppet/modules/git/lib/facter/git_version.rb", > "Debug: Loading facts from /etc/puppet/modules/git/lib/facter/git_html_path.rb", > "Debug: 
Loading facts from /etc/puppet/modules/tripleo/lib/facter/alt_fqdns.rb", > "Debug: Loading facts from /etc/puppet/modules/tripleo/lib/facter/netmask_ipv6.rb", > "Debug: Loading facts from /etc/puppet/modules/tripleo/lib/facter/nic_alias.rb", > "Debug: Loading facts from /etc/puppet/modules/tripleo/lib/facter/docker_group_gid.rb", > "Debug: Loading facts from /etc/puppet/modules/nova/lib/facter/ipa_hostname.rb", > "Debug: Loading facts from /etc/puppet/modules/nova/lib/facter/libvirt_uuid.rb", > "Debug: Loading facts from /etc/puppet/modules/openstacklib/lib/facter/os_package_type.rb", > "Debug: Loading facts from /etc/puppet/modules/openstacklib/lib/facter/os_workers.rb", > "Debug: Loading facts from /etc/puppet/modules/openstacklib/lib/facter/os_service_default.rb", > "Debug: Loading facts from /etc/puppet/modules/archive/lib/facter/archive_windir.rb", > "Debug: Loading facts from /etc/puppet/modules/systemd/lib/facter/systemd.rb", > "Debug: Loading facts from /etc/puppet/modules/apache/lib/facter/apache_version.rb", > "Debug: Loading facts from /etc/puppet/modules/collectd/lib/facter/python_dir.rb", > "Debug: Loading facts from /etc/puppet/modules/collectd/lib/facter/collectd_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/stdlib/lib/facter/pe_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/stdlib/lib/facter/root_home.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/stdlib/lib/facter/facter_dot_d.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/stdlib/lib/facter/puppet_settings.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/stdlib/lib/facter/service_provider.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/stdlib/lib/facter/package_provider.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/redis/lib/facter/redis_server_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/ipaclient/lib/facter/ipa_facts.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/ipaclient/lib/facter/sssd_facts.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/rabbitmq/lib/facter/rabbitmq_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/rabbitmq/lib/facter/rabbitmq_nodename.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/rabbitmq/lib/facter/erl_ssl_path.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/elasticsearch/lib/facter/es_facts.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandraminorversion.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandracmsmaxheapsize.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandrarelease.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandrapatchversion.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandraheapnewsize.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandramajorversion.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandramaxheapsize.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandracmsheapnewsize.rb", > "Debug: Loading facts from 
/usr/share/openstack-puppet/modules/mysql/lib/facter/mysqld_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/mysql/lib/facter/mysql_server_id.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/mysql/lib/facter/mysql_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/vcsrepo/lib/facter/vcsrepo_svn_ver.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/pacemaker/lib/facter/pacemaker_node_name.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/java/lib/facter/java_major_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/java/lib/facter/java_patch_level.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/java/lib/facter/java_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/java/lib/facter/java_default_home.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/java/lib/facter/java_libjvm_path.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/ssh/lib/facter/ssh_client_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/ssh/lib/facter/ssh_server_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/staging/lib/facter/staging_http_get.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/staging/lib/facter/staging_windir.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/haproxy/lib/facter/haproxy_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/sensu/lib/facter/sensu_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/vswitch/lib/facter/pci_address.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/vswitch/lib/facter/ovs.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/vswitch/lib/facter/ovs_uuid.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/firewall/lib/facter/iptables_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/firewall/lib/facter/iptables_persistent_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/firewall/lib/facter/ip6tables_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/git/lib/facter/git_exec_path.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/git/lib/facter/git_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/git/lib/facter/git_html_path.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/tripleo/lib/facter/alt_fqdns.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/tripleo/lib/facter/netmask_ipv6.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/tripleo/lib/facter/nic_alias.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/tripleo/lib/facter/docker_group_gid.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/nova/lib/facter/ipa_hostname.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/nova/lib/facter/libvirt_uuid.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/openstacklib/lib/facter/os_package_type.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/openstacklib/lib/facter/os_workers.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/openstacklib/lib/facter/os_service_default.rb", > "Debug: Loading facts from 
/usr/share/openstack-puppet/modules/archive/lib/facter/archive_windir.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/systemd/lib/facter/systemd.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/apache/lib/facter/apache_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/collectd/lib/facter/python_dir.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/collectd/lib/facter/collectd_version.rb", > "Debug: Facter: Found no suitable resolves of 1 for ec2_metadata", > "Debug: Facter: value for ec2_metadata is still nil", > "Debug: Failed to load library 'cfpropertylist' for feature 'cfpropertylist'", > "Debug: Executing: '/usr/bin/rpm --version'", > "Debug: Executing: '/usr/bin/rpm -ql rpm'", > "Debug: Facter: value for agent_specified_environment is still nil", > "Debug: Facter: value for cfkey is still nil", > "Debug: Facter: Found no suitable resolves of 1 for dhcp_servers", > "Debug: Facter: value for dhcp_servers is still nil", > "Debug: Facter: Found no suitable resolves of 1 for ec2_userdata", > "Debug: Facter: value for ec2_userdata is still nil", > "Debug: Facter: Found no suitable resolves of 1 for gce", > "Debug: Facter: value for gce is still nil", > "Debug: Facter: value for ipaddress6_br_ex is still nil", > "Debug: Facter: value for ipaddress_br_isolated is still nil", > "Debug: Facter: value for ipaddress6_br_isolated is still nil", > "Debug: Facter: value for netmask_br_isolated is still nil", > "Debug: Facter: value for ipaddress6_docker0 is still nil", > "Debug: Facter: value for ipaddress6_eth0 is still nil", > "Debug: Facter: value for ipaddress_eth1 is still nil", > "Debug: Facter: value for ipaddress6_eth1 is still nil", > "Debug: Facter: value for netmask_eth1 is still nil", > "Debug: Facter: value for ipaddress_eth2 is still nil", > "Debug: Facter: value for ipaddress6_eth2 is still nil", > "Debug: Facter: value for netmask_eth2 is still nil", > "Debug: Facter: value for ipaddress6_lo is still nil", > "Debug: Facter: value for macaddress_lo is still nil", > "Debug: Facter: value for ipaddress_ovs_system is still nil", > "Debug: Facter: value for ipaddress6_ovs_system is still nil", > "Debug: Facter: value for netmask_ovs_system is still nil", > "Debug: Facter: value for ipaddress6_vlan20 is still nil", > "Debug: Facter: value for ipaddress6_vlan30 is still nil", > "Debug: Facter: value for ipaddress6_vlan40 is still nil", > "Debug: Facter: value for ipaddress6_vlan50 is still nil", > "Debug: Facter: value for ipaddress6 is still nil", > "Debug: Facter: Found no suitable resolves of 2 for iphostnumber", > "Debug: Facter: value for iphostnumber is still nil", > "Debug: Facter: Found no suitable resolves of 1 for lsbdistcodename", > "Debug: Facter: value for lsbdistcodename is still nil", > "Debug: Facter: Found no suitable resolves of 1 for lsbdistdescription", > "Debug: Facter: value for lsbdistdescription is still nil", > "Debug: Facter: Found no suitable resolves of 1 for lsbdistid", > "Debug: Facter: value for lsbdistid is still nil", > "Debug: Facter: Found no suitable resolves of 1 for lsbdistrelease", > "Debug: Facter: value for lsbdistrelease is still nil", > "Debug: Facter: Found no suitable resolves of 1 for lsbmajdistrelease", > "Debug: Facter: value for lsbmajdistrelease is still nil", > "Debug: Facter: Found no suitable resolves of 1 for lsbminordistrelease", > "Debug: Facter: value for lsbminordistrelease is still nil", > "Debug: Facter: Found no suitable resolves of 1 for 
lsbrelease", > "Debug: Facter: value for lsbrelease is still nil", > "Debug: Facter: Found no suitable resolves of 2 for swapencrypted", > "Debug: Facter: value for swapencrypted is still nil", > "Debug: Facter: value for network_br_isolated is still nil", > "Debug: Facter: value for network_eth1 is still nil", > "Debug: Facter: value for network_eth2 is still nil", > "Debug: Facter: value for network_ovs_system is still nil", > "Debug: Facter: Found no suitable resolves of 1 for processor", > "Debug: Facter: value for processor is still nil", > "Debug: Facter: value for is_rsc is still nil", > "Debug: Facter: Found no suitable resolves of 1 for rsc_region", > "Debug: Facter: value for rsc_region is still nil", > "Debug: Facter: Found no suitable resolves of 1 for rsc_instance_id", > "Debug: Facter: value for rsc_instance_id is still nil", > "Debug: Facter: Found no suitable resolves of 1 for selinux_enforced", > "Debug: Facter: value for selinux_enforced is still nil", > "Debug: Facter: Found no suitable resolves of 1 for selinux_policyversion", > "Debug: Facter: value for selinux_policyversion is still nil", > "Debug: Facter: Found no suitable resolves of 1 for selinux_current_mode", > "Debug: Facter: value for selinux_current_mode is still nil", > "Debug: Facter: Found no suitable resolves of 1 for selinux_config_mode", > "Debug: Facter: value for selinux_config_mode is still nil", > "Debug: Facter: Found no suitable resolves of 1 for selinux_config_policy", > "Debug: Facter: value for selinux_config_policy is still nil", > "Debug: Facter: value for sshdsakey is still nil", > "Debug: Facter: value for sshfp_dsa is still nil", > "Debug: Facter: value for sshrsakey is still nil", > "Debug: Facter: value for sshfp_rsa is still nil", > "Debug: Facter: value for sshecdsakey is still nil", > "Debug: Facter: value for sshfp_ecdsa is still nil", > "Debug: Facter: value for sshed25519key is still nil", > "Debug: Facter: value for sshfp_ed25519 is still nil", > "Debug: Facter: Found no suitable resolves of 1 for system32", > "Debug: Facter: value for system32 is still nil", > "Debug: Facter: value for vlans is still nil", > "Debug: Facter: Found no suitable resolves of 1 for xendomains", > "Debug: Facter: value for xendomains is still nil", > "Debug: Facter: value for zfs_version is still nil", > "Debug: Facter: Found no suitable resolves of 1 for zonename", > "Debug: Facter: value for zonename is still nil", > "Debug: Facter: value for zpool_version is still nil", > "Debug: Facter: value for pe_version is still nil", > "Debug: Facter: Found no suitable resolves of 2 for pe_major_version", > "Debug: Facter: value for pe_major_version is still nil", > "Debug: Facter: Found no suitable resolves of 2 for pe_minor_version", > "Debug: Facter: value for pe_minor_version is still nil", > "Debug: Facter: Found no suitable resolves of 2 for pe_patch_version", > "Debug: Facter: value for pe_patch_version is still nil", > "Debug: Puppet::Type::Service::ProviderNoop: false value when expecting true", > "Debug: Puppet::Type::Service::ProviderInit: false value when expecting true", > "Debug: Puppet::Type::Service::ProviderDaemontools: file /usr/bin/svc does not exist", > "Debug: Puppet::Type::Service::ProviderDebian: file /usr/sbin/update-rc.d does not exist", > "Debug: Puppet::Type::Service::ProviderGentoo: file /sbin/rc-update does not exist", > "Debug: Puppet::Type::Service::ProviderLaunchd: file /bin/launchctl does not exist", > "Debug: Puppet::Type::Service::ProviderOpenbsd: file /usr/sbin/rcctl does not 
exist", > "Debug: Puppet::Type::Service::ProviderOpenrc: file /bin/rc-status does not exist", > "Debug: Puppet::Type::Service::ProviderRedhat: file /sbin/service does not exist", > "Debug: Puppet::Type::Service::ProviderRunit: file /usr/bin/sv does not exist", > "Debug: Puppet::Type::Service::ProviderUpstart: 0 confines (of 4) were true", > "Debug: Puppet::Type::Package::ProviderTdagent: file /opt/td-agent/usr/sbin/td-agent-gem does not exist", > "Debug: Puppet::Type::Package::ProviderSensu_gem: file /opt/sensu/embedded/bin/gem does not exist", > "Debug: Puppet::Type::Package::ProviderAix: file /usr/bin/lslpp does not exist", > "Debug: Puppet::Type::Package::ProviderDpkg: file /usr/bin/dpkg does not exist", > "Debug: Puppet::Type::Package::ProviderApt: file /usr/bin/apt-get does not exist", > "Debug: Puppet::Type::Package::ProviderAptitude: file /usr/bin/aptitude does not exist", > "Debug: Puppet::Type::Package::ProviderAptrpm: file apt-get does not exist", > "Debug: Puppet::Type::Package::ProviderSun: file /usr/bin/pkginfo does not exist", > "Debug: Puppet::Type::Package::ProviderDnf: file dnf does not exist", > "Debug: Puppet::Type::Package::ProviderFink: file /sw/bin/fink does not exist", > "Debug: Puppet::Type::Package::ProviderOpenbsd: file pkg_info does not exist", > "Debug: Puppet::Type::Package::ProviderFreebsd: file /usr/sbin/pkg_info does not exist", > "Debug: Puppet::Type::Package::ProviderHpux: file /usr/sbin/swinstall does not exist", > "Debug: Puppet::Type::Package::ProviderNim: file /usr/sbin/nimclient does not exist", > "Debug: Puppet::Type::Package::ProviderOpkg: file opkg does not exist", > "Debug: Puppet::Type::Package::ProviderPacman: file /usr/bin/pacman does not exist", > "Debug: Puppet::Type::Package::ProviderPkg: file /usr/bin/pkg does not exist", > "Debug: Puppet::Type::Package::ProviderPkgin: file pkgin does not exist", > "Debug: Puppet::Type::Package::ProviderPkgng: file /usr/local/sbin/pkg does not exist", > "Debug: Puppet::Type::Package::ProviderPortage: file /usr/bin/emerge does not exist", > "Debug: Puppet::Type::Package::ProviderPorts: file /usr/local/sbin/portupgrade does not exist", > "Debug: Puppet::Type::Package::ProviderPortupgrade: file /usr/local/sbin/portupgrade does not exist", > "Debug: Puppet::Type::Package::ProviderPuppet_gem: file /opt/puppetlabs/puppet/bin/gem does not exist", > "Debug: Puppet::Type::Package::ProviderRug: file /usr/bin/rug does not exist", > "Debug: Puppet::Type::Package::ProviderSunfreeware: file pkg-get does not exist", > "Debug: Puppet::Type::Package::ProviderTdnf: file tdnf does not exist", > "Debug: Puppet::Type::Package::ProviderUp2date: file /usr/sbin/up2date-nox does not exist", > "Debug: Puppet::Type::Package::ProviderUrpmi: file urpmi does not exist", > "Debug: Puppet::Type::Package::ProviderZypper: file /usr/bin/zypper does not exist", > "Debug: Facter: value for redis_server_version is still nil", > "Debug: Facter: value for sssd_version is still nil", > "Debug: Facter: value for rabbitmq_version is still nil", > "Debug: Facter: value for rabbitmq_nodename is still nil", > "Debug: Facter: value for cassandrarelease is still nil", > "Debug: Facter: value for cassandraminorversion is still nil", > "Debug: Facter: value for cassandrapatchversion is still nil", > "Debug: Facter: value for cassandramajorversion is still nil", > "Debug: Facter: value for mysqld_version is still nil", > "Debug: Facter: value for mysql_version is still nil", > "Debug: Facter: value for java_version is still nil", > "Debug: Facter: value for 
java_major_version is still nil", > "Debug: Facter: value for java_patch_level is still nil", > "Debug: Facter: value for java_default_home is still nil", > "Debug: Facter: value for java_libjvm_path is still nil", > "Debug: Facter: value for ssh_client_version_full is still nil", > "Debug: Facter: Found no suitable resolves of 2 for ssh_client_version_major", > "Debug: Facter: value for ssh_client_version_major is still nil", > "Debug: Facter: Found no suitable resolves of 2 for ssh_client_version_release", > "Debug: Facter: value for ssh_client_version_release is still nil", > "Debug: Facter: value for ssh_server_version_full is still nil", > "Debug: Facter: Found no suitable resolves of 2 for ssh_server_version_major", > "Debug: Facter: value for ssh_server_version_major is still nil", > "Debug: Facter: Found no suitable resolves of 2 for ssh_server_version_release", > "Debug: Facter: value for ssh_server_version_release is still nil", > "Debug: Facter: Found no suitable resolves of 2 for staging_windir", > "Debug: Facter: value for staging_windir is still nil", > "Debug: Facter: value for sensu_version is still nil", > "Debug: Facter: value for ovs_version is still nil", > "Debug: Facter: value for ovs_uuid is still nil", > "Debug: Facter: Found no suitable resolves of 2 for iptables_persistent_version", > "Debug: Facter: value for iptables_persistent_version is still nil", > "Debug: Facter: value for git_exec_path is still nil", > "Debug: Facter: value for git_version is still nil", > "Debug: Facter: value for git_html_path is still nil", > "Debug: Facter: value for netmask6_ovs_system is still nil", > "Debug: Facter: value for nic_alias is still nil", > "Debug: Facter: value for docker_group_gid is still nil", > "Debug: Facter: value for ipa_hostname is still nil", > "Debug: Facter: value for libvirt_uuid is still nil", > "Debug: Facter: Found no suitable resolves of 2 for archive_windir", > "Debug: Facter: value for archive_windir is still nil", > "Debug: Facter: value for apache_version is still nil", > "Debug: Facter: value for collectd_version is still nil", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/base/pacemaker.pp' in environment production", > "Debug: Automatically imported tripleo::profile::base::pacemaker from tripleo/profile/base/pacemaker into production", > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Debug: hiera(): Hiera JSON backend starting", > "Debug: hiera(): Looking up lookup_options in JSON backend", > "Debug: hiera(): Looking for data source docker", > "Debug: hiera(): Looking for data source heat_config_", > "Debug: hiera(): Cannot find datafile /etc/puppet/hieradata/heat_config_.json, skipping", > "Debug: hiera(): Looking for data source config_step", > "Debug: hiera(): Looking for data source controller_extraconfig", > "Debug: hiera(): Looking for data source extraconfig", > "Debug: hiera(): Looking for data source service_names", > "Debug: hiera(): Looking for data source service_configs", > "Debug: hiera(): Looking for data source controller", > "Debug: hiera(): Looking for data source bootstrap_node", > "Debug: hiera(): Looking for data source all_nodes", > "Debug: hiera(): Looking for data source vip_data", > "Debug: hiera(): Looking for data source net_ip_map", > "Debug: hiera(): Looking for data source RedHat", > "Debug: hiera(): Cannot find datafile 
/etc/puppet/hieradata/RedHat.json, skipping", > "Debug: hiera(): Looking for data source neutron_bigswitch_data", > "Debug: hiera(): Cannot find datafile /etc/puppet/hieradata/neutron_bigswitch_data.json, skipping", > "Debug: hiera(): Looking for data source neutron_cisco_data", > "Debug: hiera(): Cannot find datafile /etc/puppet/hieradata/neutron_cisco_data.json, skipping", > "Debug: hiera(): Looking for data source cisco_n1kv_data", > "Debug: hiera(): Cannot find datafile /etc/puppet/hieradata/cisco_n1kv_data.json, skipping", > "Debug: hiera(): Looking for data source midonet_data", > "Debug: hiera(): Cannot find datafile /etc/puppet/hieradata/midonet_data.json, skipping", > "Debug: hiera(): Looking for data source cisco_aci_data", > "Debug: hiera(): Cannot find datafile /etc/puppet/hieradata/cisco_aci_data.json, skipping", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::step in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::pcs_tries in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::remote_short_node_names in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::remote_node_ips in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::remote_authkey in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::remote_reconnect_interval in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::remote_monitor_interval in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::remote_tries in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::remote_try_sleep in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::cluster_recheck_interval in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::encryption in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::enable_instanceha in JSON backend", > "Debug: hiera(): Looking up step in JSON backend", > "Debug: hiera(): Looking up pcs_tries in JSON backend", > "Debug: hiera(): Looking up pacemaker_remote_short_node_names in JSON backend", > "Debug: hiera(): Looking up pacemaker_remote_node_ips in JSON backend", > "Debug: hiera(): Looking up pacemaker_remote_reconnect_interval in JSON backend", > "Debug: hiera(): Looking up pacemaker_remote_monitor_interval in JSON backend", > "Debug: hiera(): Looking up pacemaker_remote_tries in JSON backend", > "Debug: hiera(): Looking up pacemaker_remote_try_sleep in JSON backend", > "Debug: hiera(): Looking up pacemaker_cluster_recheck_interval in JSON backend", > "Debug: hiera(): Looking up tripleo::instanceha in JSON backend", > "Debug: hiera(): Looking up hacluster_pwd in JSON backend", > "Debug: hiera(): Looking up pacemaker_short_bootstrap_node_name in JSON backend", > "Debug: hiera(): Looking up enable_fencing in JSON backend", > "Debug: hiera(): Looking up pacemaker_short_node_names in JSON backend", > "Debug: hiera(): Looking up corosync_ipv6 in JSON backend", > "Debug: hiera(): Looking up corosync_token_timeout in JSON backend", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/init.pp' in environment production", > "Debug: Automatically imported pacemaker from pacemaker into production", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/params.pp' in environment production", > "Debug: Automatically imported pacemaker::params from pacemaker/params into production", > 
"Debug: importing '/etc/puppet/modules/pacemaker/manifests/install.pp' in environment production", > "Debug: Automatically imported pacemaker::install from pacemaker/install into production", > "Debug: hiera(): Looking up pacemaker::install::ensure in JSON backend", > "Debug: Resource package[pacemaker] was not determined to be defined", > "Debug: Create new resource package[pacemaker] with params {\"ensure\"=>\"present\"}", > "Debug: Resource package[pcs] was not determined to be defined", > "Debug: Create new resource package[pcs] with params {\"ensure\"=>\"present\"}", > "Debug: Resource package[fence-agents-all] was not determined to be defined", > "Debug: Create new resource package[fence-agents-all] with params {\"ensure\"=>\"present\"}", > "Debug: Resource package[pacemaker-libs] was not determined to be defined", > "Debug: Create new resource package[pacemaker-libs] with params {\"ensure\"=>\"present\"}", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/service.pp' in environment production", > "Debug: Automatically imported pacemaker::service from pacemaker/service into production", > "Debug: hiera(): Looking up pacemaker::service::ensure in JSON backend", > "Debug: hiera(): Looking up pacemaker::service::hasstatus in JSON backend", > "Debug: hiera(): Looking up pacemaker::service::hasrestart in JSON backend", > "Debug: hiera(): Looking up pacemaker::service::enable in JSON backend", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/corosync.pp' in environment production", > "Debug: Automatically imported pacemaker::corosync from pacemaker/corosync into production", > "Debug: hiera(): Looking up pacemaker::corosync::cluster_members_rrp in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::cluster_name in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::cluster_start_timeout in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::cluster_start_tries in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::cluster_start_try_sleep in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::manage_fw in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::settle_timeout in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::settle_tries in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::settle_try_sleep in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::pcsd_debug in JSON backend", > "Debug: pcmk_nodes_added: []", > "Debug: template[inline]: Bound template variables for inline template in 0.00 seconds", > "Debug: template[inline]: Interpolated template inline template in 0.00 seconds", > "Debug: hiera(): Looking up docker_enabled in JSON backend", > "Debug: importing '/etc/puppet/modules/systemd/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/systemd/manifests/systemctl/daemon_reload.pp' in environment production", > "Debug: Automatically imported systemd::systemctl::daemon_reload from systemd/systemctl/daemon_reload into production", > "Debug: importing '/etc/puppet/modules/systemd/manifests/unit_file.pp' in environment production", > "Debug: importing '/etc/puppet/modules/stdlib/manifests/init.pp' in environment production", > "Debug: Automatically imported systemd::unit_file from systemd/unit_file into production", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/stonith.pp' in environment production", > "Debug: Automatically imported pacemaker::stonith from pacemaker/stonith into 
production", > "Debug: hiera(): Looking up pacemaker::stonith::try_sleep in JSON backend", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/property.pp' in environment production", > "Debug: Automatically imported pacemaker::property from pacemaker/property into production", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/resource_defaults.pp' in environment production", > "Debug: Automatically imported pacemaker::resource_defaults from pacemaker/resource_defaults into production", > "Debug: hiera(): Looking up pacemaker::resource_defaults::defaults in JSON backend", > "Debug: hiera(): Looking up pacemaker::resource_defaults::post_success_sleep in JSON backend", > "Debug: hiera(): Looking up pacemaker::resource_defaults::tries in JSON backend", > "Debug: hiera(): Looking up pacemaker::resource_defaults::try_sleep in JSON backend", > "Debug: hiera(): Looking up pacemaker::resource_defaults::verify_on_create in JSON backend", > "Debug: hiera(): Looking up pacemaker::resource_defaults::ensure in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/pacemaker/rabbitmq_bundle.pp' in environment production", > "Debug: Automatically imported tripleo::profile::pacemaker::rabbitmq_bundle from tripleo/profile/pacemaker/rabbitmq_bundle into production", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::rabbitmq_bundle::rabbitmq_docker_image in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::rabbitmq_bundle::rabbitmq_docker_control_port in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::rabbitmq_bundle::erlang_cookie in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::rabbitmq_bundle::user_ha_queues in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::rabbitmq_bundle::rpc_scheme in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::rabbitmq_bundle::rpc_bootstrap_node in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::rabbitmq_bundle::rpc_nodes in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::rabbitmq_bundle::notify_scheme in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::rabbitmq_bundle::notify_bootstrap_node in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::rabbitmq_bundle::notify_nodes in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::rabbitmq_bundle::enable_internal_tls in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::rabbitmq_bundle::pcs_tries in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::rabbitmq_bundle::step in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::rabbitmq_bundle::control_port in JSON backend", > "Debug: hiera(): Looking up rabbitmq::erlang_cookie in JSON backend", > "Debug: hiera(): Looking up rabbitmq::nr_ha_queues in JSON backend", > "Debug: hiera(): Looking up oslo_messaging_rpc_scheme in JSON backend", > "Debug: hiera(): Looking up oslo_messaging_rpc_short_bootstrap_node_name in JSON backend", > "Debug: hiera(): Looking up oslo_messaging_rpc_node_names in JSON backend", > "Debug: hiera(): Looking up oslo_messaging_notify_scheme in JSON backend", > "Debug: hiera(): Looking up oslo_messaging_notify_short_bootstrap_node_name in JSON backend", > "Debug: hiera(): Looking up oslo_messaging_notify_node_names in JSON backend", > "Debug: hiera(): Looking up enable_internal_tls in JSON 
backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/base/rabbitmq.pp' in environment production", > "Debug: Automatically imported tripleo::profile::base::rabbitmq from tripleo/profile/base/rabbitmq into production", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::certificate_specs in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::config_variables in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::enable_internal_tls in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::environment in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::ssl_versions in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::inter_node_ciphers in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::inet_dist_interface in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::ipv6 in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::kernel_variables in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::rpc_scheme in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::rpc_nodes in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::rpc_bootstrap_node in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::notify_scheme in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::notify_nodes in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::notify_bootstrap_node in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::rabbitmq_pass in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::rabbitmq_user in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::stack_action in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::step in JSON backend", > "Debug: hiera(): Looking up rabbitmq_config_variables in JSON backend", > "Debug: hiera(): Looking up rabbitmq_environment in JSON backend", > "Debug: hiera(): Looking up rabbitmq::interface in JSON backend", > "Debug: hiera(): Looking up internal_api in JSON backend", > "Debug: hiera(): Looking up rabbit_ipv6 in JSON backend", > "Debug: hiera(): Looking up rabbitmq_kernel_variables in JSON backend", > "Debug: hiera(): Looking up rabbitmq::default_pass in JSON backend", > "Debug: hiera(): Looking up rabbitmq::default_user in JSON backend", > "Debug: hiera(): Looking up stack_action in JSON backend", > "Debug: hiera(): Looking up rabbitmq::service_manage in JSON backend", > "Debug: importing '/etc/puppet/modules/rabbitmq/manifests/init.pp' in environment production", > "Debug: Automatically imported rabbitmq from rabbitmq into production", > "Debug: importing '/etc/puppet/modules/rabbitmq/manifests/params.pp' in environment production", > "Debug: Automatically imported rabbitmq::params from rabbitmq/params into production", > "Debug: hiera(): Looking up rabbitmq::admin_enable in JSON backend", > "Debug: hiera(): Looking up rabbitmq::cluster_node_type in JSON backend", > "Debug: hiera(): Looking up rabbitmq::config in JSON backend", > "Debug: hiera(): Looking up rabbitmq::config_path in JSON backend", > "Debug: hiera(): Looking up rabbitmq::config_ranch in JSON backend", > "Debug: hiera(): Looking up rabbitmq::config_stomp in JSON backend", > "Debug: 
hiera(): Looking up rabbitmq::config_shovel in JSON backend", > "Debug: hiera(): Looking up rabbitmq::config_shovel_statics in JSON backend", > "Debug: hiera(): Looking up rabbitmq::delete_guest_user in JSON backend", > "Debug: hiera(): Looking up rabbitmq::env_config in JSON backend", > "Debug: hiera(): Looking up rabbitmq::env_config_path in JSON backend", > "Debug: hiera(): Looking up rabbitmq::management_ip_address in JSON backend", > "Debug: hiera(): Looking up rabbitmq::management_port in JSON backend", > "Debug: hiera(): Looking up rabbitmq::management_ssl in JSON backend", > "Debug: hiera(): Looking up rabbitmq::management_hostname in JSON backend", > "Debug: hiera(): Looking up rabbitmq::node_ip_address in JSON backend", > "Debug: hiera(): Looking up rabbitmq::package_apt_pin in JSON backend", > "Debug: hiera(): Looking up rabbitmq::package_ensure in JSON backend", > "Debug: hiera(): Looking up rabbitmq::package_gpg_key in JSON backend", > "Debug: hiera(): Looking up rabbitmq::package_name in JSON backend", > "Debug: hiera(): Looking up rabbitmq::package_source in JSON backend", > "Debug: hiera(): Looking up rabbitmq::package_provider in JSON backend", > "Debug: hiera(): Looking up rabbitmq::repos_ensure in JSON backend", > "Debug: hiera(): Looking up rabbitmq::manage_python in JSON backend", > "Debug: hiera(): Looking up rabbitmq::rabbitmq_user in JSON backend", > "Debug: hiera(): Looking up rabbitmq::rabbitmq_group in JSON backend", > "Debug: hiera(): Looking up rabbitmq::rabbitmq_home in JSON backend", > "Debug: hiera(): Looking up rabbitmq::port in JSON backend", > "Debug: hiera(): Looking up rabbitmq::tcp_keepalive in JSON backend", > "Debug: hiera(): Looking up rabbitmq::tcp_backlog in JSON backend", > "Debug: hiera(): Looking up rabbitmq::tcp_sndbuf in JSON backend", > "Debug: hiera(): Looking up rabbitmq::tcp_recbuf in JSON backend", > "Debug: hiera(): Looking up rabbitmq::heartbeat in JSON backend", > "Debug: hiera(): Looking up rabbitmq::service_ensure in JSON backend", > "Debug: hiera(): Looking up rabbitmq::service_name in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_only in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_cacert in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_cert in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_key in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_depth in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_cert_password in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_port in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_interface in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_management_port in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_stomp_port in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_verify in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_fail_if_no_peer_cert in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_management_verify in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_management_fail_if_no_peer_cert in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_versions in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_secure_renegotiate in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_reuse_sessions in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_honor_cipher_order in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_dhfile in JSON 
backend", > "Debug: hiera(): Looking up rabbitmq::ssl_ciphers in JSON backend", > "Debug: hiera(): Looking up rabbitmq::stomp_ensure in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ldap_auth in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ldap_server in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ldap_user_dn_pattern in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ldap_other_bind in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ldap_use_ssl in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ldap_port in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ldap_log in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ldap_config_variables in JSON backend", > "Debug: hiera(): Looking up rabbitmq::stomp_port in JSON backend", > "Debug: hiera(): Looking up rabbitmq::stomp_ssl_only in JSON backend", > "Debug: hiera(): Looking up rabbitmq::wipe_db_on_cookie_change in JSON backend", > "Debug: hiera(): Looking up rabbitmq::cluster_partition_handling in JSON backend", > "Debug: hiera(): Looking up rabbitmq::file_limit in JSON backend", > "Debug: hiera(): Looking up rabbitmq::config_management_variables in JSON backend", > "Debug: hiera(): Looking up rabbitmq::config_additional_variables in JSON backend", > "Debug: hiera(): Looking up rabbitmq::auth_backends in JSON backend", > "Debug: hiera(): Looking up rabbitmq::key_content in JSON backend", > "Debug: hiera(): Looking up rabbitmq::collect_statistics_interval in JSON backend", > "Debug: hiera(): Looking up rabbitmq::inetrc_config in JSON backend", > "Debug: hiera(): Looking up rabbitmq::inetrc_config_path in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_erl_dist in JSON backend", > "Debug: hiera(): Looking up rabbitmq::rabbitmqadmin_package in JSON backend", > "Debug: hiera(): Looking up rabbitmq::archive_options in JSON backend", > "Debug: hiera(): Looking up rabbitmq::loopback_users in JSON backend", > "Debug: importing '/etc/puppet/modules/rabbitmq/manifests/install.pp' in environment production", > "Debug: Automatically imported rabbitmq::install from rabbitmq/install into production", > "Debug: importing '/etc/puppet/modules/rabbitmq/manifests/config.pp' in environment production", > "Debug: Automatically imported rabbitmq::config from rabbitmq/config into production", > "Debug: Scope(Class[Rabbitmq::Config]): Retrieving template rabbitmq/rabbitmq.config.erb", > "Debug: template[/etc/puppet/modules/rabbitmq/templates/rabbitmq.config.erb]: Bound template variables for /etc/puppet/modules/rabbitmq/templates/rabbitmq.config.erb in 0.00 seconds", > "Debug: template[/etc/puppet/modules/rabbitmq/templates/rabbitmq.config.erb]: Interpolated template /etc/puppet/modules/rabbitmq/templates/rabbitmq.config.erb in 0.00 seconds", > "Debug: Scope(Class[Rabbitmq::Config]): Retrieving template rabbitmq/rabbitmq-env.conf.erb", > "Debug: template[/etc/puppet/modules/rabbitmq/templates/rabbitmq-env.conf.erb]: Bound template variables for /etc/puppet/modules/rabbitmq/templates/rabbitmq-env.conf.erb in 0.00 seconds", > "Debug: template[/etc/puppet/modules/rabbitmq/templates/rabbitmq-env.conf.erb]: Interpolated template /etc/puppet/modules/rabbitmq/templates/rabbitmq-env.conf.erb in 0.00 seconds", > "Debug: Scope(Class[Rabbitmq::Config]): Retrieving template rabbitmq/inetrc.erb", > "Debug: template[/etc/puppet/modules/rabbitmq/templates/inetrc.erb]: Bound template variables for /etc/puppet/modules/rabbitmq/templates/inetrc.erb in 0.00 seconds", > "Debug: 
template[/etc/puppet/modules/rabbitmq/templates/inetrc.erb]: Interpolated template /etc/puppet/modules/rabbitmq/templates/inetrc.erb in 0.00 seconds", > "Debug: Scope(Class[Rabbitmq::Config]): Retrieving template rabbitmq/rabbitmqadmin.conf.erb", > "Debug: template[/etc/puppet/modules/rabbitmq/templates/rabbitmqadmin.conf.erb]: Bound template variables for /etc/puppet/modules/rabbitmq/templates/rabbitmqadmin.conf.erb in 0.00 seconds", > "Debug: template[/etc/puppet/modules/rabbitmq/templates/rabbitmqadmin.conf.erb]: Interpolated template /etc/puppet/modules/rabbitmq/templates/rabbitmqadmin.conf.erb in 0.00 seconds", > "Debug: Scope(Class[Rabbitmq::Config]): Retrieving template rabbitmq/rabbitmq-server.service.d/limits.conf", > "Debug: template[/etc/puppet/modules/rabbitmq/templates/rabbitmq-server.service.d/limits.conf]: Bound template variables for /etc/puppet/modules/rabbitmq/templates/rabbitmq-server.service.d/limits.conf in 0.00 seconds", > "Debug: template[/etc/puppet/modules/rabbitmq/templates/rabbitmq-server.service.d/limits.conf]: Interpolated template /etc/puppet/modules/rabbitmq/templates/rabbitmq-server.service.d/limits.conf in 0.00 seconds", > "Debug: Scope(Class[Rabbitmq::Config]): Retrieving template rabbitmq/limits.conf", > "Debug: template[/etc/puppet/modules/rabbitmq/templates/limits.conf]: Bound template variables for /etc/puppet/modules/rabbitmq/templates/limits.conf in 0.00 seconds", > "Debug: template[/etc/puppet/modules/rabbitmq/templates/limits.conf]: Interpolated template /etc/puppet/modules/rabbitmq/templates/limits.conf in 0.00 seconds", > "Debug: importing '/etc/puppet/modules/rabbitmq/manifests/service.pp' in environment production", > "Debug: Automatically imported rabbitmq::service from rabbitmq/service into production", > "Debug: hiera(): Looking up rabbitmq::service::service_ensure in JSON backend", > "Debug: hiera(): Looking up rabbitmq::service::service_manage in JSON backend", > "Debug: hiera(): Looking up rabbitmq::service::service_name in JSON backend", > "Debug: importing '/etc/puppet/modules/rabbitmq/manifests/management.pp' in environment production", > "Debug: Automatically imported rabbitmq::management from rabbitmq/management into production", > "Debug: hiera(): Looking up veritas_hyperscale_controller_enabled in JSON backend", > "Debug: hiera(): Looking up oslo_messaging_rpc_short_node_names in JSON backend", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/resource/bundle.pp' in environment production", > "Debug: Automatically imported pacemaker::resource::bundle from pacemaker/resource/bundle into production", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/resource/ocf.pp' in environment production", > "Debug: Automatically imported pacemaker::resource::ocf from pacemaker/resource/ocf into production", > "Debug: hiera(): Looking up systemd::service_limits in JSON backend", > "Debug: hiera(): Looking up systemd::manage_resolved in JSON backend", > "Debug: hiera(): Looking up systemd::resolved_ensure in JSON backend", > "Debug: hiera(): Looking up systemd::manage_networkd in JSON backend", > "Debug: hiera(): Looking up systemd::networkd_ensure in JSON backend", > "Debug: hiera(): Looking up systemd::manage_timesyncd in JSON backend", > "Debug: hiera(): Looking up systemd::timesyncd_ensure in JSON backend", > "Debug: hiera(): Looking up systemd::ntp_server in JSON backend", > "Debug: hiera(): Looking up systemd::fallback_ntp_server in JSON backend", > "Debug: hiera(): Looking up pacemaker::resource::bundle::deep_compare 
in JSON backend", > "Debug: hiera(): Looking up pacemaker::resource::bundle::update_settle_secs in JSON backend", > "Debug: hiera(): Looking up pacemaker::resource::ocf::deep_compare in JSON backend", > "Debug: hiera(): Looking up pacemaker::resource::ocf::update_settle_secs in JSON backend", > "Debug: Adding relationship from Service[pcsd] to Exec[auth-successful-across-all-nodes] with 'before'", > "Debug: Adding relationship from Exec[reauthenticate-across-all-nodes] to Exec[wait-for-settle] with 'before'", > "Debug: Adding relationship from Exec[auth-successful-across-all-nodes] to Exec[wait-for-settle] with 'before'", > "Debug: Adding relationship from Exec[reauthenticate-across-all-nodes] to Exec[Create Cluster tripleo_cluster] with 'before'", > "Debug: Adding relationship from Exec[auth-successful-across-all-nodes] to Exec[Create Cluster tripleo_cluster] with 'before'", > "Debug: Adding relationship from Exec[Create Cluster tripleo_cluster] to Exec[Start Cluster tripleo_cluster] with 'before'", > "Debug: Adding relationship from Exec[Start Cluster tripleo_cluster] to Service[corosync] with 'before'", > "Debug: Adding relationship from Exec[Start Cluster tripleo_cluster] to Service[pacemaker] with 'before'", > "Debug: Adding relationship from Service[corosync] to Exec[wait-for-settle] with 'before'", > "Debug: Adding relationship from Service[pacemaker] to Exec[wait-for-settle] with 'before'", > "Debug: Adding relationship from File[etc-pacemaker] to File[etc-pacemaker-authkey] with 'before'", > "Debug: Adding relationship from Exec[auth-successful-across-all-nodes] to File[etc-pacemaker-authkey] with 'before'", > "Debug: Adding relationship from Exec[Create Cluster tripleo_cluster] to File[etc-pacemaker-authkey] with 'before'", > "Debug: Adding relationship from File[etc-pacemaker-authkey] to Exec[Start Cluster tripleo_cluster] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_resource[rabbitmq] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_property[property--stonith-enabled] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_property[property-controller-0-rabbitmq-role] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_bundle[rabbitmq-bundle] with 'before'", > "Debug: Adding relationship from Class[Pacemaker] to Class[Pacemaker::Corosync] with 'before'", > "Debug: Adding relationship from File[/etc/systemd/system/resource-agents-deps.target.wants] to Systemd::Unit_file[docker.service] with 'before'", > "Debug: Adding relationship from Systemd::Unit_file[docker.service] to Class[Systemd::Systemctl::Daemon_reload] with 'notify'", > "Debug: Adding relationship from File[/etc/systemd/system/rabbitmq-server.service.d] to File[/etc/systemd/system/rabbitmq-server.service.d/limits.conf] with 'before'", > "Debug: Adding relationship from Class[Rabbitmq::Install] to Class[Rabbitmq::Config] with 'before'", > "Debug: Adding relationship from Class[Rabbitmq::Config] to Class[Rabbitmq::Service] with 'notify'", > "Debug: Adding relationship from Class[Rabbitmq::Service] to Class[Rabbitmq::Management] with 'before'", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 1.73 seconds", > "Debug: puppet-pacemaker: initialize()", > "Debug: Creating default schedules", > "Info: Applying configuration version '1537533079'", > "Debug: /Stage[main]/Pacemaker/before: subscribes to Class[Pacemaker::Corosync]", > "Debug: 
/Stage[main]/Pacemaker::Service/Service[pcsd]/require: subscribes to Class[Pacemaker::Install]", > "Debug: /Stage[main]/Pacemaker::Service/Service[pcsd]/before: subscribes to Exec[auth-successful-across-all-nodes]", > "Debug: /Stage[main]/Pacemaker::Service/Service[corosync]/require: subscribes to Class[Pacemaker::Install]", > "Debug: /Stage[main]/Pacemaker::Service/Service[corosync]/before: subscribes to Exec[wait-for-settle]", > "Debug: /Stage[main]/Pacemaker::Service/Service[pacemaker]/require: subscribes to Class[Pacemaker::Install]", > "Debug: /Stage[main]/Pacemaker::Service/Service[pacemaker]/before: subscribes to Exec[wait-for-settle]", > "Debug: /Stage[main]/Pacemaker::Corosync/File_line[pcsd_debug_ini]/require: subscribes to Class[Pacemaker::Install]", > "Debug: /Stage[main]/Pacemaker::Corosync/File_line[pcsd_debug_ini]/before: subscribes to Service[pcsd]", > "Debug: /Stage[main]/Pacemaker::Corosync/File_line[pcsd_debug_ini]/notify: subscribes to Service[pcsd]", > "Debug: /Stage[main]/Pacemaker::Corosync/User[hacluster]/require: subscribes to Class[Pacemaker::Install]", > "Debug: /Stage[main]/Pacemaker::Corosync/User[hacluster]/notify: subscribes to Exec[reauthenticate-across-all-nodes]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[reauthenticate-across-all-nodes]/before: subscribes to Exec[wait-for-settle]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[reauthenticate-across-all-nodes]/before: subscribes to Exec[Create Cluster tripleo_cluster]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[auth-successful-across-all-nodes]/require: subscribes to User[hacluster]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[auth-successful-across-all-nodes]/before: subscribes to Exec[wait-for-settle]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[auth-successful-across-all-nodes]/before: subscribes to Exec[Create Cluster tripleo_cluster]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[auth-successful-across-all-nodes]/before: subscribes to File[etc-pacemaker-authkey]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Create Cluster tripleo_cluster]/require: subscribes to Class[Pacemaker::Install]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Create Cluster tripleo_cluster]/before: subscribes to Exec[Start Cluster tripleo_cluster]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Create Cluster tripleo_cluster]/before: subscribes to File[etc-pacemaker-authkey]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster tripleo_cluster]/require: subscribes to Exec[Create Cluster tripleo_cluster]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster tripleo_cluster]/before: subscribes to Service[corosync]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster tripleo_cluster]/before: subscribes to Service[pacemaker]", > "Debug: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker]/before: subscribes to File[etc-pacemaker-authkey]", > "Debug: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker-authkey]/before: subscribes to Exec[Start Cluster tripleo_cluster]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_resource[rabbitmq]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_property[property--stonith-enabled]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_property[property-controller-0-rabbitmq-role]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_bundle[rabbitmq-bundle]", > 
"Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/File[/etc/systemd/system/resource-agents-deps.target.wants]/before: subscribes to Systemd::Unit_file[docker.service]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/before: subscribes to Class[Pacemaker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/notify: subscribes to Class[Systemd::Systemctl::Daemon_reload]", > "Debug: /Stage[main]/Rabbitmq::Install/before: subscribes to Class[Rabbitmq::Config]", > "Debug: /Stage[main]/Rabbitmq::Install/Package[rabbitmq-server]/notify: subscribes to Class[Rabbitmq::Service]", > "Debug: /Stage[main]/Rabbitmq::Config/notify: subscribes to Class[Rabbitmq::Service]", > "Debug: /Stage[main]/Rabbitmq::Config/File[rabbitmq.config]/notify: subscribes to Class[Rabbitmq::Service]", > "Debug: /Stage[main]/Rabbitmq::Config/File[rabbitmq-env.config]/notify: subscribes to Class[Rabbitmq::Service]", > "Debug: /Stage[main]/Rabbitmq::Config/File[rabbitmq-inetrc]/notify: subscribes to Class[Rabbitmq::Service]", > "Debug: /Stage[main]/Rabbitmq::Config/File[rabbitmqadmin.conf]/require: subscribes to File[/etc/rabbitmq]", > "Debug: /Stage[main]/Rabbitmq::Config/File[/etc/systemd/system/rabbitmq-server.service.d]/before: subscribes to File[/etc/systemd/system/rabbitmq-server.service.d/limits.conf]", > "Debug: /Stage[main]/Rabbitmq::Config/File[/etc/systemd/system/rabbitmq-server.service.d/limits.conf]/notify: subscribes to Exec[rabbitmq-systemd-reload]", > "Debug: /Stage[main]/Rabbitmq::Config/Exec[rabbitmq-systemd-reload]/notify: subscribes to Class[Rabbitmq::Service]", > "Debug: /Stage[main]/Rabbitmq::Config/File[/etc/security/limits.d/rabbitmq-server.conf]/notify: subscribes to Class[Rabbitmq::Service]", > "Debug: /Stage[main]/Rabbitmq::Config/Rabbitmq_erlang_cookie[/var/lib/rabbitmq/.erlang.cookie]/before: subscribes to File[rabbitmq.config]", > "Debug: /Stage[main]/Rabbitmq::Config/Rabbitmq_erlang_cookie[/var/lib/rabbitmq/.erlang.cookie]/notify: subscribes to Class[Rabbitmq::Service]", > "Debug: /Stage[main]/Rabbitmq::Service/before: subscribes to Class[Rabbitmq::Management]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/File[/var/lib/rabbitmq/.erlang.cookie]/require: subscribes to Class[Rabbitmq]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Pacemaker::Property[rabbitmq-role-controller-0]/before: subscribes to Pacemaker::Resource::Bundle[rabbitmq-bundle]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Pacemaker::Resource::Ocf[rabbitmq]/require: subscribes to Class[Rabbitmq]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Pacemaker::Resource::Ocf[rabbitmq]/require: subscribes to Pacemaker::Resource::Bundle[rabbitmq-bundle]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Pacemaker::Resource::Ocf[rabbitmq]/before: subscribes to Exec[rabbitmq-ready]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]/notify: subscribes to Class[Systemd::Systemctl::Daemon_reload]", > "Debug: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker]: Adding autorequire relationship with User[hacluster]", > "Debug: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker-authkey]: Adding autorequire relationship with User[hacluster]", > "Debug: /Stage[main]/Rabbitmq::Config/File[/etc/rabbitmq/ssl]: Adding autorequire relationship 
with File[/etc/rabbitmq]", > "Debug: /Stage[main]/Rabbitmq::Config/File[rabbitmq.config]: Adding autorequire relationship with File[/etc/rabbitmq]", > "Debug: /Stage[main]/Rabbitmq::Config/File[rabbitmq-env.config]: Adding autorequire relationship with File[/etc/rabbitmq]", > "Debug: /Stage[main]/Rabbitmq::Config/File[rabbitmq-inetrc]: Adding autorequire relationship with File[/etc/rabbitmq]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]: Adding autorequire relationship with File[/etc/systemd/system/resource-agents-deps.target.wants]", > "Debug: Stage[main]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Stage[main]: Resource is being skipped, unscheduling all events", > "Debug: Class[Settings]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Settings]: Resource is being skipped, unscheduling all events", > "Debug: Class[Main]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Main]: Resource is being skipped, unscheduling all events", > "Debug: Class[Tripleo::Profile::Base::Pacemaker]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Tripleo::Profile::Base::Pacemaker]: Resource is being skipped, unscheduling all events", > "Debug: Class[Pacemaker::Params]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Pacemaker::Params]: Resource is being skipped, unscheduling all events", > "Debug: Class[Pacemaker::Install]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Pacemaker::Install]: Resource is being skipped, unscheduling all events", > "Debug: Prefetching yum resources for package", > "Debug: Executing '/usr/bin/rpm -qa --nosignature --nodigest --qf '%{NAME} %|EPOCH?{%{EPOCH}}:{0}| %{VERSION} %{RELEASE} %{ARCH}\\n''", > "Debug: /Stage[main]/Pacemaker::Install/Package[pacemaker]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Pacemaker::Install/Package[pacemaker]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Install/Package[pcs]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, 
pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Pacemaker::Install/Package[pcs]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Install/Package[fence-agents-all]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Pacemaker::Install/Package[fence-agents-all]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Install/Package[pacemaker-libs]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Pacemaker::Install/Package[pacemaker-libs]: Resource is being skipped, unscheduling all events", > "Debug: Class[Pacemaker::Service]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Pacemaker::Service]: Resource is being skipped, unscheduling all events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Pacemaker/File[/etc/systemd/system/resource-agents-deps.target.wants]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/File[/etc/systemd/system/resource-agents-deps.target.wants]: The container Class[Tripleo::Profile::Base::Pacemaker] will propagate my refresh event", > "Debug: Systemd::Unit_file[docker.service]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Systemd::Unit_file[docker.service]: Resource is being skipped, unscheduling all events", > "Debug: Class[Pacemaker::Stonith]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Pacemaker::Stonith]: Resource is being skipped, unscheduling all events", > "Debug: Pacemaker::Property[Disable STONITH]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Pacemaker::Property[Disable STONITH]: Resource is being skipped, unscheduling all events", > "Debug: Class[Pacemaker::Resource_defaults]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Pacemaker::Resource_defaults]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Resource_defaults/Pcmk_resource_default[resource-stickiness]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, 
pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Pacemaker::Resource_defaults/Pcmk_resource_default[resource-stickiness]: Resource is being skipped, unscheduling all events", > "Debug: Class[Tripleo::Profile::Pacemaker::Rabbitmq_bundle]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Tripleo::Profile::Pacemaker::Rabbitmq_bundle]: Resource is being skipped, unscheduling all events", > "Debug: Class[Tripleo::Profile::Base::Rabbitmq]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Tripleo::Profile::Base::Rabbitmq]: Resource is being skipped, unscheduling all events", > "Debug: Class[Rabbitmq::Params]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Rabbitmq::Params]: Resource is being skipped, unscheduling all events", > "Debug: Class[Rabbitmq]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Rabbitmq]: Resource is being skipped, unscheduling all events", > "Debug: Class[Rabbitmq::Install]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Rabbitmq::Install]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Rabbitmq::Install/Package[rabbitmq-server]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Rabbitmq::Install/Package[rabbitmq-server]: Resource is being skipped, unscheduling all events", > "Debug: Class[Rabbitmq::Config]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Rabbitmq::Config]: Resource is being skipped, unscheduling all events", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/rabbitmq]/owner: owner changed 'rabbitmq' to 'root'", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/rabbitmq]/group: group changed 'rabbitmq' to 'root'", > "Debug: /Stage[main]/Rabbitmq::Config/File[/etc/rabbitmq]: The container Class[Rabbitmq::Config] will propagate my refresh event", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/rabbitmq/ssl]/ensure: created", > "Debug: /Stage[main]/Rabbitmq::Config/File[/etc/rabbitmq/ssl]: The container Class[Rabbitmq::Config] will propagate my refresh event", > "Notice: 
/Stage[main]/Rabbitmq::Config/File[rabbitmq-env.config]/ensure: defined content as '{md5}d2eee02b74d42601e57574435e10a026'", > "Debug: /Stage[main]/Rabbitmq::Config/File[rabbitmq-env.config]: The container Class[Rabbitmq::Config] will propagate my refresh event", > "Info: /Stage[main]/Rabbitmq::Config/File[rabbitmq-env.config]: Scheduling refresh of Class[Rabbitmq::Service]", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq-inetrc]/ensure: defined content as '{md5}12f8d1a1f9f57f23c1be6c7bf2286e73'", > "Debug: /Stage[main]/Rabbitmq::Config/File[rabbitmq-inetrc]: The container Class[Rabbitmq::Config] will propagate my refresh event", > "Info: /Stage[main]/Rabbitmq::Config/File[rabbitmq-inetrc]: Scheduling refresh of Class[Rabbitmq::Service]", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmqadmin.conf]/ensure: defined content as '{md5}44d4ef5cb86ab30e6127e83939ef09c4'", > "Debug: /Stage[main]/Rabbitmq::Config/File[rabbitmqadmin.conf]: The container Class[Rabbitmq::Config] will propagate my refresh event", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/systemd/system/rabbitmq-server.service.d]/ensure: created", > "Debug: /Stage[main]/Rabbitmq::Config/File[/etc/systemd/system/rabbitmq-server.service.d]: The container Class[Rabbitmq::Config] will propagate my refresh event", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/systemd/system/rabbitmq-server.service.d/limits.conf]/ensure: defined content as '{md5}91d370d2c5a1af171c9d5b5985fca733'", > "Info: /Stage[main]/Rabbitmq::Config/File[/etc/systemd/system/rabbitmq-server.service.d/limits.conf]: Scheduling refresh of Exec[rabbitmq-systemd-reload]", > "Debug: /Stage[main]/Rabbitmq::Config/File[/etc/systemd/system/rabbitmq-server.service.d/limits.conf]: The container Class[Rabbitmq::Config] will propagate my refresh event", > "Debug: /Stage[main]/Rabbitmq::Config/Exec[rabbitmq-systemd-reload]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Rabbitmq::Config/Exec[rabbitmq-systemd-reload]: Resource is being skipped, unscheduling all events", > "Info: /Stage[main]/Rabbitmq::Config/Exec[rabbitmq-systemd-reload]: Unscheduling all events on Exec[rabbitmq-systemd-reload]", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/security/limits.d/rabbitmq-server.conf]/ensure: defined content as '{md5}1030abc4db405b5f2969643e99bc7435'", > "Debug: /Stage[main]/Rabbitmq::Config/File[/etc/security/limits.d/rabbitmq-server.conf]: The container Class[Rabbitmq::Config] will propagate my refresh event", > "Info: /Stage[main]/Rabbitmq::Config/File[/etc/security/limits.d/rabbitmq-server.conf]: Scheduling refresh of Class[Rabbitmq::Service]", > "Debug: /Stage[main]/Rabbitmq::Config/Rabbitmq_erlang_cookie[/var/lib/rabbitmq/.erlang.cookie]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Rabbitmq::Config/Rabbitmq_erlang_cookie[/var/lib/rabbitmq/.erlang.cookie]: Resource is being skipped, unscheduling all events", > "Info: Computing checksum on file /etc/rabbitmq/rabbitmq.config", > "Info: /Stage[main]/Rabbitmq::Config/File[rabbitmq.config]: Filebucketed /etc/rabbitmq/rabbitmq.config to puppet with sum b346ec0a8320f85f795bf612f6b02da7", 
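[The "Computing checksum" and "Filebucketed" entries above record Puppet saving the previous /etc/rabbitmq/rabbitmq.config into its filebucket, keyed by md5 sum, before the content change logged in the next entry. A minimal sketch of recovering that pre-change file on the node for comparison; the `puppet filebucket` invocation and the node-local bucket are assumptions based on the vardir /var/lib/puppet shown later in this log, while the checksum is taken verbatim from the Filebucketed line:]

    # Print the saved pre-change copy, keyed by the md5 sum from the
    # Filebucketed entry above; -l reads the node-local filebucket.
    puppet filebucket -l get b346ec0a8320f85f795bf612f6b02da7

    # Or restore it to a scratch path and diff against the file Puppet wrote.
    puppet filebucket -l restore /tmp/rabbitmq.config.orig b346ec0a8320f85f795bf612f6b02da7
    diff /tmp/rabbitmq.config.orig /etc/rabbitmq/rabbitmq.config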
> "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq.config]/content: content changed '{md5}b346ec0a8320f85f795bf612f6b02da7' to '{md5}35367bd5f007dc7f00cc3ce2285bbe67'", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq.config]/owner: owner changed 'rabbitmq' to 'root'", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq.config]/mode: mode changed '0644' to '0640'", > "Debug: /Stage[main]/Rabbitmq::Config/File[rabbitmq.config]: The container Class[Rabbitmq::Config] will propagate my refresh event", > "Info: /Stage[main]/Rabbitmq::Config/File[rabbitmq.config]: Scheduling refresh of Class[Rabbitmq::Service]", > "Info: Class[Rabbitmq::Config]: Unscheduling all events on Class[Rabbitmq::Config]", > "Debug: Class[Rabbitmq::Service]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Rabbitmq::Service]: Resource is being skipped, unscheduling all events", > "Info: Class[Rabbitmq::Service]: Unscheduling all events on Class[Rabbitmq::Service]", > "Debug: Class[Rabbitmq::Management]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Rabbitmq::Management]: Resource is being skipped, unscheduling all events", > "Info: Computing checksum on file /var/lib/rabbitmq/.erlang.cookie", > "Info: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/File[/var/lib/rabbitmq/.erlang.cookie]: Filebucketed /var/lib/rabbitmq/.erlang.cookie to puppet with sum d316cb7238280edea9880e9fc4fa179c", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/File[/var/lib/rabbitmq/.erlang.cookie]/content: content changed '{md5}d316cb7238280edea9880e9fc4fa179c' to '{md5}ae2ac7298a94a6048a768f61d6668bab'", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/File[/var/lib/rabbitmq/.erlang.cookie]: The container Class[Tripleo::Profile::Pacemaker::Rabbitmq_bundle] will propagate my refresh event", > "Debug: Pacemaker::Property[rabbitmq-role-controller-0]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Pacemaker::Property[rabbitmq-role-controller-0]: Resource is being skipped, unscheduling all events", > "Debug: Class[Systemd]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Systemd]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]/mode: Not managing symlink mode", > "Notice: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]/ensure: created", > "Info: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]: 
Scheduling refresh of Class[Systemd::Systemctl::Daemon_reload]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]: The container Systemd::Unit_file[docker.service] will propagate my refresh event", > "Info: Systemd::Unit_file[docker.service]: Unscheduling all events on Systemd::Unit_file[docker.service]", > "Info: Class[Tripleo::Profile::Base::Pacemaker]: Unscheduling all events on Class[Tripleo::Profile::Base::Pacemaker]", > "Debug: Class[Pacemaker]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Pacemaker]: Resource is being skipped, unscheduling all events", > "Debug: Class[Pacemaker::Corosync]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Pacemaker::Corosync]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Service/Service[pcsd]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Pacemaker::Service/Service[pcsd]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Corosync/User[hacluster]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Pacemaker::Corosync/User[hacluster]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[reauthenticate-across-all-nodes]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[reauthenticate-across-all-nodes]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[auth-successful-across-all-nodes]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[auth-successful-across-all-nodes]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Create Cluster tripleo_cluster]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Create Cluster tripleo_cluster]: Resource is being skipped, unscheduling all events", > "Notice: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker]/ensure: created", > "Debug: 
/Stage[main]/Pacemaker::Corosync/File[etc-pacemaker]: The container Class[Pacemaker::Corosync] will propagate my refresh event", > "Notice: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker-authkey]/ensure: defined content as '{md5}c3147d7557e35ca11703708df4a6bdfa'", > "Debug: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker-authkey]: The container Class[Pacemaker::Corosync] will propagate my refresh event", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster tripleo_cluster]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster tripleo_cluster]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Service/Service[corosync]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Pacemaker::Service/Service[corosync]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Service/Service[pacemaker]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Pacemaker::Service/Service[pacemaker]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]: Resource is being skipped, unscheduling all events", > "Info: Class[Pacemaker::Corosync]: Unscheduling all events on Class[Pacemaker::Corosync]", > "Debug: Class[Systemd::Systemctl::Daemon_reload]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Systemd::Systemctl::Daemon_reload]: Resource is being skipped, unscheduling all events", > "Info: Class[Systemd::Systemctl::Daemon_reload]: Unscheduling all events on Class[Systemd::Systemctl::Daemon_reload]", > "Debug: /Stage[main]/Systemd::Systemctl::Daemon_reload/Exec[systemctl-daemon-reload]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Systemd::Systemctl::Daemon_reload/Exec[systemctl-daemon-reload]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-ra2onh returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-ra2onh property show | grep stonith-enabled | grep false > /dev/null 2>&1", > "Debug: property exists: property show | grep stonith-enabled | grep false > 
/dev/null 2>&1 -> ", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1bxfgxm returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1bxfgxm property show | grep rabbitmq-role | grep controller-0 | grep true > /dev/null 2>&1", > "Debug: property exists: property show | grep rabbitmq-role | grep controller-0 | grep true > /dev/null 2>&1 -> false", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-e8evcj returned ", > "Debug: try 1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-e8evcj property set --node controller-0 rabbitmq-role=true", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-e8evcj diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180921-8-e8evcj.orig returned 0 -> CIB updated", > "Debug: property create: property set --node controller-0 rabbitmq-role=true -> ", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Pacemaker::Property[rabbitmq-role-controller-0]/Pcmk_property[property-controller-0-rabbitmq-role]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Pacemaker::Property[rabbitmq-role-controller-0]/Pcmk_property[property-controller-0-rabbitmq-role]: The container Pacemaker::Property[rabbitmq-role-controller-0] will propagate my refresh event", > "Info: Pacemaker::Property[rabbitmq-role-controller-0]: Unscheduling all events on Pacemaker::Property[rabbitmq-role-controller-0]", > "Debug: Pacemaker::Resource::Bundle[rabbitmq-bundle]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Pacemaker::Resource::Bundle[rabbitmq-bundle]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1f6815i returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1f6815i constraint list | grep location-rabbitmq-bundle > /dev/null 2>&1", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-hqkmp7 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-hqkmp7 resource show rabbitmq-bundle > /dev/null 2>&1", > "Debug: Exists: bundle rabbitmq-bundle exists 1 location exists 1 deep_compare: true", > "Debug: Create: resource exists 1 location exists 1", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1kpxkt4 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1kpxkt4 resource bundle create rabbitmq-bundle container docker image=192.168.24.1:8787/rhosp14/openstack-rabbitmq:pcmklatest replicas=1 options=\"--user=root --log-driver=journald -e KOLLA_CONFIG_STRATEGY=COPY_ALWAYS\" run-command=\"/bin/bash /usr/local/bin/kolla_start\" network=host storage-map id=rabbitmq-cfg-files source-dir=/var/lib/kolla/config_files/rabbitmq.json target-dir=/var/lib/kolla/config_files/config.json options=ro storage-map id=rabbitmq-cfg-data source-dir=/var/lib/config-data/puppet-generated/rabbitmq/ target-dir=/var/lib/kolla/config_files/src options=ro storage-map id=rabbitmq-hosts source-dir=/etc/hosts target-dir=/etc/hosts options=ro storage-map id=rabbitmq-localtime 
source-dir=/etc/localtime target-dir=/etc/localtime options=ro storage-map id=rabbitmq-lib source-dir=/var/lib/rabbitmq target-dir=/var/lib/rabbitmq options=rw storage-map id=rabbitmq-pki-extracted source-dir=/etc/pki/ca-trust/extracted target-dir=/etc/pki/ca-trust/extracted options=ro storage-map id=rabbitmq-pki-ca-bundle-crt source-dir=/etc/pki/tls/certs/ca-bundle.crt target-dir=/etc/pki/tls/certs/ca-bundle.crt options=ro storage-map id=rabbitmq-pki-ca-bundle-trust-crt source-dir=/etc/pki/tls/certs/ca-bundle.trust.crt target-dir=/etc/pki/tls/certs/ca-bundle.trust.crt options=ro storage-map id=rabbitmq-pki-cert source-dir=/etc/pki/tls/cert.pem target-dir=/etc/pki/tls/cert.pem options=ro storage-map id=rabbitmq-log source-dir=/var/log/containers/rabbitmq target-dir=/var/log/rabbitmq options=rw storage-map id=rabbitmq-dev-log source-dir=/dev/log target-dir=/dev/log options=rw network control-port=3122 --disabled", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1kpxkt4 diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1kpxkt4.orig returned 0 -> CIB updated", > "Debug: build_pcs_location_rule_cmd: constraint location rabbitmq-bundle rule resource-discovery=exclusive score=0 rabbitmq-role eq true", > "Debug: location_rule_create: constraint location rabbitmq-bundle rule resource-discovery=exclusive score=0 rabbitmq-role eq true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1h8ea7 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1h8ea7 constraint location rabbitmq-bundle rule resource-discovery=exclusive score=0 rabbitmq-role eq true", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1h8ea7 diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1h8ea7.orig returned 0 -> CIB updated", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-bcnh75 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-bcnh75 resource enable rabbitmq-bundle", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-bcnh75 diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180921-8-bcnh75.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Pacemaker::Resource::Bundle[rabbitmq-bundle]/Pcmk_bundle[rabbitmq-bundle]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Pacemaker::Resource::Bundle[rabbitmq-bundle]/Pcmk_bundle[rabbitmq-bundle]: The container Pacemaker::Resource::Bundle[rabbitmq-bundle] will propagate my refresh event", > "Info: Pacemaker::Resource::Bundle[rabbitmq-bundle]: Unscheduling all events on Pacemaker::Resource::Bundle[rabbitmq-bundle]", > "Debug: Pacemaker::Resource::Ocf[rabbitmq]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Pacemaker::Resource::Ocf[rabbitmq]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1v6zvgl returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1v6zvgl constraint list | grep location-rabbitmq-bundle > 
/dev/null 2>&1", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-nc9tms returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-nc9tms resource show rabbitmq > /dev/null 2>&1", > "Debug: Exists: resource rabbitmq exists 1 location exists 0 resource deep_compare: true", > "Debug: Create: resource exists 1 location exists 0", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-11o29r8 returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-11o29r8 resource create rabbitmq ocf:heartbeat:rabbitmq-cluster set_policy='ha-all ^(?!amq\\.).* {\"ha-mode\":\"exactly\",\"ha-params\":1}' meta notify=true container-attribute-target=host op start timeout=200s stop timeout=200s bundle rabbitmq-bundle", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-11o29r8 diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180921-8-11o29r8.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Pacemaker::Resource::Ocf[rabbitmq]/Pcmk_resource[rabbitmq]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Pacemaker::Resource::Ocf[rabbitmq]/Pcmk_resource[rabbitmq]: The container Pacemaker::Resource::Ocf[rabbitmq] will propagate my refresh event", > "Info: Pacemaker::Resource::Ocf[rabbitmq]: Unscheduling all events on Pacemaker::Resource::Ocf[rabbitmq]", > "Debug: Exec[rabbitmq-ready](provider=posix): Executing check 'rabbitmqctl status | grep -F \"{rabbit,\"'", > "Debug: Executing: 'rabbitmqctl status | grep -F \"{rabbit,\"'", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/unless: Error: Failed to initialize erlang distribution: {{shutdown,", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/unless: {failed_to_start_child,", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/unless: net_kernel,", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/unless: {'EXIT',nodistribution}}},", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/unless: {child,undefined,", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/unless: net_sup_dynamic,", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/unless: {erl_distribution,", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/unless: start_link,", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/unless: [['rabbitmq-cli-48',", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/unless: shortnames]]},", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/unless: permanent,1000,supervisor,", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/unless: [erl_distribution]}}.", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/returns: Exec try 1/180", > "Debug: Exec[rabbitmq-ready](provider=posix): Executing 'rabbitmqctl status | grep -F \"{rabbit,\"'", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/returns: Sleeping for 10 seconds between tries", > 
"Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/returns: Exec try 2/180", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/returns: Exec try 3/180", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/returns: executed successfully", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]: The container Class[Tripleo::Profile::Pacemaker::Rabbitmq_bundle] will propagate my refresh event", > "Info: Class[Tripleo::Profile::Pacemaker::Rabbitmq_bundle]: Unscheduling all events on Class[Tripleo::Profile::Pacemaker::Rabbitmq_bundle]", > "Debug: /Schedule[puppet]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Schedule[puppet]: Resource is being skipped, unscheduling all events", > "Debug: /Schedule[hourly]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Schedule[hourly]: Resource is being skipped, unscheduling all events", > "Debug: /Schedule[daily]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Schedule[daily]: Resource is being skipped, unscheduling all events", > "Debug: /Schedule[weekly]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Schedule[weekly]: Resource is being skipped, unscheduling all events", > "Debug: /Schedule[monthly]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Schedule[monthly]: Resource is being skipped, unscheduling all events", > "Debug: /Schedule[never]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Schedule[never]: Resource is being skipped, unscheduling all events", > "Debug: /Filebucket[puppet]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Filebucket[puppet]: Resource is being skipped, unscheduling all events", > "Debug: Finishing transaction 27614660", > "Debug: Storing state", > "Info: Creating state file /var/lib/puppet/state/state.yaml", > "Debug: Stored state in 0.00 seconds", > "Notice: Applied catalog in 65.86 seconds", > "Changes:", > " Total: 21", > "Events:", > " Success: 21", > "Resources:", > " Changed: 18", > " Out of sync: 18", > " Skipped: 25", > " Total: 45", > "Time:", > " File line: 0.00", > " File: 
0.05", > " Config retrieval: 1.87", > " Pcmk resource: 10.89", > " Last run: 1537533147", > " Pcmk bundle: 19.69", > " Exec: 25.60", > " Total: 67.51", > " Pcmk property: 9.41", > "Version:", > " Config: 1537533079", > " Puppet: 4.8.2", > "Debug: Applying settings catalog for sections main, reporting, metrics", > "Debug: Using settings: adding file resource 'confdir': 'File[/etc/puppet]{:path=>\"/etc/puppet\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'vardir': 'File[/var/lib/puppet]{:path=>\"/var/lib/puppet\", :owner=>\"puppet\", :group=>\"puppet\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'logdir': 'File[/var/log/puppet]{:path=>\"/var/log/puppet\", :mode=>\"750\", :owner=>\"puppet\", :group=>\"puppet\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'statedir': 'File[/var/lib/puppet/state]{:path=>\"/var/lib/puppet/state\", :mode=>\"1755\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'rundir': 'File[/var/run/puppet]{:path=>\"/var/run/puppet\", :mode=>\"755\", :owner=>\"puppet\", :group=>\"puppet\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'libdir': 'File[/var/lib/puppet/lib]{:path=>\"/var/lib/puppet/lib\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'hiera_config': 'File[/etc/puppet/hiera.yaml]{:path=>\"/etc/puppet/hiera.yaml\", :ensure=>:file, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'preview_outputdir': 'File[/var/lib/puppet/preview]{:path=>\"/var/lib/puppet/preview\", :mode=>\"750\", :owner=>\"puppet\", :group=>\"puppet\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'certdir': 'File[/etc/puppet/ssl/certs]{:path=>\"/etc/puppet/ssl/certs\", :mode=>\"755\", :owner=>\"puppet\", :group=>\"puppet\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'ssldir': 'File[/etc/puppet/ssl]{:path=>\"/etc/puppet/ssl\", :mode=>\"771\", :owner=>\"puppet\", :group=>\"puppet\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'publickeydir': 'File[/etc/puppet/ssl/public_keys]{:path=>\"/etc/puppet/ssl/public_keys\", :mode=>\"755\", :owner=>\"puppet\", :group=>\"puppet\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'requestdir': 'File[/etc/puppet/ssl/certificate_requests]{:path=>\"/etc/puppet/ssl/certificate_requests\", :mode=>\"755\", :owner=>\"puppet\", :group=>\"puppet\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'privatekeydir': 'File[/etc/puppet/ssl/private_keys]{:path=>\"/etc/puppet/ssl/private_keys\", :mode=>\"750\", :owner=>\"puppet\", :group=>\"puppet\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'privatedir': 'File[/etc/puppet/ssl/private]{:path=>\"/etc/puppet/ssl/private\", :mode=>\"750\", :owner=>\"puppet\", 
:group=>\"puppet\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'pluginfactdest': 'File[/var/lib/puppet/facts.d]{:path=>\"/var/lib/puppet/facts.d\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: /File[/var/lib/puppet/state]: Adding autorequire relationship with File[/var/lib/puppet]", > "Debug: /File[/var/lib/puppet/lib]: Adding autorequire relationship with File[/var/lib/puppet]", > "Debug: /File[/etc/puppet/hiera.yaml]: Adding autorequire relationship with File[/etc/puppet]", > "Debug: /File[/var/lib/puppet/preview]: Adding autorequire relationship with File[/var/lib/puppet]", > "Debug: /File[/etc/puppet/ssl/certs]: Adding autorequire relationship with File[/etc/puppet/ssl]", > "Debug: /File[/etc/puppet/ssl]: Adding autorequire relationship with File[/etc/puppet]", > "Debug: /File[/etc/puppet/ssl/public_keys]: Adding autorequire relationship with File[/etc/puppet/ssl]", > "Debug: /File[/etc/puppet/ssl/certificate_requests]: Adding autorequire relationship with File[/etc/puppet/ssl]", > "Debug: /File[/etc/puppet/ssl/private_keys]: Adding autorequire relationship with File[/etc/puppet/ssl]", > "Debug: /File[/etc/puppet/ssl/private]: Adding autorequire relationship with File[/etc/puppet/ssl]", > "Debug: /File[/var/lib/puppet/facts.d]: Adding autorequire relationship with File[/var/lib/puppet]", > "Debug: Finishing transaction 47624760", > "Debug: Received report to process from controller-0.localdomain", > "Debug: Processing report from controller-0.localdomain with processor Puppet::Reports::Store", > "stderr: + STEP=2", > "+ TAGS=file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,rabbitmq_policy,rabbitmq_user,rabbitmq_ready", > "+ CONFIG='include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::rabbitmq_bundle'", > "+ EXTRA_ARGS=--debug", > "+ '[' -d /tmp/puppet-etc ']'", > "+ cp -a /tmp/puppet-etc/auth.conf /tmp/puppet-etc/hiera.yaml /tmp/puppet-etc/hieradata /tmp/puppet-etc/modules /tmp/puppet-etc/puppet.conf /tmp/puppet-etc/ssl /etc/puppet", > "+ echo '{\"step\": 2}'", > "+ export FACTER_uuid=docker", > "+ FACTER_uuid=docker", > "+ set +e", > "+ puppet apply --debug --verbose --detailed-exitcodes --summarize --color=false --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,rabbitmq_policy,rabbitmq_user,rabbitmq_ready -e 'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::rabbitmq_bundle'", > "Warning: Facter: Could not retrieve fact='rabbitmq_nodename', resolution='<anonymous>': undefined method `[]' for nil:NilClass", > "Warning: Facter: Could not retrieve fact='nic_alias', resolution='<anonymous>': Could not execute '/usr/bin/os-net-config -i': command not found", > "Warning: Undefined variable 'deploy_config_name'; ", > " (file & line not available)", > "Warning: ModuleLoader: module 'rabbitmq' has unresolved dependencies - it will only see those that are resolved. 
Use 'puppet module list --tree' to see information about modules", > "+ rc=2", > "+ set -e", > "+ set +ux", > "Debug: Facter: value for erl_ssl_path is still nil", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/pacemaker/database/mysql_bundle.pp' in environment production", > "Debug: Automatically imported tripleo::profile::pacemaker::database::mysql_bundle from tripleo/profile/pacemaker/database/mysql_bundle into production", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::mysql_bundle::mysql_docker_image in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::mysql_bundle::control_port in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::mysql_bundle::bootstrap_node in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::mysql_bundle::bind_address in JSON backend", > "Debug: hiera(): Looking up fqdn_internal_api in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::mysql_bundle::ca_file in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::mysql_bundle::cipher_list in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::mysql_bundle::gcomm_cipher in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::mysql_bundle::certificate_specs in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::mysql_bundle::enable_internal_tls in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::mysql_bundle::gmcast_listen_addr in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::mysql_bundle::innodb_flush_log_at_trx_commit in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::mysql_bundle::sst_tls_cipher in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::mysql_bundle::sst_tls_options in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::mysql_bundle::ipv6 in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::mysql_bundle::pcs_tries in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::mysql_bundle::step in JSON backend", > "Debug: hiera(): Looking up mysql_short_bootstrap_node_name in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::database::mysql::certificate_specs in JSON backend", > "Debug: hiera(): Looking up mysql_bind_host in JSON backend", > "Debug: hiera(): Looking up innodb_flush_log_at_trx_commit in JSON backend", > "Debug: hiera(): Looking up mysql_ipv6 in JSON backend", > "Debug: hiera(): Looking up mysql_short_node_names in JSON backend", > "Debug: hiera(): Looking up mysql_node_names in JSON backend", > "Debug: hiera(): Looking up mysql_max_connections in JSON backend", > "Debug: hiera(): Looking up mysql::server::root_password in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/base/database/mysql.pp' in environment production", > "Debug: Automatically imported tripleo::profile::base::database::mysql from tripleo/profile/base/database/mysql into production", > "Debug: hiera(): Looking up tripleo::profile::base::database::mysql::bind_address in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::database::mysql::enable_internal_tls in JSON backend", > "Debug: hiera(): Looking up 
tripleo::profile::base::database::mysql::generate_dropin_file_limit in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::database::mysql::innodb_buffer_pool_size in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::database::mysql::mysql_max_connections in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::database::mysql::step in JSON backend", > "Debug: hiera(): Looking up innodb_buffer_pool_size in JSON backend", > "Debug: hiera(): Looking up enable_galera in JSON backend", > "Debug: importing '/etc/puppet/modules/mysql/manifests/server.pp' in environment production", > "Debug: Automatically imported mysql::server from mysql/server into production", > "Debug: importing '/etc/puppet/modules/mysql/manifests/params.pp' in environment production", > "Debug: Automatically imported mysql::params from mysql/params into production", > "Debug: hiera(): Looking up mysql::server::includedir in JSON backend", > "Debug: hiera(): Looking up mysql::server::install_options in JSON backend", > "Debug: hiera(): Looking up mysql::server::install_secret_file in JSON backend", > "Debug: hiera(): Looking up mysql::server::manage_config_file in JSON backend", > "Debug: hiera(): Looking up mysql::server::package_ensure in JSON backend", > "Debug: hiera(): Looking up mysql::server::package_manage in JSON backend", > "Debug: hiera(): Looking up mysql::server::package_name in JSON backend", > "Debug: hiera(): Looking up mysql::server::purge_conf_dir in JSON backend", > "Debug: hiera(): Looking up mysql::server::restart in JSON backend", > "Debug: hiera(): Looking up mysql::server::root_group in JSON backend", > "Debug: hiera(): Looking up mysql::server::mysql_group in JSON backend", > "Debug: hiera(): Looking up mysql::server::service_name in JSON backend", > "Debug: hiera(): Looking up mysql::server::service_provider in JSON backend", > "Debug: hiera(): Looking up mysql::server::users in JSON backend", > "Debug: hiera(): Looking up mysql::server::grants in JSON backend", > "Debug: hiera(): Looking up mysql::server::databases in JSON backend", > "Debug: hiera(): Looking up mysql::server::enabled in JSON backend", > "Debug: hiera(): Looking up mysql::server::manage_service in JSON backend", > "Debug: hiera(): Looking up mysql::server::old_root_password in JSON backend", > "Debug: importing '/etc/puppet/modules/mysql/manifests/db.pp' in environment production", > "Debug: Automatically imported mysql::db from mysql/db into production", > "Debug: importing '/etc/puppet/modules/mysql/manifests/server/config.pp' in environment production", > "Debug: Automatically imported mysql::server::config from mysql/server/config into production", > "Debug: Scope(Class[Mysql::Server::Config]): Retrieving template mysql/my.cnf.erb", > "Debug: template[/etc/puppet/modules/mysql/templates/my.cnf.erb]: Bound template variables for /etc/puppet/modules/mysql/templates/my.cnf.erb in 0.00 seconds", > "Debug: template[/etc/puppet/modules/mysql/templates/my.cnf.erb]: Interpolated template /etc/puppet/modules/mysql/templates/my.cnf.erb in 0.00 seconds", > "Debug: importing '/etc/puppet/modules/mysql/manifests/server/install.pp' in environment production", > "Debug: Automatically imported mysql::server::install from mysql/server/install into production", > "Debug: importing '/etc/puppet/modules/mysql/manifests/server/binarylog.pp' in environment production", > "Debug: Automatically imported mysql::server::binarylog from mysql/server/binarylog into production", > "Debug: importing 
'/etc/puppet/modules/mysql/manifests/server/installdb.pp' in environment production", > "Debug: Automatically imported mysql::server::installdb from mysql/server/installdb into production", > "Debug: importing '/etc/puppet/modules/mysql/manifests/server/service.pp' in environment production", > "Debug: Automatically imported mysql::server::service from mysql/server/service into production", > "Debug: importing '/etc/puppet/modules/mysql/manifests/server/root_password.pp' in environment production", > "Debug: Automatically imported mysql::server::root_password from mysql/server/root_password into production", > "Debug: importing '/etc/puppet/modules/mysql/manifests/server/providers.pp' in environment production", > "Debug: Automatically imported mysql::server::providers from mysql/server/providers into production", > "Debug: importing '/etc/puppet/modules/mysql/manifests/server/account_security.pp' in environment production", > "Debug: Automatically imported mysql::server::account_security from mysql/server/account_security into production", > "Debug: hiera(): Looking up aodh_api_enabled in JSON backend", > "Debug: importing '/etc/puppet/modules/aodh/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/aodh/manifests/db/mysql.pp' in environment production", > "Debug: Automatically imported aodh::db::mysql from aodh/db/mysql into production", > "Debug: hiera(): Looking up aodh::db::mysql::password in JSON backend", > "Debug: hiera(): Looking up aodh::db::mysql::dbname in JSON backend", > "Debug: hiera(): Looking up aodh::db::mysql::user in JSON backend", > "Debug: hiera(): Looking up aodh::db::mysql::host in JSON backend", > "Debug: hiera(): Looking up aodh::db::mysql::charset in JSON backend", > "Debug: hiera(): Looking up aodh::db::mysql::collate in JSON backend", > "Debug: hiera(): Looking up aodh::db::mysql::allowed_hosts in JSON backend", > "Debug: importing '/etc/puppet/modules/aodh/manifests/deps.pp' in environment production", > "Debug: Automatically imported aodh::deps from aodh/deps into production", > "Debug: importing '/etc/puppet/modules/oslo/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/oslo/manifests/db.pp' in environment production", > "Debug: Automatically imported oslo::db from oslo/db into production", > "Debug: importing '/etc/puppet/modules/openstacklib/manifests/policy/base.pp' in environment production", > "Debug: Automatically imported openstacklib::policy::base from openstacklib/policy/base into production", > "Debug: importing '/etc/puppet/modules/openstacklib/manifests/db/mysql.pp' in environment production", > "Debug: Automatically imported openstacklib::db::mysql from openstacklib/db/mysql into production", > "Debug: hiera(): Looking up ceilometer_collector_enabled in JSON backend", > "Debug: hiera(): Looking up cinder_api_enabled in JSON backend", > "Debug: importing '/etc/puppet/modules/cinder/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/cinder/manifests/db/mysql.pp' in environment production", > "Debug: Automatically imported cinder::db::mysql from cinder/db/mysql into production", > "Debug: hiera(): Looking up cinder::db::mysql::password in JSON backend", > "Debug: hiera(): Looking up cinder::db::mysql::dbname in JSON backend", > "Debug: hiera(): Looking up cinder::db::mysql::user in JSON backend", > "Debug: hiera(): Looking up cinder::db::mysql::host in JSON backend", > "Debug: hiera(): Looking up cinder::db::mysql::allowed_hosts in JSON 
backend", > "Debug: hiera(): Looking up cinder::db::mysql::charset in JSON backend", > "Debug: hiera(): Looking up cinder::db::mysql::collate in JSON backend", > "Debug: importing '/etc/puppet/modules/cinder/manifests/deps.pp' in environment production", > "Debug: Automatically imported cinder::deps from cinder/deps into production", > "Debug: hiera(): Looking up barbican_api_enabled in JSON backend", > "Debug: hiera(): Looking up congress_enabled in JSON backend", > "Debug: hiera(): Looking up designate_api_enabled in JSON backend", > "Debug: hiera(): Looking up glance_api_enabled in JSON backend", > "Debug: importing '/etc/puppet/modules/glance/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/glance/manifests/db/mysql.pp' in environment production", > "Debug: Automatically imported glance::db::mysql from glance/db/mysql into production", > "Debug: hiera(): Looking up glance::db::mysql::password in JSON backend", > "Debug: hiera(): Looking up glance::db::mysql::dbname in JSON backend", > "Debug: hiera(): Looking up glance::db::mysql::user in JSON backend", > "Debug: hiera(): Looking up glance::db::mysql::host in JSON backend", > "Debug: hiera(): Looking up glance::db::mysql::allowed_hosts in JSON backend", > "Debug: hiera(): Looking up glance::db::mysql::charset in JSON backend", > "Debug: hiera(): Looking up glance::db::mysql::collate in JSON backend", > "Debug: importing '/etc/puppet/modules/glance/manifests/deps.pp' in environment production", > "Debug: Automatically imported glance::deps from glance/deps into production", > "Debug: hiera(): Looking up gnocchi_api_enabled in JSON backend", > "Debug: importing '/etc/puppet/modules/gnocchi/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/gnocchi/manifests/db/mysql.pp' in environment production", > "Debug: Automatically imported gnocchi::db::mysql from gnocchi/db/mysql into production", > "Debug: hiera(): Looking up gnocchi::db::mysql::password in JSON backend", > "Debug: hiera(): Looking up gnocchi::db::mysql::dbname in JSON backend", > "Debug: hiera(): Looking up gnocchi::db::mysql::user in JSON backend", > "Debug: hiera(): Looking up gnocchi::db::mysql::host in JSON backend", > "Debug: hiera(): Looking up gnocchi::db::mysql::charset in JSON backend", > "Debug: hiera(): Looking up gnocchi::db::mysql::collate in JSON backend", > "Debug: hiera(): Looking up gnocchi::db::mysql::allowed_hosts in JSON backend", > "Debug: importing '/etc/puppet/modules/gnocchi/manifests/deps.pp' in environment production", > "Debug: Automatically imported gnocchi::deps from gnocchi/deps into production", > "Debug: hiera(): Looking up heat_engine_enabled in JSON backend", > "Debug: importing '/etc/puppet/modules/heat/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/heat/manifests/db/mysql.pp' in environment production", > "Debug: Automatically imported heat::db::mysql from heat/db/mysql into production", > "Debug: hiera(): Looking up heat::db::mysql::password in JSON backend", > "Debug: hiera(): Looking up heat::db::mysql::dbname in JSON backend", > "Debug: hiera(): Looking up heat::db::mysql::user in JSON backend", > "Debug: hiera(): Looking up heat::db::mysql::host in JSON backend", > "Debug: hiera(): Looking up heat::db::mysql::allowed_hosts in JSON backend", > "Debug: hiera(): Looking up heat::db::mysql::charset in JSON backend", > "Debug: hiera(): Looking up heat::db::mysql::collate in JSON backend", > "Debug: importing 
'/etc/puppet/modules/heat/manifests/deps.pp' in environment production", > "Debug: Automatically imported heat::deps from heat/deps into production", > "Debug: importing '/etc/puppet/modules/oslo/manifests/cache.pp' in environment production", > "Debug: Automatically imported oslo::cache from oslo/cache into production", > "Debug: hiera(): Looking up ironic_api_enabled in JSON backend", > "Debug: hiera(): Looking up ironic_inspector_enabled in JSON backend", > "Debug: hiera(): Looking up keystone_enabled in JSON backend", > "Debug: importing '/etc/puppet/modules/keystone/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/keystone/manifests/db/mysql.pp' in environment production", > "Debug: Automatically imported keystone::db::mysql from keystone/db/mysql into production", > "Debug: hiera(): Looking up keystone::db::mysql::password in JSON backend", > "Debug: hiera(): Looking up keystone::db::mysql::dbname in JSON backend", > "Debug: hiera(): Looking up keystone::db::mysql::user in JSON backend", > "Debug: hiera(): Looking up keystone::db::mysql::host in JSON backend", > "Debug: hiera(): Looking up keystone::db::mysql::charset in JSON backend", > "Debug: hiera(): Looking up keystone::db::mysql::collate in JSON backend", > "Debug: hiera(): Looking up keystone::db::mysql::allowed_hosts in JSON backend", > "Debug: importing '/etc/puppet/modules/keystone/manifests/deps.pp' in environment production", > "Debug: Automatically imported keystone::deps from keystone/deps into production", > "Debug: hiera(): Looking up manila_api_enabled in JSON backend", > "Debug: hiera(): Looking up mistral_api_enabled in JSON backend", > "Debug: hiera(): Looking up neutron_api_enabled in JSON backend", > "Debug: importing '/etc/puppet/modules/neutron/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/neutron/manifests/db/mysql.pp' in environment production", > "Debug: Automatically imported neutron::db::mysql from neutron/db/mysql into production", > "Debug: hiera(): Looking up neutron::db::mysql::password in JSON backend", > "Debug: hiera(): Looking up neutron::db::mysql::dbname in JSON backend", > "Debug: hiera(): Looking up neutron::db::mysql::user in JSON backend", > "Debug: hiera(): Looking up neutron::db::mysql::host in JSON backend", > "Debug: hiera(): Looking up neutron::db::mysql::allowed_hosts in JSON backend", > "Debug: hiera(): Looking up neutron::db::mysql::charset in JSON backend", > "Debug: hiera(): Looking up neutron::db::mysql::collate in JSON backend", > "Debug: importing '/etc/puppet/modules/neutron/manifests/deps.pp' in environment production", > "Debug: Automatically imported neutron::deps from neutron/deps into production", > "Debug: hiera(): Looking up nova_api_enabled in JSON backend", > "Debug: importing '/etc/puppet/modules/nova/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/nova/manifests/db/mysql.pp' in environment production", > "Debug: Automatically imported nova::db::mysql from nova/db/mysql into production", > "Debug: hiera(): Looking up nova::db::mysql::password in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql::dbname in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql::user in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql::host in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql::charset in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql::collate in JSON backend", > "Debug: hiera(): Looking up 
nova::db::mysql::allowed_hosts in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql::setup_cell0 in JSON backend", > "Debug: importing '/etc/puppet/modules/nova/manifests/deps.pp' in environment production", > "Debug: Automatically imported nova::deps from nova/deps into production", > "Debug: importing '/etc/puppet/modules/nova/manifests/db/mysql_api.pp' in environment production", > "Debug: Automatically imported nova::db::mysql_api from nova/db/mysql_api into production", > "Debug: hiera(): Looking up nova::db::mysql_api::password in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql_api::dbname in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql_api::user in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql_api::host in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql_api::charset in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql_api::collate in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql_api::allowed_hosts in JSON backend", > "Debug: hiera(): Looking up nova_placement_enabled in JSON backend", > "Debug: importing '/etc/puppet/modules/nova/manifests/db/mysql_placement.pp' in environment production", > "Debug: Automatically imported nova::db::mysql_placement from nova/db/mysql_placement into production", > "Debug: hiera(): Looking up nova::db::mysql_placement::password in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql_placement::dbname in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql_placement::user in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql_placement::host in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql_placement::charset in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql_placement::collate in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql_placement::allowed_hosts in JSON backend", > "Debug: hiera(): Looking up octavia_api_enabled in JSON backend", > "Debug: hiera(): Looking up sahara_api_enabled in JSON backend", > "Debug: importing '/etc/puppet/modules/sahara/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/sahara/manifests/db/mysql.pp' in environment production", > "Debug: Automatically imported sahara::db::mysql from sahara/db/mysql into production", > "Debug: hiera(): Looking up sahara::db::mysql::password in JSON backend", > "Debug: hiera(): Looking up sahara::db::mysql::dbname in JSON backend", > "Debug: hiera(): Looking up sahara::db::mysql::user in JSON backend", > "Debug: hiera(): Looking up sahara::db::mysql::host in JSON backend", > "Debug: hiera(): Looking up sahara::db::mysql::allowed_hosts in JSON backend", > "Debug: hiera(): Looking up sahara::db::mysql::charset in JSON backend", > "Debug: hiera(): Looking up sahara::db::mysql::collate in JSON backend", > "Debug: importing '/etc/puppet/modules/sahara/manifests/deps.pp' in environment production", > "Debug: Automatically imported sahara::deps from sahara/deps into production", > "Debug: hiera(): Looking up tacker_enabled in JSON backend", > "Debug: hiera(): Looking up trove_api_enabled in JSON backend", > "Debug: hiera(): Looking up panko_api_enabled in JSON backend", > "Debug: importing '/etc/puppet/modules/panko/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/panko/manifests/db/mysql.pp' in environment production", > "Debug: Automatically imported panko::db::mysql from panko/db/mysql into production", > "Debug: hiera(): Looking up panko::db::mysql::password in JSON 
backend", > "Debug: hiera(): Looking up panko::db::mysql::dbname in JSON backend", > "Debug: hiera(): Looking up panko::db::mysql::user in JSON backend", > "Debug: hiera(): Looking up panko::db::mysql::host in JSON backend", > "Debug: hiera(): Looking up panko::db::mysql::charset in JSON backend", > "Debug: hiera(): Looking up panko::db::mysql::collate in JSON backend", > "Debug: hiera(): Looking up panko::db::mysql::allowed_hosts in JSON backend", > "Debug: importing '/etc/puppet/modules/panko/manifests/deps.pp' in environment production", > "Debug: Automatically imported panko::deps from panko/deps into production", > "Debug: hiera(): Looking up ec2_api_enabled in JSON backend", > "Debug: hiera(): Looking up zaqar_api_enabled in JSON backend", > "Debug: importing '/etc/puppet/modules/mysql/manifests/client.pp' in environment production", > "Debug: Automatically imported mysql::client from mysql/client into production", > "Debug: hiera(): Looking up mysql::client::bindings_enable in JSON backend", > "Debug: hiera(): Looking up mysql::client::install_options in JSON backend", > "Debug: hiera(): Looking up mysql::client::package_ensure in JSON backend", > "Debug: hiera(): Looking up mysql::client::package_manage in JSON backend", > "Debug: hiera(): Looking up mysql::client::package_name in JSON backend", > "Debug: importing '/etc/puppet/modules/mysql/manifests/client/install.pp' in environment production", > "Debug: Automatically imported mysql::client::install from mysql/client/install into production", > "Debug: importing '/etc/puppet/modules/openstacklib/manifests/db/mysql/host_access.pp' in environment production", > "Debug: Automatically imported openstacklib::db::mysql::host_access from openstacklib/db/mysql/host_access into production", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_resource[galera] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_property[property-controller-0-galera-role] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_bundle[galera-bundle] with 'before'", > "Debug: Adding relationship from Anchor[mysql::server::start] to Class[Mysql::Server::Install] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server::Install] to Class[Mysql::Server::Config] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server::Config] to Class[Mysql::Server::Binarylog] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server::Binarylog] to Class[Mysql::Server::Installdb] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server::Installdb] to Class[Mysql::Server::Service] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server::Service] to Class[Mysql::Server::Root_password] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server::Root_password] to Class[Mysql::Server::Providers] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server::Providers] to Anchor[mysql::server::end] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[test] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[aodh] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[cinder] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[glance] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[gnocchi] with 'before'", > "Debug: Adding relationship from 
Class[Mysql::Server] to Mysql_database[heat] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[keystone] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[ovs_neutron] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[nova] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[nova_cell0] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[nova_api] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[nova_placement] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[sahara] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[panko] with 'before'", > "Debug: Adding relationship from Anchor[aodh::install::end] to Anchor[aodh::config::begin] with 'before'", > "Debug: Adding relationship from Anchor[aodh::config::end] to Anchor[aodh::db::begin] with 'before'", > "Debug: Adding relationship from Anchor[aodh::db::begin] to Anchor[aodh::db::end] with 'before'", > "Debug: Adding relationship from Anchor[aodh::db::end] to Anchor[aodh::dbsync::begin] with 'notify'", > "Debug: Adding relationship from Anchor[aodh::dbsync::begin] to Anchor[aodh::dbsync::end] with 'before'", > "Debug: Adding relationship from Anchor[aodh::dbsync::end] to Anchor[aodh::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[aodh::install::end] to Anchor[aodh::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[aodh::config::end] to Anchor[aodh::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[aodh::db::begin] to Class[Aodh::Db::Mysql] with 'notify'", > "Debug: Adding relationship from Class[Aodh::Db::Mysql] to Anchor[aodh::db::end] with 'notify'", > "Debug: Adding relationship from Anchor[cinder::install::end] to Anchor[cinder::config::begin] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::end] to Anchor[cinder::db::begin] with 'before'", > "Debug: Adding relationship from Anchor[cinder::db::begin] to Anchor[cinder::db::end] with 'before'", > "Debug: Adding relationship from Anchor[cinder::db::end] to Anchor[cinder::dbsync::begin] with 'notify'", > "Debug: Adding relationship from Anchor[cinder::dbsync::begin] to Anchor[cinder::dbsync::end] with 'before'", > "Debug: Adding relationship from Anchor[cinder::dbsync::end] to Anchor[cinder::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[cinder::install::end] to Anchor[cinder::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[cinder::config::end] to Anchor[cinder::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[cinder::db::begin] to Class[Cinder::Db::Mysql] with 'notify'", > "Debug: Adding relationship from Class[Cinder::Db::Mysql] to Anchor[cinder::db::end] with 'notify'", > "Debug: Adding relationship from Anchor[glance::install::end] to Anchor[glance::config::begin] with 'before'", > "Debug: Adding relationship from Anchor[glance::config::end] to Anchor[glance::db::begin] with 'before'", > "Debug: Adding relationship from Anchor[glance::db::begin] to Anchor[glance::db::end] with 'before'", > "Debug: Adding relationship from Anchor[glance::db::end] to Anchor[glance::dbsync::begin] with 'notify'", > "Debug: Adding relationship from Anchor[glance::dbsync::begin] to Anchor[glance::dbsync::end] with 'before'", 
> "Debug: Adding relationship from Anchor[glance::dbsync::end] to Anchor[glance::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[glance::install::end] to Anchor[glance::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[glance::config::end] to Anchor[glance::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[glance::db::begin] to Class[Glance::Db::Mysql] with 'notify'", > "Debug: Adding relationship from Class[Glance::Db::Mysql] to Anchor[glance::db::end] with 'notify'", > "Debug: Adding relationship from Anchor[gnocchi::install::end] to Anchor[gnocchi::config::begin] with 'before'", > "Debug: Adding relationship from Anchor[gnocchi::config::end] to Anchor[gnocchi::db::begin] with 'before'", > "Debug: Adding relationship from Anchor[gnocchi::db::begin] to Anchor[gnocchi::db::end] with 'before'", > "Debug: Adding relationship from Anchor[gnocchi::db::end] to Anchor[gnocchi::dbsync::begin] with 'notify'", > "Debug: Adding relationship from Anchor[gnocchi::dbsync::begin] to Anchor[gnocchi::dbsync::end] with 'before'", > "Debug: Adding relationship from Anchor[gnocchi::dbsync::end] to Anchor[gnocchi::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[gnocchi::install::end] to Anchor[gnocchi::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[gnocchi::config::end] to Anchor[gnocchi::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[gnocchi::db::begin] to Class[Gnocchi::Db::Mysql] with 'notify'", > "Debug: Adding relationship from Class[Gnocchi::Db::Mysql] to Anchor[gnocchi::db::end] with 'notify'", > "Debug: Adding relationship from Anchor[heat::install::end] to Anchor[heat::config::begin] with 'before'", > "Debug: Adding relationship from Anchor[heat::config::end] to Anchor[heat::db::begin] with 'before'", > "Debug: Adding relationship from Anchor[heat::db::begin] to Anchor[heat::db::end] with 'before'", > "Debug: Adding relationship from Anchor[heat::db::end] to Anchor[heat::dbsync::begin] with 'notify'", > "Debug: Adding relationship from Anchor[heat::dbsync::begin] to Anchor[heat::dbsync::end] with 'before'", > "Debug: Adding relationship from Anchor[heat::dbsync::end] to Anchor[heat::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[heat::install::end] to Anchor[heat::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[heat::config::end] to Anchor[heat::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[heat::db::begin] to Class[Heat::Db::Mysql] with 'notify'", > "Debug: Adding relationship from Class[Heat::Db::Mysql] to Anchor[heat::db::end] with 'notify'", > "Debug: Adding relationship from Anchor[keystone::install::end] to Anchor[keystone::config::begin] with 'before'", > "Debug: Adding relationship from Anchor[keystone::config::end] to Anchor[keystone::db::begin] with 'before'", > "Debug: Adding relationship from Anchor[keystone::db::begin] to Anchor[keystone::db::end] with 'before'", > "Debug: Adding relationship from Anchor[keystone::db::end] to Anchor[keystone::dbsync::begin] with 'notify'", > "Debug: Adding relationship from Anchor[keystone::dbsync::begin] to Anchor[keystone::dbsync::end] with 'before'", > "Debug: Adding relationship from Anchor[keystone::dbsync::end] to Anchor[keystone::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[keystone::install::end] to Anchor[keystone::service::begin] with 'notify'", > "Debug: Adding relationship from 
Anchor[keystone::config::end] to Anchor[keystone::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[keystone::db::begin] to Class[Keystone::Db::Mysql] with 'notify'", > "Debug: Adding relationship from Class[Keystone::Db::Mysql] to Anchor[keystone::db::end] with 'notify'", > "Debug: Adding relationship from Anchor[neutron::install::end] to Anchor[neutron::config::begin] with 'before'", > "Debug: Adding relationship from Anchor[neutron::config::end] to Anchor[neutron::db::begin] with 'before'", > "Debug: Adding relationship from Anchor[neutron::db::begin] to Anchor[neutron::db::end] with 'before'", > "Debug: Adding relationship from Anchor[neutron::db::end] to Anchor[neutron::dbsync::begin] with 'notify'", > "Debug: Adding relationship from Anchor[neutron::dbsync::begin] to Anchor[neutron::dbsync::end] with 'before'", > "Debug: Adding relationship from Anchor[neutron::dbsync::end] to Anchor[neutron::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[neutron::install::end] to Anchor[neutron::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[neutron::config::end] to Anchor[neutron::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[neutron::db::begin] to Class[Neutron::Db::Mysql] with 'notify'", > "Debug: Adding relationship from Class[Neutron::Db::Mysql] to Anchor[neutron::db::end] with 'notify'", > "Debug: Adding relationship from Anchor[nova::install::end] to Anchor[nova::config::begin] with 'before'", > "Debug: Adding relationship from Anchor[nova::config::end] to Anchor[nova::db::begin] with 'before'", > "Debug: Adding relationship from Anchor[nova::db::begin] to Anchor[nova::db::end] with 'before'", > "Debug: Adding relationship from Anchor[nova::db::end] to Anchor[nova::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[nova::install::end] to Anchor[nova::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[nova::config::end] to Anchor[nova::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[nova::dbsync_api::begin] to Anchor[nova::dbsync_api::end] with 'before'", > "Debug: Adding relationship from Anchor[nova::dbsync::begin] to Anchor[nova::dbsync::end] with 'before'", > "Debug: Adding relationship from Anchor[nova::cell_v2::begin] to Anchor[nova::cell_v2::end] with 'notify'", > "Debug: Adding relationship from Anchor[nova::db_online_data_migrations::begin] to Anchor[nova::db_online_data_migrations::end] with 'before'", > "Debug: Adding relationship from Anchor[nova::db::begin] to Class[Nova::Db::Mysql] with 'notify'", > "Debug: Adding relationship from Class[Nova::Db::Mysql] to Anchor[nova::db::end] with 'notify'", > "Debug: Adding relationship from Anchor[nova::db::begin] to Class[Nova::Db::Mysql_api] with 'notify'", > "Debug: Adding relationship from Class[Nova::Db::Mysql_api] to Anchor[nova::db::end] with 'notify'", > "Debug: Adding relationship from Anchor[nova::db::begin] to Class[Nova::Db::Mysql_placement] with 'notify'", > "Debug: Adding relationship from Class[Nova::Db::Mysql_placement] to Anchor[nova::db::end] with 'notify'", > "Debug: Adding relationship from Anchor[sahara::install::end] to Anchor[sahara::config::begin] with 'before'", > "Debug: Adding relationship from Anchor[sahara::config::end] to Anchor[sahara::db::begin] with 'before'", > "Debug: Adding relationship from Anchor[sahara::db::begin] to Anchor[sahara::db::end] with 'before'", > "Debug: Adding relationship from Anchor[sahara::db::end] to 
Anchor[sahara::dbsync::begin] with 'notify'", > "Debug: Adding relationship from Anchor[sahara::dbsync::begin] to Anchor[sahara::dbsync::end] with 'before'", > "Debug: Adding relationship from Anchor[sahara::dbsync::end] to Anchor[sahara::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[sahara::install::end] to Anchor[sahara::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[sahara::config::end] to Anchor[sahara::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[sahara::db::begin] to Class[Sahara::Db::Mysql] with 'notify'", > "Debug: Adding relationship from Class[Sahara::Db::Mysql] to Anchor[sahara::db::end] with 'notify'", > "Debug: Adding relationship from Anchor[panko::install::end] to Anchor[panko::config::begin] with 'before'", > "Debug: Adding relationship from Anchor[panko::config::end] to Anchor[panko::db::begin] with 'before'", > "Debug: Adding relationship from Anchor[panko::db::begin] to Anchor[panko::db::end] with 'before'", > "Debug: Adding relationship from Anchor[panko::db::end] to Anchor[panko::dbsync::begin] with 'notify'", > "Debug: Adding relationship from Anchor[panko::dbsync::begin] to Anchor[panko::dbsync::end] with 'before'", > "Debug: Adding relationship from Anchor[panko::dbsync::end] to Anchor[panko::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[panko::install::end] to Anchor[panko::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[panko::config::end] to Anchor[panko::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[panko::db::begin] to Class[Panko::Db::Mysql] with 'notify'", > "Debug: Adding relationship from Class[Panko::Db::Mysql] to Anchor[panko::db::end] with 'notify'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_database[test] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_database[aodh] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_database[cinder] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_database[glance] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_database[gnocchi] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_database[heat] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_database[keystone] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_database[ovs_neutron] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_database[nova] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_database[nova_cell0] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_database[nova_api] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_database[nova_placement] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_database[sahara] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_database[panko] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[root@127.0.0.1] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[root@::1] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[@localhost] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[@%] with 'before'", > "Debug: Adding 
relationship from File[/root/.my.cnf] to Mysql_user[root@localhost.localdomain] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[@localhost.localdomain] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[root@controller-0.localdomain] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[@controller-0.localdomain] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[root@controller-0] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[@controller-0] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[aodh@%] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[aodh@172.17.1.17] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[aodh@172.17.1.15] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[cinder@%] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[cinder@172.17.1.17] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[cinder@172.17.1.15] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[glance@%] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[glance@172.17.1.17] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[glance@172.17.1.15] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[gnocchi@%] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[gnocchi@172.17.1.17] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[gnocchi@172.17.1.15] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[heat@%] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[heat@172.17.1.17] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[heat@172.17.1.15] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[keystone@%] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[keystone@172.17.1.17] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[keystone@172.17.1.15] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[neutron@%] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[neutron@172.17.1.17] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[neutron@172.17.1.15] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[nova@%] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[nova@172.17.1.17] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[nova@172.17.1.15] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[nova_api@%] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[nova_api@172.17.1.17] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[nova_api@172.17.1.15] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[nova_placement@%] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to 
Mysql_user[nova_placement@172.17.1.17] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[nova_placement@172.17.1.15] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[sahara@%] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[sahara@172.17.1.17] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[sahara@172.17.1.15] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[panko@%] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[panko@172.17.1.17] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[panko@172.17.1.15] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[aodh@%/aodh.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[aodh@172.17.1.17/aodh.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[aodh@172.17.1.15/aodh.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[cinder@%/cinder.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[cinder@172.17.1.17/cinder.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[cinder@172.17.1.15/cinder.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[glance@%/glance.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[glance@172.17.1.17/glance.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[glance@172.17.1.15/glance.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[gnocchi@%/gnocchi.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[gnocchi@172.17.1.17/gnocchi.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[gnocchi@172.17.1.15/gnocchi.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[heat@%/heat.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[heat@172.17.1.17/heat.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[heat@172.17.1.15/heat.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[keystone@%/keystone.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[keystone@172.17.1.17/keystone.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[keystone@172.17.1.15/keystone.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[neutron@%/ovs_neutron.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[neutron@172.17.1.17/ovs_neutron.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[neutron@172.17.1.15/ovs_neutron.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[nova@%/nova.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[nova@172.17.1.17/nova.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[nova@172.17.1.15/nova.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[nova@%/nova_cell0.*] with 
'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[nova@172.17.1.17/nova_cell0.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[nova@172.17.1.15/nova_cell0.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[nova_api@%/nova_api.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[nova_api@172.17.1.17/nova_api.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[nova_api@172.17.1.15/nova_api.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[nova_placement@%/nova_placement.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[nova_placement@172.17.1.17/nova_placement.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[nova_placement@172.17.1.15/nova_placement.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[sahara@%/sahara.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[sahara@172.17.1.17/sahara.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[sahara@172.17.1.15/sahara.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[panko@%/panko.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[panko@172.17.1.17/panko.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[panko@172.17.1.15/panko.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_database[test] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_database[aodh] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_database[cinder] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_database[glance] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_database[gnocchi] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_database[heat] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_database[keystone] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_database[ovs_neutron] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_database[nova] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_database[nova_cell0] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_database[nova_api] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_database[nova_placement] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_database[sahara] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_database[panko] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[root@127.0.0.1] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[root@::1] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[@localhost] with 'before'", > "Debug: 
Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[@%] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[root@localhost.localdomain] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[@localhost.localdomain] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[root@controller-0.localdomain] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[@controller-0.localdomain] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[root@controller-0] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[@controller-0] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[aodh@%] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[aodh@172.17.1.17] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[aodh@172.17.1.15] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[cinder@%] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[cinder@172.17.1.17] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[cinder@172.17.1.15] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[glance@%] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[glance@172.17.1.17] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[glance@172.17.1.15] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[gnocchi@%] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[gnocchi@172.17.1.17] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[gnocchi@172.17.1.15] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[heat@%] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[heat@172.17.1.17] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[heat@172.17.1.15] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[keystone@%] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[keystone@172.17.1.17] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[keystone@172.17.1.15] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[neutron@%] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[neutron@172.17.1.17] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[neutron@172.17.1.15] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[nova@%] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[nova@172.17.1.17] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] 
to Mysql_user[nova@172.17.1.15] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[nova_api@%] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[nova_api@172.17.1.17] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[nova_api@172.17.1.15] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[nova_placement@%] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[nova_placement@172.17.1.17] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[nova_placement@172.17.1.15] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[sahara@%] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[sahara@172.17.1.17] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[sahara@172.17.1.15] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[panko@%] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[panko@172.17.1.17] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[panko@172.17.1.15] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[aodh@%/aodh.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[aodh@172.17.1.17/aodh.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[aodh@172.17.1.15/aodh.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[cinder@%/cinder.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[cinder@172.17.1.17/cinder.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[cinder@172.17.1.15/cinder.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[glance@%/glance.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[glance@172.17.1.17/glance.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[glance@172.17.1.15/glance.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[gnocchi@%/gnocchi.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[gnocchi@172.17.1.17/gnocchi.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[gnocchi@172.17.1.15/gnocchi.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[heat@%/heat.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[heat@172.17.1.17/heat.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[heat@172.17.1.15/heat.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[keystone@%/keystone.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to 
Mysql_grant[keystone@172.17.1.17/keystone.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[keystone@172.17.1.15/keystone.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[neutron@%/ovs_neutron.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[neutron@172.17.1.17/ovs_neutron.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[neutron@172.17.1.15/ovs_neutron.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[nova@%/nova.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[nova@172.17.1.17/nova.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[nova@172.17.1.15/nova.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[nova@%/nova_cell0.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[nova@172.17.1.17/nova_cell0.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[nova@172.17.1.15/nova_cell0.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[nova_api@%/nova_api.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[nova_api@172.17.1.17/nova_api.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[nova_api@172.17.1.15/nova_api.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[nova_placement@%/nova_placement.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[nova_placement@172.17.1.17/nova_placement.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[nova_placement@172.17.1.15/nova_placement.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[sahara@%/sahara.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[sahara@172.17.1.17/sahara.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[sahara@172.17.1.15/sahara.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[panko@%/panko.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[panko@172.17.1.17/panko.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[panko@172.17.1.15/panko.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_database[test] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_database[aodh] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_database[cinder] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_database[glance] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_database[gnocchi] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_database[heat] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_database[keystone] 
with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_database[ovs_neutron] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_database[nova] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_database[nova_cell0] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_database[nova_api] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_database[nova_placement] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_database[sahara] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_database[panko] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[root@127.0.0.1] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[root@::1] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[@localhost] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[@%] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[root@localhost.localdomain] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[@localhost.localdomain] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[root@controller-0.localdomain] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[@controller-0.localdomain] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[root@controller-0] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[@controller-0] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[aodh@%] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[aodh@172.17.1.17] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[aodh@172.17.1.15] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[cinder@%] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[cinder@172.17.1.17] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[cinder@172.17.1.15] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[glance@%] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[glance@172.17.1.17] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[glance@172.17.1.15] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[gnocchi@%] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[gnocchi@172.17.1.17] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[gnocchi@172.17.1.15] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[heat@%] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[heat@172.17.1.17] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[heat@172.17.1.15] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[keystone@%] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[keystone@172.17.1.17] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[keystone@172.17.1.15] with 'before'", > "Debug: 
Adding relationship from Exec[galera-ready] to Mysql_user[neutron@%] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[neutron@172.17.1.17] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[neutron@172.17.1.15] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[nova@%] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[nova@172.17.1.17] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[nova@172.17.1.15] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[nova_api@%] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[nova_api@172.17.1.17] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[nova_api@172.17.1.15] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[nova_placement@%] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[nova_placement@172.17.1.17] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[nova_placement@172.17.1.15] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[sahara@%] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[sahara@172.17.1.17] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[sahara@172.17.1.15] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[panko@%] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[panko@172.17.1.17] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[panko@172.17.1.15] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[aodh@%/aodh.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[aodh@172.17.1.17/aodh.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[aodh@172.17.1.15/aodh.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[cinder@%/cinder.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[cinder@172.17.1.17/cinder.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[cinder@172.17.1.15/cinder.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[glance@%/glance.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[glance@172.17.1.17/glance.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[glance@172.17.1.15/glance.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[gnocchi@%/gnocchi.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[gnocchi@172.17.1.17/gnocchi.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[gnocchi@172.17.1.15/gnocchi.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[heat@%/heat.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[heat@172.17.1.17/heat.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[heat@172.17.1.15/heat.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to 
Mysql_grant[keystone@%/keystone.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[keystone@172.17.1.17/keystone.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[keystone@172.17.1.15/keystone.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[neutron@%/ovs_neutron.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[neutron@172.17.1.17/ovs_neutron.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[neutron@172.17.1.15/ovs_neutron.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[nova@%/nova.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[nova@172.17.1.17/nova.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[nova@172.17.1.15/nova.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[nova@%/nova_cell0.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[nova@172.17.1.17/nova_cell0.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[nova@172.17.1.15/nova_cell0.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[nova_api@%/nova_api.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[nova_api@172.17.1.17/nova_api.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[nova_api@172.17.1.15/nova_api.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[nova_placement@%/nova_placement.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[nova_placement@172.17.1.17/nova_placement.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[nova_placement@172.17.1.15/nova_placement.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[sahara@%/sahara.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[sahara@172.17.1.17/sahara.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[sahara@172.17.1.15/sahara.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[panko@%/panko.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[panko@172.17.1.17/panko.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[panko@172.17.1.15/panko.*] with 'before'", > "Debug: Adding relationship from Anchor[mysql::client::start] to Class[Mysql::Client::Install] with 'before'", > "Debug: Adding relationship from Class[Mysql::Client::Install] to Anchor[mysql::client::end] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[aodh] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Client] to Mysql_database[aodh] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[cinder] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Client] to Mysql_database[cinder] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[glance] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Client] to Mysql_database[glance] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Server] to 
Mysql_database[gnocchi] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Client] to Mysql_database[gnocchi] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[heat] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Client] to Mysql_database[heat] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[keystone] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Client] to Mysql_database[keystone] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[ovs_neutron] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Client] to Mysql_database[ovs_neutron] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[nova] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Client] to Mysql_database[nova] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[nova_cell0] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Client] to Mysql_database[nova_cell0] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[nova_api] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Client] to Mysql_database[nova_api] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[nova_placement] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Client] to Mysql_database[nova_placement] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[sahara] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Client] to Mysql_database[sahara] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[panko] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Client] to Mysql_database[panko] with 'notify'", > "Debug: Adding relationship from Mysql_database[aodh] to Mysql_user[aodh@%] with 'notify'", > "Debug: Adding relationship from Mysql_user[aodh@%] to Mysql_grant[aodh@%/aodh.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[aodh] to Mysql_user[aodh@172.17.1.17] with 'notify'", > "Debug: Adding relationship from Mysql_user[aodh@172.17.1.17] to Mysql_grant[aodh@172.17.1.17/aodh.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[aodh] to Mysql_user[aodh@172.17.1.15] with 'notify'", > "Debug: Adding relationship from Mysql_user[aodh@172.17.1.15] to Mysql_grant[aodh@172.17.1.15/aodh.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[cinder] to Mysql_user[cinder@%] with 'notify'", > "Debug: Adding relationship from Mysql_user[cinder@%] to Mysql_grant[cinder@%/cinder.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[cinder] to Mysql_user[cinder@172.17.1.17] with 'notify'", > "Debug: Adding relationship from Mysql_user[cinder@172.17.1.17] to Mysql_grant[cinder@172.17.1.17/cinder.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[cinder] to Mysql_user[cinder@172.17.1.15] with 'notify'", > "Debug: Adding relationship from Mysql_user[cinder@172.17.1.15] to Mysql_grant[cinder@172.17.1.15/cinder.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[glance] to Mysql_user[glance@%] with 'notify'", > "Debug: Adding relationship from Mysql_user[glance@%] to Mysql_grant[glance@%/glance.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[glance] to Mysql_user[glance@172.17.1.17] with 
'notify'", > "Debug: Adding relationship from Mysql_user[glance@172.17.1.17] to Mysql_grant[glance@172.17.1.17/glance.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[glance] to Mysql_user[glance@172.17.1.15] with 'notify'", > "Debug: Adding relationship from Mysql_user[glance@172.17.1.15] to Mysql_grant[glance@172.17.1.15/glance.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[gnocchi] to Mysql_user[gnocchi@%] with 'notify'", > "Debug: Adding relationship from Mysql_user[gnocchi@%] to Mysql_grant[gnocchi@%/gnocchi.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[gnocchi] to Mysql_user[gnocchi@172.17.1.17] with 'notify'", > "Debug: Adding relationship from Mysql_user[gnocchi@172.17.1.17] to Mysql_grant[gnocchi@172.17.1.17/gnocchi.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[gnocchi] to Mysql_user[gnocchi@172.17.1.15] with 'notify'", > "Debug: Adding relationship from Mysql_user[gnocchi@172.17.1.15] to Mysql_grant[gnocchi@172.17.1.15/gnocchi.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[heat] to Mysql_user[heat@%] with 'notify'", > "Debug: Adding relationship from Mysql_user[heat@%] to Mysql_grant[heat@%/heat.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[heat] to Mysql_user[heat@172.17.1.17] with 'notify'", > "Debug: Adding relationship from Mysql_user[heat@172.17.1.17] to Mysql_grant[heat@172.17.1.17/heat.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[heat] to Mysql_user[heat@172.17.1.15] with 'notify'", > "Debug: Adding relationship from Mysql_user[heat@172.17.1.15] to Mysql_grant[heat@172.17.1.15/heat.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[keystone] to Mysql_user[keystone@%] with 'notify'", > "Debug: Adding relationship from Mysql_user[keystone@%] to Mysql_grant[keystone@%/keystone.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[keystone] to Mysql_user[keystone@172.17.1.17] with 'notify'", > "Debug: Adding relationship from Mysql_user[keystone@172.17.1.17] to Mysql_grant[keystone@172.17.1.17/keystone.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[keystone] to Mysql_user[keystone@172.17.1.15] with 'notify'", > "Debug: Adding relationship from Mysql_user[keystone@172.17.1.15] to Mysql_grant[keystone@172.17.1.15/keystone.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[ovs_neutron] to Mysql_user[neutron@%] with 'notify'", > "Debug: Adding relationship from Mysql_user[neutron@%] to Mysql_grant[neutron@%/ovs_neutron.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[ovs_neutron] to Mysql_user[neutron@172.17.1.17] with 'notify'", > "Debug: Adding relationship from Mysql_user[neutron@172.17.1.17] to Mysql_grant[neutron@172.17.1.17/ovs_neutron.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[ovs_neutron] to Mysql_user[neutron@172.17.1.15] with 'notify'", > "Debug: Adding relationship from Mysql_user[neutron@172.17.1.15] to Mysql_grant[neutron@172.17.1.15/ovs_neutron.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[nova] to Mysql_user[nova@%] with 'notify'", > "Debug: Adding relationship from Mysql_user[nova@%] to Mysql_grant[nova@%/nova.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[nova] to Mysql_user[nova@172.17.1.17] with 'notify'", > "Debug: Adding relationship from Mysql_user[nova@172.17.1.17] to Mysql_grant[nova@172.17.1.17/nova.*] with 'notify'", > "Debug: Adding 
relationship from Mysql_database[nova] to Mysql_user[nova@172.17.1.15] with 'notify'", > "Debug: Adding relationship from Mysql_user[nova@172.17.1.15] to Mysql_grant[nova@172.17.1.15/nova.*] with 'notify'", > "Debug: Adding relationship from Mysql_user[nova@%] to Mysql_grant[nova@%/nova_cell0.*] with 'notify'", > "Debug: Adding relationship from Mysql_user[nova@172.17.1.17] to Mysql_grant[nova@172.17.1.17/nova_cell0.*] with 'notify'", > "Debug: Adding relationship from Mysql_user[nova@172.17.1.15] to Mysql_grant[nova@172.17.1.15/nova_cell0.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[nova_api] to Mysql_user[nova_api@%] with 'notify'", > "Debug: Adding relationship from Mysql_user[nova_api@%] to Mysql_grant[nova_api@%/nova_api.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[nova_api] to Mysql_user[nova_api@172.17.1.17] with 'notify'", > "Debug: Adding relationship from Mysql_user[nova_api@172.17.1.17] to Mysql_grant[nova_api@172.17.1.17/nova_api.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[nova_api] to Mysql_user[nova_api@172.17.1.15] with 'notify'", > "Debug: Adding relationship from Mysql_user[nova_api@172.17.1.15] to Mysql_grant[nova_api@172.17.1.15/nova_api.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[nova_placement] to Mysql_user[nova_placement@%] with 'notify'", > "Debug: Adding relationship from Mysql_user[nova_placement@%] to Mysql_grant[nova_placement@%/nova_placement.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[nova_placement] to Mysql_user[nova_placement@172.17.1.17] with 'notify'", > "Debug: Adding relationship from Mysql_user[nova_placement@172.17.1.17] to Mysql_grant[nova_placement@172.17.1.17/nova_placement.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[nova_placement] to Mysql_user[nova_placement@172.17.1.15] with 'notify'", > "Debug: Adding relationship from Mysql_user[nova_placement@172.17.1.15] to Mysql_grant[nova_placement@172.17.1.15/nova_placement.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[sahara] to Mysql_user[sahara@%] with 'notify'", > "Debug: Adding relationship from Mysql_user[sahara@%] to Mysql_grant[sahara@%/sahara.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[sahara] to Mysql_user[sahara@172.17.1.17] with 'notify'", > "Debug: Adding relationship from Mysql_user[sahara@172.17.1.17] to Mysql_grant[sahara@172.17.1.17/sahara.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[sahara] to Mysql_user[sahara@172.17.1.15] with 'notify'", > "Debug: Adding relationship from Mysql_user[sahara@172.17.1.15] to Mysql_grant[sahara@172.17.1.15/sahara.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[panko] to Mysql_user[panko@%] with 'notify'", > "Debug: Adding relationship from Mysql_user[panko@%] to Mysql_grant[panko@%/panko.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[panko] to Mysql_user[panko@172.17.1.17] with 'notify'", > "Debug: Adding relationship from Mysql_user[panko@172.17.1.17] to Mysql_grant[panko@172.17.1.17/panko.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[panko] to Mysql_user[panko@172.17.1.15] with 'notify'", > "Debug: Adding relationship from Mysql_user[panko@172.17.1.15] to Mysql_grant[panko@172.17.1.15/panko.*] with 'notify'", > "Debug: File[mysql-config-file]: Adding default for owner", > "Debug: File[mysql-config-file]: Adding default for group", > "Debug: File[/etc/my.cnf.d]: Adding 
default for owner", > "Debug: File[/etc/my.cnf.d]: Adding default for group", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 4.54 seconds", > "Info: Applying configuration version '1537533153'", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_resource[galera]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_property[property-controller-0-galera-role]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_bundle[galera-bundle]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_database[test]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_database[aodh]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_database[cinder]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_database[glance]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_database[gnocchi]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_database[heat]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_database[keystone]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_database[ovs_neutron]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_database[nova]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_database[nova_cell0]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_database[nova_api]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_database[nova_placement]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_database[sahara]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_database[panko]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[root@127.0.0.1]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[root@::1]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[@localhost]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[root@localhost.localdomain]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[@localhost.localdomain]", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[root@controller-0.localdomain]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[@controller-0.localdomain]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[root@controller-0]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[@controller-0]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[aodh@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[aodh@172.17.1.17]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[aodh@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[cinder@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[cinder@172.17.1.17]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[cinder@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[glance@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[glance@172.17.1.17]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[glance@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[gnocchi@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[gnocchi@172.17.1.17]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[gnocchi@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[heat@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[heat@172.17.1.17]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[heat@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[keystone@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[keystone@172.17.1.17]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[keystone@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[neutron@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[neutron@172.17.1.17]", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[neutron@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[nova@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[nova@172.17.1.17]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[nova@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[nova_api@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[nova_api@172.17.1.17]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[nova_api@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[nova_placement@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[nova_placement@172.17.1.17]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[nova_placement@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[sahara@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[sahara@172.17.1.17]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[sahara@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[panko@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[panko@172.17.1.17]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[panko@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[aodh@%/aodh.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[aodh@172.17.1.17/aodh.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[aodh@172.17.1.15/aodh.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[cinder@%/cinder.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[cinder@172.17.1.17/cinder.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[cinder@172.17.1.15/cinder.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[glance@%/glance.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes 
to Mysql_grant[glance@172.17.1.17/glance.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[glance@172.17.1.15/glance.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[gnocchi@%/gnocchi.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[gnocchi@172.17.1.17/gnocchi.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[gnocchi@172.17.1.15/gnocchi.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[heat@%/heat.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[heat@172.17.1.17/heat.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[heat@172.17.1.15/heat.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[keystone@%/keystone.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[keystone@172.17.1.17/keystone.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[keystone@172.17.1.15/keystone.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[neutron@%/ovs_neutron.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[neutron@172.17.1.17/ovs_neutron.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[neutron@172.17.1.15/ovs_neutron.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[nova@%/nova.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[nova@172.17.1.17/nova.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[nova@172.17.1.15/nova.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[nova@%/nova_cell0.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[nova@172.17.1.17/nova_cell0.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[nova@172.17.1.15/nova_cell0.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[nova_api@%/nova_api.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[nova_api@172.17.1.17/nova_api.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[nova_api@172.17.1.15/nova_api.*]", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[nova_placement@%/nova_placement.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[nova_placement@172.17.1.17/nova_placement.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[nova_placement@172.17.1.15/nova_placement.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[sahara@%/sahara.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[sahara@172.17.1.17/sahara.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[sahara@172.17.1.15/sahara.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[panko@%/panko.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[panko@172.17.1.17/panko.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[panko@172.17.1.15/panko.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_database[test]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_database[aodh]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_database[cinder]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_database[glance]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_database[gnocchi]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_database[heat]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_database[keystone]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_database[ovs_neutron]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_database[nova]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_database[nova_cell0]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_database[nova_api]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_database[nova_placement]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_database[sahara]", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_database[panko]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[root@127.0.0.1]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[root@::1]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[@localhost]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[root@localhost.localdomain]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[@localhost.localdomain]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[root@controller-0.localdomain]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[@controller-0.localdomain]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[root@controller-0]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[@controller-0]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[aodh@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[aodh@172.17.1.17]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[aodh@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[cinder@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[cinder@172.17.1.17]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[cinder@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[glance@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[glance@172.17.1.17]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[glance@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[gnocchi@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[gnocchi@172.17.1.17]", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[gnocchi@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[heat@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[heat@172.17.1.17]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[heat@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[keystone@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[keystone@172.17.1.17]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[keystone@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[neutron@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[neutron@172.17.1.17]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[neutron@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[nova@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[nova@172.17.1.17]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[nova@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[nova_api@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[nova_api@172.17.1.17]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[nova_api@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[nova_placement@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[nova_placement@172.17.1.17]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[nova_placement@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[sahara@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[sahara@172.17.1.17]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to 
Mysql_user[sahara@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[panko@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[panko@172.17.1.17]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[panko@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[aodh@%/aodh.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[aodh@172.17.1.17/aodh.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[aodh@172.17.1.15/aodh.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[cinder@%/cinder.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[cinder@172.17.1.17/cinder.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[cinder@172.17.1.15/cinder.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[glance@%/glance.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[glance@172.17.1.17/glance.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[glance@172.17.1.15/glance.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[gnocchi@%/gnocchi.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[gnocchi@172.17.1.17/gnocchi.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[gnocchi@172.17.1.15/gnocchi.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[heat@%/heat.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[heat@172.17.1.17/heat.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[heat@172.17.1.15/heat.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[keystone@%/keystone.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[keystone@172.17.1.17/keystone.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to 
Mysql_grant[keystone@172.17.1.15/keystone.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[neutron@%/ovs_neutron.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[neutron@172.17.1.17/ovs_neutron.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[neutron@172.17.1.15/ovs_neutron.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[nova@%/nova.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[nova@172.17.1.17/nova.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[nova@172.17.1.15/nova.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[nova@%/nova_cell0.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[nova@172.17.1.17/nova_cell0.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[nova@172.17.1.15/nova_cell0.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[nova_api@%/nova_api.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[nova_api@172.17.1.17/nova_api.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[nova_api@172.17.1.15/nova_api.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[nova_placement@%/nova_placement.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[nova_placement@172.17.1.17/nova_placement.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[nova_placement@172.17.1.15/nova_placement.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[sahara@%/sahara.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[sahara@172.17.1.17/sahara.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[sahara@172.17.1.15/sahara.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[panko@%/panko.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[panko@172.17.1.17/panko.*]", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[panko@172.17.1.15/panko.*]", > "Debug: /Stage[main]/Mysql::Server/before: subscribes to Mysql_database[test]", > "Debug: /Stage[main]/Mysql::Server/before: subscribes to Mysql_database[aodh]", > "Debug: /Stage[main]/Mysql::Server/before: subscribes to Mysql_database[cinder]", > "Debug: /Stage[main]/Mysql::Server/before: subscribes to Mysql_database[glance]", > "Debug: /Stage[main]/Mysql::Server/before: subscribes to Mysql_database[gnocchi]", > "Debug: /Stage[main]/Mysql::Server/before: subscribes to Mysql_database[heat]", > "Debug: /Stage[main]/Mysql::Server/before: subscribes to Mysql_database[keystone]", > "Debug: /Stage[main]/Mysql::Server/before: subscribes to Mysql_database[ovs_neutron]", > "Debug: /Stage[main]/Mysql::Server/before: subscribes to Mysql_database[nova]", > "Debug: /Stage[main]/Mysql::Server/before: subscribes to Mysql_database[nova_cell0]", > "Debug: /Stage[main]/Mysql::Server/before: subscribes to Mysql_database[nova_api]", > "Debug: /Stage[main]/Mysql::Server/before: subscribes to Mysql_database[nova_placement]", > "Debug: /Stage[main]/Mysql::Server/before: subscribes to Mysql_database[sahara]", > "Debug: /Stage[main]/Mysql::Server/before: subscribes to Mysql_database[panko]", > "Debug: /Stage[main]/Mysql::Server/notify: subscribes to Mysql_database[aodh]", > "Debug: /Stage[main]/Mysql::Server/notify: subscribes to Mysql_database[cinder]", > "Debug: /Stage[main]/Mysql::Server/notify: subscribes to Mysql_database[glance]", > "Debug: /Stage[main]/Mysql::Server/notify: subscribes to Mysql_database[gnocchi]", > "Debug: /Stage[main]/Mysql::Server/notify: subscribes to Mysql_database[heat]", > "Debug: /Stage[main]/Mysql::Server/notify: subscribes to Mysql_database[keystone]", > "Debug: /Stage[main]/Mysql::Server/notify: subscribes to Mysql_database[ovs_neutron]", > "Debug: /Stage[main]/Mysql::Server/notify: subscribes to Mysql_database[nova]", > "Debug: /Stage[main]/Mysql::Server/notify: subscribes to Mysql_database[nova_cell0]", > "Debug: /Stage[main]/Mysql::Server/notify: subscribes to Mysql_database[nova_api]", > "Debug: /Stage[main]/Mysql::Server/notify: subscribes to Mysql_database[nova_placement]", > "Debug: /Stage[main]/Mysql::Server/notify: subscribes to Mysql_database[sahara]", > "Debug: /Stage[main]/Mysql::Server/notify: subscribes to Mysql_database[panko]", > "Debug: /Stage[main]/Mysql::Server::Config/before: subscribes to Class[Mysql::Server::Binarylog]", > "Debug: /Stage[main]/Mysql::Server::Install/before: subscribes to Class[Mysql::Server::Config]", > "Debug: /Stage[main]/Mysql::Server::Binarylog/before: subscribes to Class[Mysql::Server::Installdb]", > "Debug: /Stage[main]/Mysql::Server::Installdb/before: subscribes to Class[Mysql::Server::Service]", > "Debug: /Stage[main]/Mysql::Server::Installdb/File[/var/log/mariadb/mariadb.log]/require: subscribes to Mysql_datadir[/var/lib/mysql]", > "Debug: /Stage[main]/Mysql::Server::Service/before: subscribes to Class[Mysql::Server::Root_password]", > "Debug: /Stage[main]/Mysql::Server::Root_password/before: subscribes to Class[Mysql::Server::Providers]", > "Debug: /Stage[main]/Mysql::Server::Providers/before: subscribes to Anchor[mysql::server::end]", > "Debug: /Stage[main]/Mysql::Server::Account_security/require: subscribes to Anchor[mysql::server::end]", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[root@127.0.0.1]/require: subscribes to 
Anchor[mysql::server::end]", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[root@::1]/require: subscribes to Anchor[mysql::server::end]", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[@localhost]/require: subscribes to Anchor[mysql::server::end]", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[@%]/require: subscribes to Anchor[mysql::server::end]", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[root@localhost.localdomain]/require: subscribes to Anchor[mysql::server::end]", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[@localhost.localdomain]/require: subscribes to Anchor[mysql::server::end]", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[root@controller-0.localdomain]/require: subscribes to Anchor[mysql::server::end]", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[@controller-0.localdomain]/require: subscribes to Anchor[mysql::server::end]", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[root@controller-0]/require: subscribes to Anchor[mysql::server::end]", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[@controller-0]/require: subscribes to Anchor[mysql::server::end]", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_database[test]/require: subscribes to Anchor[mysql::server::end]", > "Debug: /Stage[main]/Mysql::Server/Anchor[mysql::server::start]/before: subscribes to Class[Mysql::Server::Install]", > "Debug: /Stage[main]/Aodh::Db::Mysql/notify: subscribes to Anchor[aodh::db::end]", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::install::end]/before: subscribes to Anchor[aodh::config::begin]", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::install::end]/notify: subscribes to Anchor[aodh::service::begin]", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::config::end]/before: subscribes to Anchor[aodh::db::begin]", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::config::end]/notify: subscribes to Anchor[aodh::service::begin]", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::db::begin]/before: subscribes to Anchor[aodh::db::end]", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::db::begin]/notify: subscribes to Class[Aodh::Db::Mysql]", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::db::end]/notify: subscribes to Anchor[aodh::dbsync::begin]", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::dbsync::begin]/before: subscribes to Anchor[aodh::dbsync::end]", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::dbsync::end]/notify: subscribes to Anchor[aodh::service::begin]", > "Debug: /Stage[main]/Cinder::Db::Mysql/notify: subscribes to Anchor[cinder::db::end]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::install::end]/before: subscribes to Anchor[cinder::config::begin]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::install::end]/notify: subscribes to Anchor[cinder::service::begin]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::end]/before: subscribes to Anchor[cinder::db::begin]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::end]/notify: subscribes to Anchor[cinder::service::begin]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::db::begin]/before: subscribes to Anchor[cinder::db::end]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::db::begin]/notify: subscribes to Class[Cinder::Db::Mysql]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::db::end]/notify: subscribes to Anchor[cinder::dbsync::begin]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::dbsync::begin]/before: 
subscribes to Anchor[cinder::dbsync::end]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::dbsync::end]/notify: subscribes to Anchor[cinder::service::begin]", > "Debug: /Stage[main]/Glance::Db::Mysql/notify: subscribes to Anchor[glance::db::end]", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::install::end]/before: subscribes to Anchor[glance::config::begin]", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::install::end]/notify: subscribes to Anchor[glance::service::begin]", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::config::end]/before: subscribes to Anchor[glance::db::begin]", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::config::end]/notify: subscribes to Anchor[glance::service::begin]", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::db::begin]/before: subscribes to Anchor[glance::db::end]", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::db::begin]/notify: subscribes to Class[Glance::Db::Mysql]", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::db::end]/notify: subscribes to Anchor[glance::dbsync::begin]", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::dbsync::begin]/before: subscribes to Anchor[glance::dbsync::end]", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::dbsync::end]/notify: subscribes to Anchor[glance::service::begin]", > "Debug: /Stage[main]/Gnocchi::Db::Mysql/notify: subscribes to Anchor[gnocchi::db::end]", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::install::end]/before: subscribes to Anchor[gnocchi::config::begin]", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::install::end]/notify: subscribes to Anchor[gnocchi::service::begin]", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::config::end]/before: subscribes to Anchor[gnocchi::db::begin]", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::config::end]/notify: subscribes to Anchor[gnocchi::service::begin]", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::db::begin]/before: subscribes to Anchor[gnocchi::db::end]", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::db::begin]/notify: subscribes to Class[Gnocchi::Db::Mysql]", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::db::end]/notify: subscribes to Anchor[gnocchi::dbsync::begin]", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::dbsync::begin]/before: subscribes to Anchor[gnocchi::dbsync::end]", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::dbsync::end]/notify: subscribes to Anchor[gnocchi::service::begin]", > "Debug: /Stage[main]/Heat::Db::Mysql/notify: subscribes to Anchor[heat::db::end]", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::install::end]/before: subscribes to Anchor[heat::config::begin]", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::install::end]/notify: subscribes to Anchor[heat::service::begin]", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::config::end]/before: subscribes to Anchor[heat::db::begin]", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::config::end]/notify: subscribes to Anchor[heat::service::begin]", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::db::begin]/before: subscribes to Anchor[heat::db::end]", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::db::begin]/notify: subscribes to Class[Heat::Db::Mysql]", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::db::end]/notify: subscribes to Anchor[heat::dbsync::begin]", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::dbsync::begin]/before: subscribes to Anchor[heat::dbsync::end]", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::dbsync::end]/notify: subscribes to Anchor[heat::service::begin]", > "Debug: 
/Stage[main]/Keystone::Db::Mysql/notify: subscribes to Anchor[keystone::db::end]", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::install::end]/before: subscribes to Anchor[keystone::config::begin]", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::install::end]/notify: subscribes to Anchor[keystone::service::begin]", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::config::end]/before: subscribes to Anchor[keystone::db::begin]", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::config::end]/notify: subscribes to Anchor[keystone::service::begin]", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::db::begin]/before: subscribes to Anchor[keystone::db::end]", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::db::begin]/notify: subscribes to Class[Keystone::Db::Mysql]", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::db::end]/notify: subscribes to Anchor[keystone::dbsync::begin]", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::dbsync::begin]/before: subscribes to Anchor[keystone::dbsync::end]", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::dbsync::end]/notify: subscribes to Anchor[keystone::service::begin]", > "Debug: /Stage[main]/Neutron::Db::Mysql/notify: subscribes to Anchor[neutron::db::end]", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::install::end]/before: subscribes to Anchor[neutron::config::begin]", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::install::end]/notify: subscribes to Anchor[neutron::service::begin]", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::config::end]/before: subscribes to Anchor[neutron::db::begin]", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::config::end]/notify: subscribes to Anchor[neutron::service::begin]", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::db::begin]/before: subscribes to Anchor[neutron::db::end]", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::db::begin]/notify: subscribes to Class[Neutron::Db::Mysql]", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::db::end]/notify: subscribes to Anchor[neutron::dbsync::begin]", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::dbsync::begin]/before: subscribes to Anchor[neutron::dbsync::end]", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::dbsync::end]/notify: subscribes to Anchor[neutron::service::begin]", > "Debug: /Stage[main]/Nova::Db::Mysql/notify: subscribes to Anchor[nova::db::end]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::install::end]/before: subscribes to Anchor[nova::config::begin]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::install::end]/notify: subscribes to Anchor[nova::service::begin]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::config::end]/before: subscribes to Anchor[nova::db::begin]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::config::end]/notify: subscribes to Anchor[nova::service::begin]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::db::begin]/before: subscribes to Anchor[nova::db::end]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::db::begin]/notify: subscribes to Class[Nova::Db::Mysql]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::db::begin]/notify: subscribes to Class[Nova::Db::Mysql_api]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::db::begin]/notify: subscribes to Class[Nova::Db::Mysql_placement]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::db::end]/notify: subscribes to Anchor[nova::service::begin]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::dbsync_api::begin]/subscribe: subscribes to Anchor[nova::db::end]", > "Debug: 
/Stage[main]/Nova::Deps/Anchor[nova::dbsync_api::begin]/before: subscribes to Anchor[nova::dbsync_api::end]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::dbsync_api::end]/notify: subscribes to Anchor[nova::service::begin]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::dbsync::begin]/subscribe: subscribes to Anchor[nova::db::end]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::dbsync::begin]/subscribe: subscribes to Anchor[nova::dbsync_api::end]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::dbsync::begin]/before: subscribes to Anchor[nova::dbsync::end]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::dbsync::end]/notify: subscribes to Anchor[nova::service::begin]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::cell_v2::begin]/subscribe: subscribes to Anchor[nova::dbsync_api::end]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::cell_v2::begin]/notify: subscribes to Anchor[nova::cell_v2::end]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::cell_v2::end]/notify: subscribes to Anchor[nova::dbsync::begin]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::db_online_data_migrations::begin]/subscribe: subscribes to Anchor[nova::dbsync_api::end]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::db_online_data_migrations::begin]/before: subscribes to Anchor[nova::db_online_data_migrations::end]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::db_online_data_migrations::end]/notify: subscribes to Anchor[nova::service::begin]", > "Debug: /Stage[main]/Nova::Db::Mysql_api/notify: subscribes to Anchor[nova::db::end]", > "Debug: /Stage[main]/Nova::Db::Mysql_placement/notify: subscribes to Anchor[nova::db::end]", > "Debug: /Stage[main]/Sahara::Db::Mysql/notify: subscribes to Anchor[sahara::db::end]", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::install::end]/before: subscribes to Anchor[sahara::config::begin]", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::install::end]/notify: subscribes to Anchor[sahara::service::begin]", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::config::end]/before: subscribes to Anchor[sahara::db::begin]", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::config::end]/notify: subscribes to Anchor[sahara::service::begin]", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::db::begin]/before: subscribes to Anchor[sahara::db::end]", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::db::begin]/notify: subscribes to Class[Sahara::Db::Mysql]", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::db::end]/notify: subscribes to Anchor[sahara::dbsync::begin]", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::dbsync::begin]/before: subscribes to Anchor[sahara::dbsync::end]", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::dbsync::end]/notify: subscribes to Anchor[sahara::service::begin]", > "Debug: /Stage[main]/Panko::Db::Mysql/notify: subscribes to Anchor[panko::db::end]", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::install::end]/before: subscribes to Anchor[panko::config::begin]", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::install::end]/notify: subscribes to Anchor[panko::service::begin]", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::config::end]/before: subscribes to Anchor[panko::db::begin]", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::config::end]/notify: subscribes to Anchor[panko::service::begin]", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::db::begin]/before: subscribes to Anchor[panko::db::end]", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::db::begin]/notify: subscribes to Class[Panko::Db::Mysql]", > "Debug: 
/Stage[main]/Panko::Deps/Anchor[panko::db::end]/notify: subscribes to Anchor[panko::dbsync::begin]", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::dbsync::begin]/before: subscribes to Anchor[panko::dbsync::end]", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::dbsync::end]/notify: subscribes to Anchor[panko::service::begin]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Pacemaker::Property[galera-role-controller-0]/before: subscribes to Pacemaker::Resource::Bundle[galera-bundle]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Pacemaker::Resource::Ocf[galera]/require: subscribes to Class[Mysql::Server]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Pacemaker::Resource::Ocf[galera]/require: subscribes to Pacemaker::Resource::Bundle[galera-bundle]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Pacemaker::Resource::Ocf[galera]/before: subscribes to Exec[galera-ready]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_database[test]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_database[aodh]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_database[cinder]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_database[glance]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_database[gnocchi]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_database[heat]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_database[keystone]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_database[ovs_neutron]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_database[nova]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_database[nova_cell0]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_database[nova_api]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_database[nova_placement]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_database[sahara]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_database[panko]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[root@127.0.0.1]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[root@::1]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[@localhost]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[@%]", > 
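(Editorial note on the surrounding output: the "Debug: Adding relationship ... with 'before'/'notify'" and "... subscribes to ..." messages are Puppet assembling its resource graph before applying the compiled catalog on controller-0. Two patterns account for almost all of the edges logged here: every Mysql_database -> Mysql_user -> Mysql_grant chain is made to wait on Exec[galera-ready], i.e. on the Pacemaker-managed Galera bundle accepting connections, and each service's ::db::mysql class is fenced between its <service>::db::begin and <service>::db::end anchors. A minimal sketch of manifests that produce edges of this shape, assuming the puppetlabs-mysql mysql::db define; the password, path and clustercheck invocation below are illustrative placeholders, not values taken from the actual TripleO manifests:

    # Hypothetical sketch -- not the puppet-tripleo source.
    # Gate all DB work on the Galera cluster being writable.
    exec { 'galera-ready':
      command   => '/usr/bin/clustercheck >/dev/null',  # placeholder check command
      path      => ['/usr/bin', '/bin'],
      timeout   => 30,
      tries     => 180,
      try_sleep => 10,
    }

    # mysql::db creates the Mysql_database, Mysql_user and Mysql_grant
    # resources seen in the log, chained database -> user -> grant.
    mysql::db { 'keystone':
      user     => 'keystone',
      password => 'CHANGEME',            # placeholder
      host     => '172.17.1.15',         # host from the logged grants
      grant    => ['ALL'],
      require  => Exec['galera-ready'],  # yields the 'before' edges logged above
    }

Under this pattern, `puppet apply` emits one "Adding relationship ... with 'before'" line per database/user/grant resource behind Exec[galera-ready], which matches the repetition visible in this section.)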
"Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[root@localhost.localdomain]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[@localhost.localdomain]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[root@controller-0.localdomain]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[@controller-0.localdomain]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[root@controller-0]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[@controller-0]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[aodh@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[aodh@172.17.1.17]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[aodh@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[cinder@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[cinder@172.17.1.17]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[cinder@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[glance@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[glance@172.17.1.17]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[glance@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[gnocchi@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[gnocchi@172.17.1.17]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[gnocchi@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[heat@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[heat@172.17.1.17]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[heat@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[keystone@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[keystone@172.17.1.17]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[keystone@172.17.1.15]", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[neutron@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[neutron@172.17.1.17]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[neutron@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[nova@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[nova@172.17.1.17]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[nova@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[nova_api@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[nova_api@172.17.1.17]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[nova_api@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[nova_placement@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[nova_placement@172.17.1.17]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[nova_placement@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[sahara@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[sahara@172.17.1.17]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[sahara@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[panko@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[panko@172.17.1.17]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[panko@172.17.1.15]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[aodh@%/aodh.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[aodh@172.17.1.17/aodh.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[aodh@172.17.1.15/aodh.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[cinder@%/cinder.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[cinder@172.17.1.17/cinder.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to 
Mysql_grant[cinder@172.17.1.15/cinder.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[glance@%/glance.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[glance@172.17.1.17/glance.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[glance@172.17.1.15/glance.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[gnocchi@%/gnocchi.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[gnocchi@172.17.1.17/gnocchi.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[gnocchi@172.17.1.15/gnocchi.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[heat@%/heat.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[heat@172.17.1.17/heat.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[heat@172.17.1.15/heat.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[keystone@%/keystone.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[keystone@172.17.1.17/keystone.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[keystone@172.17.1.15/keystone.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[neutron@%/ovs_neutron.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[neutron@172.17.1.17/ovs_neutron.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[neutron@172.17.1.15/ovs_neutron.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[nova@%/nova.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[nova@172.17.1.17/nova.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[nova@172.17.1.15/nova.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[nova@%/nova_cell0.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[nova@172.17.1.17/nova_cell0.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[nova@172.17.1.15/nova_cell0.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[nova_api@%/nova_api.*]", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[nova_api@172.17.1.17/nova_api.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[nova_api@172.17.1.15/nova_api.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[nova_placement@%/nova_placement.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[nova_placement@172.17.1.17/nova_placement.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[nova_placement@172.17.1.15/nova_placement.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[sahara@%/sahara.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[sahara@172.17.1.17/sahara.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[sahara@172.17.1.15/sahara.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[panko@%/panko.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[panko@172.17.1.17/panko.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[panko@172.17.1.15/panko.*]", > "Debug: /Stage[main]/Mysql::Client/notify: subscribes to Mysql_database[aodh]", > "Debug: /Stage[main]/Mysql::Client/notify: subscribes to Mysql_database[cinder]", > "Debug: /Stage[main]/Mysql::Client/notify: subscribes to Mysql_database[glance]", > "Debug: /Stage[main]/Mysql::Client/notify: subscribes to Mysql_database[gnocchi]", > "Debug: /Stage[main]/Mysql::Client/notify: subscribes to Mysql_database[heat]", > "Debug: /Stage[main]/Mysql::Client/notify: subscribes to Mysql_database[keystone]", > "Debug: /Stage[main]/Mysql::Client/notify: subscribes to Mysql_database[ovs_neutron]", > "Debug: /Stage[main]/Mysql::Client/notify: subscribes to Mysql_database[nova]", > "Debug: /Stage[main]/Mysql::Client/notify: subscribes to Mysql_database[nova_cell0]", > "Debug: /Stage[main]/Mysql::Client/notify: subscribes to Mysql_database[nova_api]", > "Debug: /Stage[main]/Mysql::Client/notify: subscribes to Mysql_database[nova_placement]", > "Debug: /Stage[main]/Mysql::Client/notify: subscribes to Mysql_database[sahara]", > "Debug: /Stage[main]/Mysql::Client/notify: subscribes to Mysql_database[panko]", > "Debug: /Stage[main]/Mysql::Client::Install/before: subscribes to Anchor[mysql::client::end]", > "Debug: /Stage[main]/Mysql::Client/Anchor[mysql::client::start]/before: subscribes to Class[Mysql::Client::Install]", > "Debug: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Mysql_database[aodh]/notify: subscribes to Mysql_user[aodh@%]", > "Debug: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Mysql_database[aodh]/notify: subscribes to Mysql_user[aodh@172.17.1.17]", > "Debug: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Mysql_database[aodh]/notify: subscribes to Mysql_user[aodh@172.17.1.15]", > "Debug: 
/Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Mysql_database[cinder]/notify: subscribes to Mysql_user[cinder@%]", > "Debug: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Mysql_database[cinder]/notify: subscribes to Mysql_user[cinder@172.17.1.17]", > "Debug: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Mysql_database[cinder]/notify: subscribes to Mysql_user[cinder@172.17.1.15]", > "Debug: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Mysql_database[glance]/notify: subscribes to Mysql_user[glance@%]", > "Debug: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Mysql_database[glance]/notify: subscribes to Mysql_user[glance@172.17.1.17]", > "Debug: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Mysql_database[glance]/notify: subscribes to Mysql_user[glance@172.17.1.15]", > "Debug: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Mysql_database[gnocchi]/notify: subscribes to Mysql_user[gnocchi@%]", > "Debug: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Mysql_database[gnocchi]/notify: subscribes to Mysql_user[gnocchi@172.17.1.17]", > "Debug: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Mysql_database[gnocchi]/notify: subscribes to Mysql_user[gnocchi@172.17.1.15]", > "Debug: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Mysql_database[heat]/notify: subscribes to Mysql_user[heat@%]", > "Debug: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Mysql_database[heat]/notify: subscribes to Mysql_user[heat@172.17.1.17]", > "Debug: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Mysql_database[heat]/notify: subscribes to Mysql_user[heat@172.17.1.15]", > "Debug: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Mysql_database[keystone]/notify: subscribes to Mysql_user[keystone@%]", > "Debug: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Mysql_database[keystone]/notify: subscribes to Mysql_user[keystone@172.17.1.17]", > "Debug: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Mysql_database[keystone]/notify: subscribes to Mysql_user[keystone@172.17.1.15]", > "Debug: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Mysql_database[ovs_neutron]/notify: subscribes to Mysql_user[neutron@%]", > "Debug: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Mysql_database[ovs_neutron]/notify: subscribes to Mysql_user[neutron@172.17.1.17]", > "Debug: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Mysql_database[ovs_neutron]/notify: subscribes to Mysql_user[neutron@172.17.1.15]", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Mysql_database[nova]/notify: subscribes to Mysql_user[nova@%]", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Mysql_database[nova]/notify: subscribes to Mysql_user[nova@172.17.1.17]", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Mysql_database[nova]/notify: subscribes to Mysql_user[nova@172.17.1.15]", > "Debug: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Mysql_database[nova_api]/notify: subscribes to Mysql_user[nova_api@%]", > "Debug: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Mysql_database[nova_api]/notify: subscribes to Mysql_user[nova_api@172.17.1.17]", > "Debug: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Mysql_database[nova_api]/notify: subscribes to 
Mysql_user[nova_api@172.17.1.15]", > "Debug: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Mysql_database[nova_placement]/notify: subscribes to Mysql_user[nova_placement@%]", > "Debug: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Mysql_database[nova_placement]/notify: subscribes to Mysql_user[nova_placement@172.17.1.17]", > "Debug: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Mysql_database[nova_placement]/notify: subscribes to Mysql_user[nova_placement@172.17.1.15]", > "Debug: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Mysql_database[sahara]/notify: subscribes to Mysql_user[sahara@%]", > "Debug: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Mysql_database[sahara]/notify: subscribes to Mysql_user[sahara@172.17.1.17]", > "Debug: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Mysql_database[sahara]/notify: subscribes to Mysql_user[sahara@172.17.1.15]", > "Debug: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Mysql_database[panko]/notify: subscribes to Mysql_user[panko@%]", > "Debug: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Mysql_database[panko]/notify: subscribes to Mysql_user[panko@172.17.1.17]", > "Debug: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Mysql_database[panko]/notify: subscribes to Mysql_user[panko@172.17.1.15]", > "Debug: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_%]/Mysql_user[aodh@%]/notify: subscribes to Mysql_grant[aodh@%/aodh.*]", > "Debug: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.17]/Mysql_user[aodh@172.17.1.17]/notify: subscribes to Mysql_grant[aodh@172.17.1.17/aodh.*]", > "Debug: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.15]/Mysql_user[aodh@172.17.1.15]/notify: subscribes to Mysql_grant[aodh@172.17.1.15/aodh.*]", > "Debug: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Openstacklib::Db::Mysql::Host_access[cinder_%]/Mysql_user[cinder@%]/notify: subscribes to Mysql_grant[cinder@%/cinder.*]", > "Debug: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.17]/Mysql_user[cinder@172.17.1.17]/notify: subscribes to Mysql_grant[cinder@172.17.1.17/cinder.*]", > "Debug: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.15]/Mysql_user[cinder@172.17.1.15]/notify: subscribes to Mysql_grant[cinder@172.17.1.15/cinder.*]", > "Debug: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_%]/Mysql_user[glance@%]/notify: subscribes to Mysql_grant[glance@%/glance.*]", > "Debug: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_172.17.1.17]/Mysql_user[glance@172.17.1.17]/notify: subscribes to Mysql_grant[glance@172.17.1.17/glance.*]", > "Debug: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_172.17.1.15]/Mysql_user[glance@172.17.1.15]/notify: subscribes to Mysql_grant[glance@172.17.1.15/glance.*]", > "Debug: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Openstacklib::Db::Mysql::Host_access[gnocchi_%]/Mysql_user[gnocchi@%]/notify: subscribes to 
Mysql_grant[gnocchi@%/gnocchi.*]", > "Debug: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.17]/Mysql_user[gnocchi@172.17.1.17]/notify: subscribes to Mysql_grant[gnocchi@172.17.1.17/gnocchi.*]", > "Debug: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.15]/Mysql_user[gnocchi@172.17.1.15]/notify: subscribes to Mysql_grant[gnocchi@172.17.1.15/gnocchi.*]", > "Debug: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_%]/Mysql_user[heat@%]/notify: subscribes to Mysql_grant[heat@%/heat.*]", > "Debug: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_172.17.1.17]/Mysql_user[heat@172.17.1.17]/notify: subscribes to Mysql_grant[heat@172.17.1.17/heat.*]", > "Debug: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_172.17.1.15]/Mysql_user[heat@172.17.1.15]/notify: subscribes to Mysql_grant[heat@172.17.1.15/heat.*]", > "Debug: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_%]/Mysql_user[keystone@%]/notify: subscribes to Mysql_grant[keystone@%/keystone.*]", > "Debug: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.17]/Mysql_user[keystone@172.17.1.17]/notify: subscribes to Mysql_grant[keystone@172.17.1.17/keystone.*]", > "Debug: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.15]/Mysql_user[keystone@172.17.1.15]/notify: subscribes to Mysql_grant[keystone@172.17.1.15/keystone.*]", > "Debug: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[ovs_neutron_%]/Mysql_user[neutron@%]/notify: subscribes to Mysql_grant[neutron@%/ovs_neutron.*]", > "Debug: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.17]/Mysql_user[neutron@172.17.1.17]/notify: subscribes to Mysql_grant[neutron@172.17.1.17/ovs_neutron.*]", > "Debug: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.15]/Mysql_user[neutron@172.17.1.15]/notify: subscribes to Mysql_grant[neutron@172.17.1.15/ovs_neutron.*]", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_%]/Mysql_user[nova@%]/notify: subscribes to Mysql_grant[nova@%/nova.*]", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_%]/Mysql_user[nova@%]/notify: subscribes to Mysql_grant[nova@%/nova_cell0.*]", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_172.17.1.17]/Mysql_user[nova@172.17.1.17]/notify: subscribes to Mysql_grant[nova@172.17.1.17/nova.*]", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_172.17.1.17]/Mysql_user[nova@172.17.1.17]/notify: subscribes to Mysql_grant[nova@172.17.1.17/nova_cell0.*]", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_172.17.1.15]/Mysql_user[nova@172.17.1.15]/notify: subscribes to Mysql_grant[nova@172.17.1.15/nova.*]", > "Debug: 
/Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_172.17.1.15]/Mysql_user[nova@172.17.1.15]/notify: subscribes to Mysql_grant[nova@172.17.1.15/nova_cell0.*]", > "Debug: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_%]/Mysql_user[nova_api@%]/notify: subscribes to Mysql_grant[nova_api@%/nova_api.*]", > "Debug: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.17]/Mysql_user[nova_api@172.17.1.17]/notify: subscribes to Mysql_grant[nova_api@172.17.1.17/nova_api.*]", > "Debug: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.15]/Mysql_user[nova_api@172.17.1.15]/notify: subscribes to Mysql_grant[nova_api@172.17.1.15/nova_api.*]", > "Debug: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Openstacklib::Db::Mysql::Host_access[nova_placement_%]/Mysql_user[nova_placement@%]/notify: subscribes to Mysql_grant[nova_placement@%/nova_placement.*]", > "Debug: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.17]/Mysql_user[nova_placement@172.17.1.17]/notify: subscribes to Mysql_grant[nova_placement@172.17.1.17/nova_placement.*]", > "Debug: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.15]/Mysql_user[nova_placement@172.17.1.15]/notify: subscribes to Mysql_grant[nova_placement@172.17.1.15/nova_placement.*]", > "Debug: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Openstacklib::Db::Mysql::Host_access[sahara_%]/Mysql_user[sahara@%]/notify: subscribes to Mysql_grant[sahara@%/sahara.*]", > "Debug: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.17]/Mysql_user[sahara@172.17.1.17]/notify: subscribes to Mysql_grant[sahara@172.17.1.17/sahara.*]", > "Debug: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.15]/Mysql_user[sahara@172.17.1.15]/notify: subscribes to Mysql_grant[sahara@172.17.1.15/sahara.*]", > "Debug: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Openstacklib::Db::Mysql::Host_access[panko_%]/Mysql_user[panko@%]/notify: subscribes to Mysql_grant[panko@%/panko.*]", > "Debug: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Openstacklib::Db::Mysql::Host_access[panko_172.17.1.17]/Mysql_user[panko@172.17.1.17]/notify: subscribes to Mysql_grant[panko@172.17.1.17/panko.*]", > "Debug: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Openstacklib::Db::Mysql::Host_access[panko_172.17.1.15]/Mysql_user[panko@172.17.1.15]/notify: subscribes to Mysql_grant[panko@172.17.1.15/panko.*]", > "Debug: /Stage[main]/Mysql::Server::Config/File[mysql-config-file]: Adding autorequire relationship with File[/etc/my.cnf.d]", > "Debug: /Stage[main]/Mysql::Server::Installdb/Mysql_datadir[/var/lib/mysql]: Adding autorequire relationship with Package[mysql-server]", > "Debug: Stage[main]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Settings]: Not tagged with file, 
file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Main]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Tripleo::Profile::Base::Pacemaker]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Pacemaker::Params]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Pacemaker::Install]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Pacemaker::Install/Package[pacemaker]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Pacemaker::Install/Package[pcs]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Pacemaker::Install/Package[fence-agents-all]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Pacemaker::Install/Package[pacemaker-libs]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Pacemaker::Service]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Systemd::Unit_file[docker.service]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Pacemaker::Stonith]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Pacemaker::Property[Disable STONITH]: Not tagged with file, file_line, concat, augeas, 
pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Pacemaker::Resource_defaults]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Pacemaker::Resource_defaults/Pcmk_resource_default[resource-stickiness]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Tripleo::Profile::Pacemaker::Database::Mysql_bundle]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Tripleo::Profile::Pacemaker::Database::Mysql_bundle]: Resource is being skipped, unscheduling all events", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/ensure: defined content as '{md5}45315a4298fe7ee61818e38c304b810f'", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]: The container Class[Tripleo::Profile::Pacemaker::Database::Mysql_bundle] will propagate my refresh event", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/ensure: defined content as '{md5}0df3f6bc676cf9b7c80a6b9d1de45820'", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]: The container Class[Tripleo::Profile::Pacemaker::Database::Mysql_bundle] will propagate my refresh event", > "Debug: Class[Tripleo::Profile::Base::Database::Mysql]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Tripleo::Profile::Base::Database::Mysql]: Resource is being skipped, unscheduling all events", > "Debug: Class[Mysql::Params]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Mysql::Params]: Resource is being skipped, unscheduling all events", > "Debug: Class[Mysql::Server]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Mysql::Server]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Mysql::Server/Anchor[mysql::server::start]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: 
/Stage[main]/Mysql::Server/Anchor[mysql::server::start]: Resource is being skipped, unscheduling all events", > "Debug: Class[Mysql::Server::Install]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Mysql::Server::Install]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Mysql::Server::Install/Package[mysql-server]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Mysql::Server::Install/Package[mysql-server]: Resource is being skipped, unscheduling all events", > "Debug: Class[Mysql::Server::Config]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Mysql::Server::Config]: Resource is being skipped, unscheduling all events", > "Info: Computing checksum on file /etc/my.cnf.d/galera.cnf", > "Info: /Stage[main]/Mysql::Server::Config/File[mysql-config-file]: Filebucketed /etc/my.cnf.d/galera.cnf to puppet with sum af90358207ccfecae7af249d5ef7dd3e", > "Notice: /Stage[main]/Mysql::Server::Config/File[mysql-config-file]/content: content changed '{md5}af90358207ccfecae7af249d5ef7dd3e' to '{md5}8c6cfba441ae40b019726afa035445cd'", > "Debug: /Stage[main]/Mysql::Server::Config/File[mysql-config-file]: The container Class[Mysql::Server::Config] will propagate my refresh event", > "Info: Class[Mysql::Server::Config]: Unscheduling all events on Class[Mysql::Server::Config]", > "Debug: Class[Mysql::Server::Binarylog]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Mysql::Server::Binarylog]: Resource is being skipped, unscheduling all events", > "Debug: Class[Mysql::Server::Installdb]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Mysql::Server::Installdb]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Mysql::Server::Installdb/Mysql_datadir[/var/lib/mysql]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Mysql::Server::Installdb/Mysql_datadir[/var/lib/mysql]: Resource is being skipped, unscheduling all events", > "Notice: /Stage[main]/Mysql::Server::Installdb/File[/var/log/mariadb/mariadb.log]/ensure: created", > "Debug: /Stage[main]/Mysql::Server::Installdb/File[/var/log/mariadb/mariadb.log]: The container Class[Mysql::Server::Installdb] will propagate my refresh event", > "Info: Class[Mysql::Server::Installdb]: Unscheduling all events on Class[Mysql::Server::Installdb]", > 
"Debug: Class[Mysql::Server::Service]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Mysql::Server::Service]: Resource is being skipped, unscheduling all events", > "Debug: Class[Mysql::Server::Root_password]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Mysql::Server::Root_password]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Mysql::Server::Root_password/Exec[remove install pass]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Mysql::Server::Root_password/Exec[remove install pass]: Resource is being skipped, unscheduling all events", > "Debug: Class[Mysql::Server::Providers]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Mysql::Server::Providers]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Mysql::Server/Anchor[mysql::server::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Mysql::Server/Anchor[mysql::server::end]: Resource is being skipped, unscheduling all events", > "Debug: Class[Mysql::Server::Account_security]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Mysql::Server::Account_security]: Resource is being skipped, unscheduling all events", > "Debug: Class[Aodh::Deps]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Aodh::Deps]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::install::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::install::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::install::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: 
/Stage[main]/Aodh::Deps/Anchor[aodh::install::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::config::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::config::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::config::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::config::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::db::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::db::begin]: Resource is being skipped, unscheduling all events", > "Debug: Class[Aodh::Db::Mysql]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Aodh::Db::Mysql]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::service::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::service::end]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql[aodh]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql[aodh]: Resource is being skipped, unscheduling all events", > "Debug: Class[Cinder::Deps]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Cinder::Deps]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::install::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::install::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::install::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, 
pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::install::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::db::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::db::begin]: Resource is being skipped, unscheduling all events", > "Debug: Class[Cinder::Db::Mysql]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Cinder::Db::Mysql]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::service::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::service::end]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql[cinder]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql[cinder]: Resource is being skipped, unscheduling all events", > "Debug: Class[Glance::Deps]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Glance::Deps]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::install::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::install::begin]: Resource is being skipped, unscheduling all events", > "Debug: 
/Stage[main]/Glance::Deps/Anchor[glance::install::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::install::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::config::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::config::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::config::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::config::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::db::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::db::begin]: Resource is being skipped, unscheduling all events", > "Debug: Class[Glance::Db::Mysql]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Glance::Db::Mysql]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::service::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::service::end]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql[glance]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql[glance]: Resource is being skipped, unscheduling all events", > "Debug: Class[Gnocchi::Deps]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Gnocchi::Deps]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::install::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, 
mysql_grant, mysql_user", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::install::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::install::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::install::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::config::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::config::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::config::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::config::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::db::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::db::begin]: Resource is being skipped, unscheduling all events", > "Debug: Class[Gnocchi::Db::Mysql]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Gnocchi::Db::Mysql]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::service::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::service::end]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql[gnocchi]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql[gnocchi]: Resource is being skipped, unscheduling all events", > "Debug: Class[Heat::Deps]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Heat::Deps]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::install::begin]: Not tagged with file, file_line, concat, augeas, 
pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::install::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::install::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::install::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::config::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::config::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::config::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::config::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::db::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::db::begin]: Resource is being skipped, unscheduling all events", > "Debug: Class[Heat::Db::Mysql]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Heat::Db::Mysql]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::service::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::service::end]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql[heat]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql[heat]: Resource is being skipped, unscheduling all events", > "Debug: Class[Keystone::Deps]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Keystone::Deps]: Resource is being skipped, unscheduling all events", > "Debug: 
/Stage[main]/Keystone::Deps/Anchor[keystone::install::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::install::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::install::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::install::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::config::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::config::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::config::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::config::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::db::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::db::begin]: Resource is being skipped, unscheduling all events", > "Debug: Class[Keystone::Db::Mysql]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Keystone::Db::Mysql]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::service::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::service::end]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql[keystone]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql[keystone]: Resource is being skipped, unscheduling all events", > "Debug: Class[Neutron::Deps]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, 
pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Neutron::Deps]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::install::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::install::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::install::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::install::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::config::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::config::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::config::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::config::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::db::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::db::begin]: Resource is being skipped, unscheduling all events", > "Debug: Class[Neutron::Db::Mysql]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Neutron::Db::Mysql]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::service::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::service::end]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql[neutron]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql[neutron]: Resource is being skipped, 
unscheduling all events", > "Debug: Class[Nova::Deps]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Nova::Deps]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::install::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::install::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::install::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::install::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::config::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::config::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::config::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::config::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::db::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::db::begin]: Resource is being skipped, unscheduling all events", > "Debug: Class[Nova::Db::Mysql]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Nova::Db::Mysql]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::service::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::service::end]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql[nova]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, 
mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql[nova]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql[nova_cell0]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql[nova_cell0]: Resource is being skipped, unscheduling all events", > "Debug: Class[Nova::Db::Mysql_api]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Nova::Db::Mysql_api]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql[nova_api]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql[nova_api]: Resource is being skipped, unscheduling all events", > "Debug: Class[Nova::Db::Mysql_placement]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Nova::Db::Mysql_placement]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql[nova_placement]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql[nova_placement]: Resource is being skipped, unscheduling all events", > "Debug: Class[Sahara::Deps]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Sahara::Deps]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::install::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::install::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::install::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::install::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::config::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, 
galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::config::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::config::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::config::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::db::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::db::begin]: Resource is being skipped, unscheduling all events", > "Debug: Class[Sahara::Db::Mysql]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Sahara::Db::Mysql]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::service::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::service::end]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql[sahara]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql[sahara]: Resource is being skipped, unscheduling all events", > "Debug: Class[Panko::Deps]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Panko::Deps]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::install::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::install::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::install::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::install::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::config::begin]: Not tagged with file, file_line, concat, augeas, 
pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::config::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::config::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::config::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::db::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::db::begin]: Resource is being skipped, unscheduling all events", > "Debug: Class[Panko::Db::Mysql]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Panko::Db::Mysql]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::service::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::service::end]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql[panko]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql[panko]: Resource is being skipped, unscheduling all events", > "Debug: Pacemaker::Property[galera-role-controller-0]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Pacemaker::Property[galera-role-controller-0]: Resource is being skipped, unscheduling all events", > "Debug: Class[Systemd]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Pacemaker]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Pacemaker::Corosync]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, 
mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Pacemaker::Service/Service[pcsd]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Pacemaker::Corosync/User[hacluster]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[reauthenticate-across-all-nodes]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[auth-successful-across-all-nodes]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Create Cluster tripleo_cluster]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster tripleo_cluster]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Pacemaker::Service/Service[corosync]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Pacemaker::Service/Service[pacemaker]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Systemd::Systemctl::Daemon_reload]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Systemd::Systemctl::Daemon_reload/Exec[systemctl-daemon-reload]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: backup_cib: 
/usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-248x9e returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-248x9e property show | grep stonith-enabled | grep false > /dev/null 2>&1", > "Debug: Class[Mysql::Client]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Mysql::Client]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Mysql::Client/Anchor[mysql::client::start]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Mysql::Client/Anchor[mysql::client::start]: Resource is being skipped, unscheduling all events", > "Debug: Class[Mysql::Client::Install]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Mysql::Client::Install]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Mysql::Client::Install/Package[mysql_client]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Mysql::Client::Install/Package[mysql_client]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Mysql::Client/Anchor[mysql::client::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Mysql::Client/Anchor[mysql::client::end]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[aodh_%]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[aodh_%]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.17]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.17]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.15]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.15]: Resource is being skipped, unscheduling 
all events", > "Debug: Openstacklib::Db::Mysql::Host_access[cinder_%]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[cinder_%]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.17]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.17]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.15]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.15]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[glance_%]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[glance_%]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[glance_172.17.1.17]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[glance_172.17.1.17]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[glance_172.17.1.15]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[glance_172.17.1.15]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[gnocchi_%]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[gnocchi_%]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.17]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.17]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.15]: Not tagged with file, file_line, concat, augeas, 
pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.15]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[heat_%]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[heat_%]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[heat_172.17.1.17]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[heat_172.17.1.17]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[heat_172.17.1.15]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[heat_172.17.1.15]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[keystone_%]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[keystone_%]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.17]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.17]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.15]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.15]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[ovs_neutron_%]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[ovs_neutron_%]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.17]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, 
pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user",
> "Debug: Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.17]: Resource is being skipped, unscheduling all events",
> "Debug: Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.15]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user",
> "Debug: Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.15]: Resource is being skipped, unscheduling all events",
> "Debug: Openstacklib::Db::Mysql::Host_access[nova_%]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user",
> "Debug: Openstacklib::Db::Mysql::Host_access[nova_%]: Resource is being skipped, unscheduling all events",
> "Debug: Openstacklib::Db::Mysql::Host_access[nova_172.17.1.17]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user",
> "Debug: Openstacklib::Db::Mysql::Host_access[nova_172.17.1.17]: Resource is being skipped, unscheduling all events",
> "Debug: Openstacklib::Db::Mysql::Host_access[nova_172.17.1.15]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user",
> "Debug: Openstacklib::Db::Mysql::Host_access[nova_172.17.1.15]: Resource is being skipped, unscheduling all events",
> "Debug: Openstacklib::Db::Mysql::Host_access[nova_cell0_%]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user",
> "Debug: Openstacklib::Db::Mysql::Host_access[nova_cell0_%]: Resource is being skipped, unscheduling all events",
> "Debug: Openstacklib::Db::Mysql::Host_access[nova_cell0_172.17.1.17]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user",
> "Debug: Openstacklib::Db::Mysql::Host_access[nova_cell0_172.17.1.17]: Resource is being skipped, unscheduling all events",
> "Debug: Openstacklib::Db::Mysql::Host_access[nova_cell0_172.17.1.15]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user",
> "Debug: Openstacklib::Db::Mysql::Host_access[nova_cell0_172.17.1.15]: Resource is being skipped, unscheduling all events",
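
The long runs of "Not tagged with ..." / "Resource is being skipped" pairs in this part of the log are expected: each TripleO deployment step applies the full Puppet catalog with a tag whitelist, and the tag list printed in every message is that whitelist, so any resource carrying none of those tags is skipped. A minimal sketch of the mechanism, assuming a hypothetical manifest path (the tag list itself is copied from the entries above):

# Hypothetical re-run of this step's catalog with the same tag whitelist;
# resources outside the whitelist log "Not tagged with ..." and are skipped.
puppet apply --debug --detailed-exitcodes \
  --tags file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,galera_ready,mysql_database,mysql_grant,mysql_user \
  /etc/puppet/manifests/overcloud_controller.pp
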
"Debug: Openstacklib::Db::Mysql::Host_access[nova_api_%]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.17]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.17]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.15]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.15]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_placement_%]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_placement_%]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.17]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.17]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.15]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.15]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[sahara_%]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[sahara_%]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.17]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.17]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.15]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.15]: 
> "Debug: Openstacklib::Db::Mysql::Host_access[panko_%]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user",
> "Debug: Openstacklib::Db::Mysql::Host_access[panko_%]: Resource is being skipped, unscheduling all events",
> "Debug: Openstacklib::Db::Mysql::Host_access[panko_172.17.1.17]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user",
> "Debug: Openstacklib::Db::Mysql::Host_access[panko_172.17.1.17]: Resource is being skipped, unscheduling all events",
> "Debug: Openstacklib::Db::Mysql::Host_access[panko_172.17.1.15]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user",
> "Debug: Openstacklib::Db::Mysql::Host_access[panko_172.17.1.15]: Resource is being skipped, unscheduling all events",
> "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1lwrpp1 returned ",
> "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1lwrpp1 property show | grep galera-role | grep controller-0 | grep true > /dev/null 2>&1",
> "Debug: property exists: property show | grep galera-role | grep controller-0 | grep true > /dev/null 2>&1 -> false",
> "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1bh9vev returned ",
> "Debug: try 1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1bh9vev property set --node controller-0 galera-role=true",
> "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1bh9vev diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1bh9vev.orig returned 0 -> CIB updated",
> "Debug: property create: property set --node controller-0 galera-role=true -> ",
> "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Pacemaker::Property[galera-role-controller-0]/Pcmk_property[property-controller-0-galera-role]/ensure: created",
> "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Pacemaker::Property[galera-role-controller-0]/Pcmk_property[property-controller-0-galera-role]: The container Pacemaker::Property[galera-role-controller-0] will propagate my refresh event",
> "Info: Pacemaker::Property[galera-role-controller-0]: Unscheduling all events on Pacemaker::Property[galera-role-controller-0]",
> "Debug: Pacemaker::Resource::Bundle[galera-bundle]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user",
> "Debug: Pacemaker::Resource::Bundle[galera-bundle]: Resource is being skipped, unscheduling all events",
> "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1hwmlwy returned ",
> "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1hwmlwy constraint list | grep location-galera-bundle > /dev/null 2>&1",
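
The galera-role property change above shows the backup/modify/push cycle that the puppet-pacemaker provider uses for every CIB edit: snapshot the live CIB to a scratch file, inspect and modify the copy offline with pcs -f, then push only the resulting difference back to the cluster. A minimal sketch of the same cycle, assuming a hypothetical scratch-file name:

# Snapshot, edit offline, push only the diff (scratch name is illustrative).
CIB=/var/lib/pacemaker/cib/puppet-cib-backup.$$
/usr/sbin/pcs cluster cib "$CIB"                  # dump the live CIB to a file
cp "$CIB" "$CIB.orig"                             # keep a pristine copy for the diff
/usr/sbin/pcs -f "$CIB" property set --node controller-0 galera-role=true
/usr/sbin/pcs cluster cib-push "$CIB" diff-against="$CIB.orig"
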
/dev/null 2>&1", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-199mory returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-199mory resource show galera-bundle > /dev/null 2>&1", > "Debug: Exists: bundle galera-bundle exists 1 location exists 1 deep_compare: true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1odqez1 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1odqez1 resource bundle create galera-bundle container docker image=192.168.24.1:8787/rhosp14/openstack-mariadb:pcmklatest replicas=1 masters=1 options=\"--user=root --log-driver=journald -e KOLLA_CONFIG_STRATEGY=COPY_ALWAYS\" run-command=\"/bin/bash /usr/local/bin/kolla_start\" network=host storage-map id=mysql-cfg-files source-dir=/var/lib/kolla/config_files/mysql.json target-dir=/var/lib/kolla/config_files/config.json options=ro storage-map id=mysql-cfg-data source-dir=/var/lib/config-data/puppet-generated/mysql/ target-dir=/var/lib/kolla/config_files/src options=ro storage-map id=mysql-hosts source-dir=/etc/hosts target-dir=/etc/hosts options=ro storage-map id=mysql-localtime source-dir=/etc/localtime target-dir=/etc/localtime options=ro storage-map id=mysql-lib source-dir=/var/lib/mysql target-dir=/var/lib/mysql options=rw storage-map id=mysql-log-mariadb source-dir=/var/log/mariadb target-dir=/var/log/mariadb options=rw storage-map id=mysql-log source-dir=/var/log/containers/mysql target-dir=/var/log/mysql options=rw storage-map id=mysql-dev-log source-dir=/dev/log target-dir=/dev/log options=rw network control-port=3123 --disabled", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1odqez1 diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1odqez1.orig returned 0 -> CIB updated", > "Debug: build_pcs_location_rule_cmd: constraint location galera-bundle rule resource-discovery=exclusive score=0 galera-role eq true", > "Debug: location_rule_create: constraint location galera-bundle rule resource-discovery=exclusive score=0 galera-role eq true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-y9nonp returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-y9nonp constraint location galera-bundle rule resource-discovery=exclusive score=0 galera-role eq true", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-y9nonp diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180921-8-y9nonp.orig returned 0 -> CIB updated", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1x5bb2b returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1x5bb2b resource enable galera-bundle", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1x5bb2b diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1x5bb2b.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Pacemaker::Resource::Bundle[galera-bundle]/Pcmk_bundle[galera-bundle]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Pacemaker::Resource::Bundle[galera-bundle]/Pcmk_bundle[galera-bundle]: The container Pacemaker::Resource::Bundle[galera-bundle] will propagate my refresh 
event", > "Info: Pacemaker::Resource::Bundle[galera-bundle]: Unscheduling all events on Pacemaker::Resource::Bundle[galera-bundle]", > "Debug: Pacemaker::Resource::Ocf[galera]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Pacemaker::Resource::Ocf[galera]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-v7umdn returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-v7umdn constraint list | grep location-galera-bundle > /dev/null 2>&1", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1kdfn1i returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1kdfn1i resource show galera > /dev/null 2>&1", > "Debug: Exists: resource galera exists 1 location exists 0 resource deep_compare: true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-smrx5o returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-smrx5o resource create galera ocf:heartbeat:galera log='/var/log/mysql/mysqld.log' additional_parameters='--open-files-limit=16384' enable_creation=true wsrep_cluster_address='gcomm://controller-0.internalapi.localdomain' cluster_host_map='controller-0:controller-0.internalapi.localdomain' meta master-max=1 ordered=true container-attribute-target=host op promote timeout=300s on-fail=block bundle galera-bundle", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-smrx5o diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180921-8-smrx5o.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Pacemaker::Resource::Ocf[galera]/Pcmk_resource[galera]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Pacemaker::Resource::Ocf[galera]/Pcmk_resource[galera]: The container Pacemaker::Resource::Ocf[galera] will propagate my refresh event", > "Info: Pacemaker::Resource::Ocf[galera]: Unscheduling all events on Pacemaker::Resource::Ocf[galera]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/returns: Exec try 1/180", > "Debug: Exec[galera-ready](provider=posix): Executing '/usr/bin/clustercheck >/dev/null'", > "Debug: Executing: '/usr/bin/clustercheck >/dev/null'", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/returns: Sleeping for 10 seconds between tries", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/returns: Exec try 2/180", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/returns: Exec try 3/180", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/returns: Exec try 4/180", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/returns: executed successfully", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]: The container Class[Tripleo::Profile::Pacemaker::Database::Mysql_bundle] will propagate my refresh event", > "Info: 
> "Debug: Prefetching mysql resources for mysql_user",
> "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe SELECT CONCAT(User, '@',Host) AS User FROM mysql.user'",
> "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe SELECT MAX_USER_CONNECTIONS, MAX_CONNECTIONS, MAX_QUESTIONS, MAX_UPDATES, SSL_TYPE, SSL_CIPHER, X509_ISSUER, X509_SUBJECT, PASSWORD /*!50508 , PLUGIN */ FROM mysql.user WHERE CONCAT(user, '@', host) = 'root@%''",
> "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe SELECT MAX_USER_CONNECTIONS, MAX_CONNECTIONS, MAX_QUESTIONS, MAX_UPDATES, SSL_TYPE, SSL_CIPHER, X509_ISSUER, X509_SUBJECT, PASSWORD /*!50508 , PLUGIN */ FROM mysql.user WHERE CONCAT(user, '@', host) = 'root@127.0.0.1''",
> "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe SELECT MAX_USER_CONNECTIONS, MAX_CONNECTIONS, MAX_QUESTIONS, MAX_UPDATES, SSL_TYPE, SSL_CIPHER, X509_ISSUER, X509_SUBJECT, PASSWORD /*!50508 , PLUGIN */ FROM mysql.user WHERE CONCAT(user, '@', host) = 'root@::1''",
> "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe SELECT MAX_USER_CONNECTIONS, MAX_CONNECTIONS, MAX_QUESTIONS, MAX_UPDATES, SSL_TYPE, SSL_CIPHER, X509_ISSUER, X509_SUBJECT, PASSWORD /*!50508 , PLUGIN */ FROM mysql.user WHERE CONCAT(user, '@', host) = 'root@controller-0''",
> "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe SELECT MAX_USER_CONNECTIONS, MAX_CONNECTIONS, MAX_QUESTIONS, MAX_UPDATES, SSL_TYPE, SSL_CIPHER, X509_ISSUER, X509_SUBJECT, PASSWORD /*!50508 , PLUGIN */ FROM mysql.user WHERE CONCAT(user, '@', host) = 'clustercheck@localhost''",
> "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe SELECT MAX_USER_CONNECTIONS, MAX_CONNECTIONS, MAX_QUESTIONS, MAX_UPDATES, SSL_TYPE, SSL_CIPHER, X509_ISSUER, X509_SUBJECT, PASSWORD /*!50508 , PLUGIN */ FROM mysql.user WHERE CONCAT(user, '@', host) = 'root@localhost''",
> "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e DROP USER IF EXISTS 'root'@'127.0.0.1''",
> "Notice: /Stage[main]/Mysql::Server::Account_security/Mysql_user[root@127.0.0.1]/ensure: removed",
> "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[root@127.0.0.1]: The container Class[Mysql::Server::Account_security] will propagate my refresh event",
> "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e DROP USER IF EXISTS 'root'@'::1''",
> "Notice: /Stage[main]/Mysql::Server::Account_security/Mysql_user[root@::1]/ensure: removed",
> "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[root@::1]: The container Class[Mysql::Server::Account_security] will propagate my refresh event",
> "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[@localhost]: Nothing to manage: no ensure and the resource doesn't exist",
> "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[@%]: Nothing to manage: no ensure and the resource doesn't exist",
> "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[root@localhost.localdomain]: Nothing to manage: no ensure and the resource doesn't exist",
> "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[@localhost.localdomain]: Nothing to manage: no ensure and the resource doesn't exist",
> "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[root@controller-0.localdomain]: Nothing to manage: no ensure and the resource doesn't exist",
> "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[@controller-0.localdomain]: Nothing to manage: no ensure and the resource doesn't exist",
> "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e DROP USER IF EXISTS 'root'@'controller-0''",
> "Notice: /Stage[main]/Mysql::Server::Account_security/Mysql_user[root@controller-0]/ensure: removed",
> "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[root@controller-0]: The container Class[Mysql::Server::Account_security] will propagate my refresh event",
> "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[@controller-0]: Nothing to manage: no ensure and the resource doesn't exist",
> "Debug: Prefetching mysql resources for mysql_database",
> "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe show databases'",
> "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe show variables like '%_database' information_schema'",
> "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe show variables like '%_database' mysql'",
> "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe show variables like '%_database' performance_schema'",
> "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_database[test]: Nothing to manage: no ensure and the resource doesn't exist",
> "Info: Class[Mysql::Server::Account_security]: Unscheduling all events on Class[Mysql::Server::Account_security]",
> "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe create database if not exists `aodh` character set `utf8` collate `utf8_general_ci`'",
> "Notice: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Mysql_database[aodh]/ensure: created",
> "Debug: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Mysql_database[aodh]: The container Openstacklib::Db::Mysql[aodh] will propagate my refresh event",
> "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe create database if not exists `cinder` character set `utf8` collate `utf8_general_ci`'",
> "Notice: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Mysql_database[cinder]/ensure: created",
> "Debug: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Mysql_database[cinder]: The container Openstacklib::Db::Mysql[cinder] will propagate my refresh event",
> "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe create database if not exists `glance` character set `utf8` collate `utf8_general_ci`'",
> "Notice: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Mysql_database[glance]/ensure: created",
> "Debug: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Mysql_database[glance]: The container Openstacklib::Db::Mysql[glance] will propagate my refresh event",
> "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe create database if not exists `gnocchi` character set `utf8` collate `utf8_general_ci`'",
> "Notice: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Mysql_database[gnocchi]/ensure: created",
> "Debug: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Mysql_database[gnocchi]: The container Openstacklib::Db::Mysql[gnocchi] will propagate my refresh event",
> "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe create database if not exists `heat` character set `utf8` collate `utf8_general_ci`'",
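
Every service database in this run is created with the same idempotent statement, executed through the root credentials in /root/.my.cnf, as the aodh, cinder, glance, gnocchi and heat entries above show. The pattern generalized, with a placeholder service name:

# One-liner executed once per service database; SVC is a placeholder.
SVC=aodh
/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe \
  "create database if not exists \`${SVC}\` character set \`utf8\` collate \`utf8_general_ci\`"
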
--defaults-extra-file=/root/.my.cnf -NBe create database if not exists `heat` character set `utf8` collate `utf8_general_ci`'", > "Notice: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Mysql_database[heat]/ensure: created", > "Debug: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Mysql_database[heat]: The container Openstacklib::Db::Mysql[heat] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe create database if not exists `keystone` character set `utf8` collate `utf8_general_ci`'", > "Notice: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Mysql_database[keystone]/ensure: created", > "Debug: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Mysql_database[keystone]: The container Openstacklib::Db::Mysql[keystone] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe create database if not exists `ovs_neutron` character set `utf8` collate `utf8_general_ci`'", > "Notice: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Mysql_database[ovs_neutron]/ensure: created", > "Debug: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Mysql_database[ovs_neutron]: The container Openstacklib::Db::Mysql[neutron] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe create database if not exists `nova` character set `utf8` collate `utf8_general_ci`'", > "Notice: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Mysql_database[nova]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Mysql_database[nova]: The container Openstacklib::Db::Mysql[nova] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe create database if not exists `nova_cell0` character set `utf8` collate `utf8_general_ci`'", > "Notice: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova_cell0]/Mysql_database[nova_cell0]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova_cell0]/Mysql_database[nova_cell0]: The container Openstacklib::Db::Mysql[nova_cell0] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe create database if not exists `nova_api` character set `utf8` collate `utf8_general_ci`'", > "Notice: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Mysql_database[nova_api]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Mysql_database[nova_api]: The container Openstacklib::Db::Mysql[nova_api] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe create database if not exists `nova_placement` character set `utf8` collate `utf8_general_ci`'", > "Notice: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Mysql_database[nova_placement]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Mysql_database[nova_placement]: The container Openstacklib::Db::Mysql[nova_placement] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe create database if not exists `sahara` character set `utf8` collate `utf8_general_ci`'", > "Notice: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Mysql_database[sahara]/ensure: 
created", > "Debug: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Mysql_database[sahara]: The container Openstacklib::Db::Mysql[sahara] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe create database if not exists `panko` character set `utf8` collate `utf8_general_ci`'", > "Notice: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Mysql_database[panko]/ensure: created", > "Debug: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Mysql_database[panko]: The container Openstacklib::Db::Mysql[panko] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'aodh'@'%' IDENTIFIED BY PASSWORD '*A749899EE444D80129CAB157939F737E4CE6EC12''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'aodh'@'%' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'aodh'@'%' REQUIRE NONE'", > "Notice: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_%]/Mysql_user[aodh@%]/ensure: created", > "Debug: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_%]/Mysql_user[aodh@%]: The container Openstacklib::Db::Mysql::Host_access[aodh_%] will propagate my refresh event", > "Debug: Prefetching mysql resources for mysql_grant", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe SHOW GRANTS FOR 'aodh'@'%';'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe SHOW GRANTS FOR 'root'@'%';'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe SHOW GRANTS FOR 'clustercheck'@'localhost';'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe SHOW GRANTS FOR 'root'@'localhost';'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `aodh`.* TO 'aodh'@'%''", > "Notice: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_%]/Mysql_grant[aodh@%/aodh.*]/ensure: created", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe FLUSH PRIVILEGES'", > "Debug: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_%]/Mysql_grant[aodh@%/aodh.*]: The container Openstacklib::Db::Mysql::Host_access[aodh_%] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[aodh_%]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[aodh_%]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'aodh'@'172.17.1.17' IDENTIFIED BY PASSWORD '*A749899EE444D80129CAB157939F737E4CE6EC12''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'aodh'@'172.17.1.17' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'aodh'@'172.17.1.17' REQUIRE NONE'", > "Notice: 
/Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.17]/Mysql_user[aodh@172.17.1.17]/ensure: created", > "Debug: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.17]/Mysql_user[aodh@172.17.1.17]: The container Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.17] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `aodh`.* TO 'aodh'@'172.17.1.17''", > "Notice: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.17]/Mysql_grant[aodh@172.17.1.17/aodh.*]/ensure: created", > "Debug: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.17]/Mysql_grant[aodh@172.17.1.17/aodh.*]: The container Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.17] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.17]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.17]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'aodh'@'172.17.1.15' IDENTIFIED BY PASSWORD '*A749899EE444D80129CAB157939F737E4CE6EC12''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'aodh'@'172.17.1.15' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'aodh'@'172.17.1.15' REQUIRE NONE'", > "Notice: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.15]/Mysql_user[aodh@172.17.1.15]/ensure: created", > "Debug: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.15]/Mysql_user[aodh@172.17.1.15]: The container Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.15] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `aodh`.* TO 'aodh'@'172.17.1.15''", > "Notice: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.15]/Mysql_grant[aodh@172.17.1.15/aodh.*]/ensure: created", > "Debug: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.15]/Mysql_grant[aodh@172.17.1.15/aodh.*]: The container Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.15] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.15]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.15]", > "Info: Openstacklib::Db::Mysql[aodh]: Unscheduling all events on Openstacklib::Db::Mysql[aodh]", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::db::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::db::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::dbsync::begin]: Not tagged with file, file_line, concat, augeas, 
pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::dbsync::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::dbsync::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::dbsync::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::service::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::service::begin]: Resource is being skipped, unscheduling all events", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'cinder'@'%' IDENTIFIED BY PASSWORD '*3B9B4B9104220A1D611E96101FE8B74BEC42EEF5''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'cinder'@'%' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'cinder'@'%' REQUIRE NONE'", > "Notice: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Openstacklib::Db::Mysql::Host_access[cinder_%]/Mysql_user[cinder@%]/ensure: created", > "Debug: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Openstacklib::Db::Mysql::Host_access[cinder_%]/Mysql_user[cinder@%]: The container Openstacklib::Db::Mysql::Host_access[cinder_%] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `cinder`.* TO 'cinder'@'%''", > "Notice: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Openstacklib::Db::Mysql::Host_access[cinder_%]/Mysql_grant[cinder@%/cinder.*]/ensure: created", > "Debug: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Openstacklib::Db::Mysql::Host_access[cinder_%]/Mysql_grant[cinder@%/cinder.*]: The container Openstacklib::Db::Mysql::Host_access[cinder_%] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[cinder_%]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[cinder_%]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'cinder'@'172.17.1.17' IDENTIFIED BY PASSWORD '*3B9B4B9104220A1D611E96101FE8B74BEC42EEF5''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'cinder'@'172.17.1.17' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'cinder'@'172.17.1.17' REQUIRE NONE'", > "Notice: 
/Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.17]/Mysql_user[cinder@172.17.1.17]/ensure: created", > "Debug: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.17]/Mysql_user[cinder@172.17.1.17]: The container Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.17] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `cinder`.* TO 'cinder'@'172.17.1.17''", > "Notice: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.17]/Mysql_grant[cinder@172.17.1.17/cinder.*]/ensure: created", > "Debug: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.17]/Mysql_grant[cinder@172.17.1.17/cinder.*]: The container Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.17] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.17]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.17]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'cinder'@'172.17.1.15' IDENTIFIED BY PASSWORD '*3B9B4B9104220A1D611E96101FE8B74BEC42EEF5''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'cinder'@'172.17.1.15' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'cinder'@'172.17.1.15' REQUIRE NONE'", > "Notice: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.15]/Mysql_user[cinder@172.17.1.15]/ensure: created", > "Debug: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.15]/Mysql_user[cinder@172.17.1.15]: The container Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.15] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `cinder`.* TO 'cinder'@'172.17.1.15''", > "Notice: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.15]/Mysql_grant[cinder@172.17.1.15/cinder.*]/ensure: created", > "Debug: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.15]/Mysql_grant[cinder@172.17.1.15/cinder.*]: The container Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.15] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.15]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.15]", > "Info: Openstacklib::Db::Mysql[cinder]: Unscheduling all events on Openstacklib::Db::Mysql[cinder]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::db::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::db::end]: Resource is being skipped, unscheduling all events", 
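The entries above show the same five-step sequence the deployment repeats verbatim for every service database (aodh, cinder, glance, gnocchi, heat, keystone, neutron, nova, ...): prefetch existing accounts from mysql.user, create the schema idempotently, create one account per allowed host from a pre-computed password hash, grant privileges on the service's database, and flush. Below is a minimal Python sketch of that flow, assuming the /root/.my.cnf credentials file shown in the log; the ensure_service_db helper and its signature are illustrative only, not puppet-mysql or TripleO API.

#!/usr/bin/env python3
"""Illustrative sketch only -- not part of puppet-mysql or TripleO.

Approximates the idempotent per-service sequence logged above: run the
same mysql.user prefetch query, create the database, create one user
per allowed host from a pre-hashed password, grant, and flush.
"""
import subprocess

# Root credentials file, as used by every command in the log.
MYSQL = ["/usr/bin/mysql", "--defaults-extra-file=/root/.my.cnf"]

def run_sql(statement, database=None):
    # Mirrors the provider's one-statement-per-invocation style.
    cmd = list(MYSQL)
    if database:
        cmd.append("--database=" + database)
    cmd += ["-NBe", statement]
    return subprocess.run(cmd, check=True, capture_output=True,
                          text=True).stdout

def existing_users():
    # Same prefetch query the provider logs before touching any user.
    out = run_sql("SELECT CONCAT(User, '@', Host) AS User FROM mysql.user")
    return set(out.split())

def ensure_service_db(service, hosts, password_hash):
    run_sql("create database if not exists `%s` character set `utf8` "
            "collate `utf8_general_ci`" % service)
    present = existing_users()
    for host in hosts:
        if "%s@%s" % (service, host) not in present:
            # A pre-computed mysql_native_password hash is passed, so no
            # plaintext secret reaches the command line (cf. the '*...'
            # hashes in the CREATE USER statements above).
            run_sql("CREATE USER '%s'@'%s' IDENTIFIED BY PASSWORD '%s'"
                    % (service, host, password_hash), database="mysql")
        run_sql("GRANT ALL PRIVILEGES ON `%s`.* TO '%s'@'%s'"
                % (service, service, host), database="mysql")
    run_sql("FLUSH PRIVILEGES")

# Example mirroring the aodh sequence in the log:
# ensure_service_db("aodh", ["%", "172.17.1.17", "172.17.1.15"],
#                   "*A749899EE444D80129CAB157939F737E4CE6EC12")

Note that the real provider also issues the per-host GRANT USAGE ... WITH MAX_USER_CONNECTIONS 0 ... and REQUIRE NONE statements visible above; the sketch omits them for brevity.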
> "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::dbsync::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::dbsync::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::dbsync::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::dbsync::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::service::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::service::begin]: Resource is being skipped, unscheduling all events", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'glance'@'%' IDENTIFIED BY PASSWORD '*2DCECDA4CEDBD99B146877D525F20FFB66B2444A''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'glance'@'%' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'glance'@'%' REQUIRE NONE'", > "Notice: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_%]/Mysql_user[glance@%]/ensure: created", > "Debug: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_%]/Mysql_user[glance@%]: The container Openstacklib::Db::Mysql::Host_access[glance_%] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `glance`.* TO 'glance'@'%''", > "Notice: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_%]/Mysql_grant[glance@%/glance.*]/ensure: created", > "Debug: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_%]/Mysql_grant[glance@%/glance.*]: The container Openstacklib::Db::Mysql::Host_access[glance_%] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[glance_%]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[glance_%]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'glance'@'172.17.1.17' IDENTIFIED BY PASSWORD '*2DCECDA4CEDBD99B146877D525F20FFB66B2444A''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'glance'@'172.17.1.17' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'glance'@'172.17.1.17' REQUIRE NONE'", > 
"Notice: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_172.17.1.17]/Mysql_user[glance@172.17.1.17]/ensure: created", > "Debug: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_172.17.1.17]/Mysql_user[glance@172.17.1.17]: The container Openstacklib::Db::Mysql::Host_access[glance_172.17.1.17] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `glance`.* TO 'glance'@'172.17.1.17''", > "Notice: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_172.17.1.17]/Mysql_grant[glance@172.17.1.17/glance.*]/ensure: created", > "Debug: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_172.17.1.17]/Mysql_grant[glance@172.17.1.17/glance.*]: The container Openstacklib::Db::Mysql::Host_access[glance_172.17.1.17] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[glance_172.17.1.17]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[glance_172.17.1.17]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'glance'@'172.17.1.15' IDENTIFIED BY PASSWORD '*2DCECDA4CEDBD99B146877D525F20FFB66B2444A''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'glance'@'172.17.1.15' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'glance'@'172.17.1.15' REQUIRE NONE'", > "Notice: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_172.17.1.15]/Mysql_user[glance@172.17.1.15]/ensure: created", > "Debug: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_172.17.1.15]/Mysql_user[glance@172.17.1.15]: The container Openstacklib::Db::Mysql::Host_access[glance_172.17.1.15] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `glance`.* TO 'glance'@'172.17.1.15''", > "Notice: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_172.17.1.15]/Mysql_grant[glance@172.17.1.15/glance.*]/ensure: created", > "Debug: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_172.17.1.15]/Mysql_grant[glance@172.17.1.15/glance.*]: The container Openstacklib::Db::Mysql::Host_access[glance_172.17.1.15] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[glance_172.17.1.15]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[glance_172.17.1.15]", > "Info: Openstacklib::Db::Mysql[glance]: Unscheduling all events on Openstacklib::Db::Mysql[glance]", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::db::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::db::end]: Resource is being skipped, unscheduling all 
events", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::dbsync::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::dbsync::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::dbsync::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::dbsync::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::service::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::service::begin]: Resource is being skipped, unscheduling all events", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'gnocchi'@'%' IDENTIFIED BY PASSWORD '*C2EC8026EF8C1475B7689949733E9BCA29D4717A''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'gnocchi'@'%' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'gnocchi'@'%' REQUIRE NONE'", > "Notice: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Openstacklib::Db::Mysql::Host_access[gnocchi_%]/Mysql_user[gnocchi@%]/ensure: created", > "Debug: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Openstacklib::Db::Mysql::Host_access[gnocchi_%]/Mysql_user[gnocchi@%]: The container Openstacklib::Db::Mysql::Host_access[gnocchi_%] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `gnocchi`.* TO 'gnocchi'@'%''", > "Notice: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Openstacklib::Db::Mysql::Host_access[gnocchi_%]/Mysql_grant[gnocchi@%/gnocchi.*]/ensure: created", > "Debug: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Openstacklib::Db::Mysql::Host_access[gnocchi_%]/Mysql_grant[gnocchi@%/gnocchi.*]: The container Openstacklib::Db::Mysql::Host_access[gnocchi_%] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[gnocchi_%]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[gnocchi_%]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'gnocchi'@'172.17.1.17' IDENTIFIED BY PASSWORD '*C2EC8026EF8C1475B7689949733E9BCA29D4717A''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'gnocchi'@'172.17.1.17' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 
'gnocchi'@'172.17.1.17' REQUIRE NONE'", > "Notice: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.17]/Mysql_user[gnocchi@172.17.1.17]/ensure: created", > "Debug: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.17]/Mysql_user[gnocchi@172.17.1.17]: The container Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.17] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `gnocchi`.* TO 'gnocchi'@'172.17.1.17''", > "Notice: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.17]/Mysql_grant[gnocchi@172.17.1.17/gnocchi.*]/ensure: created", > "Debug: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.17]/Mysql_grant[gnocchi@172.17.1.17/gnocchi.*]: The container Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.17] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.17]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.17]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'gnocchi'@'172.17.1.15' IDENTIFIED BY PASSWORD '*C2EC8026EF8C1475B7689949733E9BCA29D4717A''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'gnocchi'@'172.17.1.15' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'gnocchi'@'172.17.1.15' REQUIRE NONE'", > "Notice: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.15]/Mysql_user[gnocchi@172.17.1.15]/ensure: created", > "Debug: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.15]/Mysql_user[gnocchi@172.17.1.15]: The container Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.15] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `gnocchi`.* TO 'gnocchi'@'172.17.1.15''", > "Notice: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.15]/Mysql_grant[gnocchi@172.17.1.15/gnocchi.*]/ensure: created", > "Debug: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.15]/Mysql_grant[gnocchi@172.17.1.15/gnocchi.*]: The container Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.15] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.15]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.15]", > "Info: Openstacklib::Db::Mysql[gnocchi]: Unscheduling all events on Openstacklib::Db::Mysql[gnocchi]", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::db::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > 
"Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::db::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::dbsync::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::dbsync::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::dbsync::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::dbsync::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::service::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::service::begin]: Resource is being skipped, unscheduling all events", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'heat'@'%' IDENTIFIED BY PASSWORD '*E54E96AF91520932280D634CF9AD2FE48431D484''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'heat'@'%' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'heat'@'%' REQUIRE NONE'", > "Notice: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_%]/Mysql_user[heat@%]/ensure: created", > "Debug: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_%]/Mysql_user[heat@%]: The container Openstacklib::Db::Mysql::Host_access[heat_%] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `heat`.* TO 'heat'@'%''", > "Notice: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_%]/Mysql_grant[heat@%/heat.*]/ensure: created", > "Debug: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_%]/Mysql_grant[heat@%/heat.*]: The container Openstacklib::Db::Mysql::Host_access[heat_%] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[heat_%]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[heat_%]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'heat'@'172.17.1.17' IDENTIFIED BY PASSWORD '*E54E96AF91520932280D634CF9AD2FE48431D484''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'heat'@'172.17.1.17' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql 
-e GRANT USAGE ON *.* TO 'heat'@'172.17.1.17' REQUIRE NONE'", > "Notice: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_172.17.1.17]/Mysql_user[heat@172.17.1.17]/ensure: created", > "Debug: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_172.17.1.17]/Mysql_user[heat@172.17.1.17]: The container Openstacklib::Db::Mysql::Host_access[heat_172.17.1.17] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `heat`.* TO 'heat'@'172.17.1.17''", > "Notice: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_172.17.1.17]/Mysql_grant[heat@172.17.1.17/heat.*]/ensure: created", > "Debug: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_172.17.1.17]/Mysql_grant[heat@172.17.1.17/heat.*]: The container Openstacklib::Db::Mysql::Host_access[heat_172.17.1.17] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[heat_172.17.1.17]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[heat_172.17.1.17]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'heat'@'172.17.1.15' IDENTIFIED BY PASSWORD '*E54E96AF91520932280D634CF9AD2FE48431D484''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'heat'@'172.17.1.15' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'heat'@'172.17.1.15' REQUIRE NONE'", > "Notice: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_172.17.1.15]/Mysql_user[heat@172.17.1.15]/ensure: created", > "Debug: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_172.17.1.15]/Mysql_user[heat@172.17.1.15]: The container Openstacklib::Db::Mysql::Host_access[heat_172.17.1.15] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `heat`.* TO 'heat'@'172.17.1.15''", > "Notice: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_172.17.1.15]/Mysql_grant[heat@172.17.1.15/heat.*]/ensure: created", > "Debug: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_172.17.1.15]/Mysql_grant[heat@172.17.1.15/heat.*]: The container Openstacklib::Db::Mysql::Host_access[heat_172.17.1.15] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[heat_172.17.1.15]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[heat_172.17.1.15]", > "Info: Openstacklib::Db::Mysql[heat]: Unscheduling all events on Openstacklib::Db::Mysql[heat]", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::db::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::db::end]: Resource is being skipped, unscheduling all events", > "Debug: 
/Stage[main]/Heat::Deps/Anchor[heat::dbsync::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::dbsync::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::dbsync::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::dbsync::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::service::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::service::begin]: Resource is being skipped, unscheduling all events", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'keystone'@'%' IDENTIFIED BY PASSWORD '*F3C2F0E71C584D7A77C0E537B424E56D6FFDED61''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'keystone'@'%' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'keystone'@'%' REQUIRE NONE'", > "Notice: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_%]/Mysql_user[keystone@%]/ensure: created", > "Debug: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_%]/Mysql_user[keystone@%]: The container Openstacklib::Db::Mysql::Host_access[keystone_%] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `keystone`.* TO 'keystone'@'%''", > "Notice: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_%]/Mysql_grant[keystone@%/keystone.*]/ensure: created", > "Debug: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_%]/Mysql_grant[keystone@%/keystone.*]: The container Openstacklib::Db::Mysql::Host_access[keystone_%] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[keystone_%]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[keystone_%]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'keystone'@'172.17.1.17' IDENTIFIED BY PASSWORD '*F3C2F0E71C584D7A77C0E537B424E56D6FFDED61''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'keystone'@'172.17.1.17' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 
'keystone'@'172.17.1.17' REQUIRE NONE'", > "Notice: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.17]/Mysql_user[keystone@172.17.1.17]/ensure: created", > "Debug: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.17]/Mysql_user[keystone@172.17.1.17]: The container Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.17] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `keystone`.* TO 'keystone'@'172.17.1.17''", > "Notice: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.17]/Mysql_grant[keystone@172.17.1.17/keystone.*]/ensure: created", > "Debug: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.17]/Mysql_grant[keystone@172.17.1.17/keystone.*]: The container Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.17] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.17]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.17]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'keystone'@'172.17.1.15' IDENTIFIED BY PASSWORD '*F3C2F0E71C584D7A77C0E537B424E56D6FFDED61''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'keystone'@'172.17.1.15' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'keystone'@'172.17.1.15' REQUIRE NONE'", > "Notice: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.15]/Mysql_user[keystone@172.17.1.15]/ensure: created", > "Debug: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.15]/Mysql_user[keystone@172.17.1.15]: The container Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.15] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `keystone`.* TO 'keystone'@'172.17.1.15''", > "Notice: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.15]/Mysql_grant[keystone@172.17.1.15/keystone.*]/ensure: created", > "Debug: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.15]/Mysql_grant[keystone@172.17.1.15/keystone.*]: The container Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.15] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.15]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.15]", > "Info: Openstacklib::Db::Mysql[keystone]: Unscheduling all events on Openstacklib::Db::Mysql[keystone]", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::db::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, 
galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::db::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::dbsync::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::dbsync::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::dbsync::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::dbsync::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::service::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::service::begin]: Resource is being skipped, unscheduling all events", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'neutron'@'%' IDENTIFIED BY PASSWORD '*B7B78E36061DB0A1EE1D180EA1337641861121C9''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'neutron'@'%' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'neutron'@'%' REQUIRE NONE'", > "Notice: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[ovs_neutron_%]/Mysql_user[neutron@%]/ensure: created", > "Debug: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[ovs_neutron_%]/Mysql_user[neutron@%]: The container Openstacklib::Db::Mysql::Host_access[ovs_neutron_%] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `ovs_neutron`.* TO 'neutron'@'%''", > "Notice: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[ovs_neutron_%]/Mysql_grant[neutron@%/ovs_neutron.*]/ensure: created", > "Debug: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[ovs_neutron_%]/Mysql_grant[neutron@%/ovs_neutron.*]: The container Openstacklib::Db::Mysql::Host_access[ovs_neutron_%] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[ovs_neutron_%]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[ovs_neutron_%]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'neutron'@'172.17.1.17' IDENTIFIED BY PASSWORD '*B7B78E36061DB0A1EE1D180EA1337641861121C9''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 
'neutron'@'172.17.1.17' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'neutron'@'172.17.1.17' REQUIRE NONE'", > "Notice: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.17]/Mysql_user[neutron@172.17.1.17]/ensure: created", > "Debug: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.17]/Mysql_user[neutron@172.17.1.17]: The container Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.17] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `ovs_neutron`.* TO 'neutron'@'172.17.1.17''", > "Notice: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.17]/Mysql_grant[neutron@172.17.1.17/ovs_neutron.*]/ensure: created", > "Debug: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.17]/Mysql_grant[neutron@172.17.1.17/ovs_neutron.*]: The container Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.17] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.17]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.17]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'neutron'@'172.17.1.15' IDENTIFIED BY PASSWORD '*B7B78E36061DB0A1EE1D180EA1337641861121C9''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'neutron'@'172.17.1.15' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'neutron'@'172.17.1.15' REQUIRE NONE'", > "Notice: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.15]/Mysql_user[neutron@172.17.1.15]/ensure: created", > "Debug: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.15]/Mysql_user[neutron@172.17.1.15]: The container Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.15] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `ovs_neutron`.* TO 'neutron'@'172.17.1.15''", > "Notice: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.15]/Mysql_grant[neutron@172.17.1.15/ovs_neutron.*]/ensure: created", > "Debug: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.15]/Mysql_grant[neutron@172.17.1.15/ovs_neutron.*]: The container Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.15] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.15]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.15]", > "Info: Openstacklib::Db::Mysql[neutron]: Unscheduling all events on 
Openstacklib::Db::Mysql[neutron]", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::db::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::db::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::dbsync::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::dbsync::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::dbsync::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::dbsync::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::service::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::service::begin]: Resource is being skipped, unscheduling all events", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'nova'@'%' IDENTIFIED BY PASSWORD '*0E8B5F890C4DBC2DCDB5A73C7276E69C9E88107B''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova'@'%' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova'@'%' REQUIRE NONE'", > "Notice: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_%]/Mysql_user[nova@%]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_%]/Mysql_user[nova@%]: The container Openstacklib::Db::Mysql::Host_access[nova_%] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `nova`.* TO 'nova'@'%''", > "Notice: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_%]/Mysql_grant[nova@%/nova.*]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_%]/Mysql_grant[nova@%/nova.*]: The container Openstacklib::Db::Mysql::Host_access[nova_%] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[nova_%]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[nova_%]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'nova'@'172.17.1.17' IDENTIFIED BY PASSWORD 
'*0E8B5F890C4DBC2DCDB5A73C7276E69C9E88107B''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova'@'172.17.1.17' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova'@'172.17.1.17' REQUIRE NONE'", > "Notice: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_172.17.1.17]/Mysql_user[nova@172.17.1.17]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_172.17.1.17]/Mysql_user[nova@172.17.1.17]: The container Openstacklib::Db::Mysql::Host_access[nova_172.17.1.17] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `nova`.* TO 'nova'@'172.17.1.17''", > "Notice: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_172.17.1.17]/Mysql_grant[nova@172.17.1.17/nova.*]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_172.17.1.17]/Mysql_grant[nova@172.17.1.17/nova.*]: The container Openstacklib::Db::Mysql::Host_access[nova_172.17.1.17] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[nova_172.17.1.17]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[nova_172.17.1.17]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'nova'@'172.17.1.15' IDENTIFIED BY PASSWORD '*0E8B5F890C4DBC2DCDB5A73C7276E69C9E88107B''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova'@'172.17.1.15' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova'@'172.17.1.15' REQUIRE NONE'", > "Notice: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_172.17.1.15]/Mysql_user[nova@172.17.1.15]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_172.17.1.15]/Mysql_user[nova@172.17.1.15]: The container Openstacklib::Db::Mysql::Host_access[nova_172.17.1.15] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `nova`.* TO 'nova'@'172.17.1.15''", > "Notice: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_172.17.1.15]/Mysql_grant[nova@172.17.1.15/nova.*]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_172.17.1.15]/Mysql_grant[nova@172.17.1.15/nova.*]: The container Openstacklib::Db::Mysql::Host_access[nova_172.17.1.15] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[nova_172.17.1.15]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[nova_172.17.1.15]", > "Info: Openstacklib::Db::Mysql[nova]: Unscheduling all events on Openstacklib::Db::Mysql[nova]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf 
--database=mysql -e GRANT ALL PRIVILEGES ON `nova_cell0`.* TO 'nova'@'%''", > "Notice: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova_cell0]/Openstacklib::Db::Mysql::Host_access[nova_cell0_%]/Mysql_grant[nova@%/nova_cell0.*]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova_cell0]/Openstacklib::Db::Mysql::Host_access[nova_cell0_%]/Mysql_grant[nova@%/nova_cell0.*]: The container Openstacklib::Db::Mysql::Host_access[nova_cell0_%] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[nova_cell0_%]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[nova_cell0_%]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `nova_cell0`.* TO 'nova'@'172.17.1.17''", > "Notice: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova_cell0]/Openstacklib::Db::Mysql::Host_access[nova_cell0_172.17.1.17]/Mysql_grant[nova@172.17.1.17/nova_cell0.*]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova_cell0]/Openstacklib::Db::Mysql::Host_access[nova_cell0_172.17.1.17]/Mysql_grant[nova@172.17.1.17/nova_cell0.*]: The container Openstacklib::Db::Mysql::Host_access[nova_cell0_172.17.1.17] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[nova_cell0_172.17.1.17]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[nova_cell0_172.17.1.17]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `nova_cell0`.* TO 'nova'@'172.17.1.15''", > "Notice: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova_cell0]/Openstacklib::Db::Mysql::Host_access[nova_cell0_172.17.1.15]/Mysql_grant[nova@172.17.1.15/nova_cell0.*]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova_cell0]/Openstacklib::Db::Mysql::Host_access[nova_cell0_172.17.1.15]/Mysql_grant[nova@172.17.1.15/nova_cell0.*]: The container Openstacklib::Db::Mysql::Host_access[nova_cell0_172.17.1.15] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[nova_cell0_172.17.1.15]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[nova_cell0_172.17.1.15]", > "Info: Openstacklib::Db::Mysql[nova_cell0]: Unscheduling all events on Openstacklib::Db::Mysql[nova_cell0]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'nova_api'@'%' IDENTIFIED BY PASSWORD '*0E8B5F890C4DBC2DCDB5A73C7276E69C9E88107B''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova_api'@'%' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova_api'@'%' REQUIRE NONE'", > "Notice: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_%]/Mysql_user[nova_api@%]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_%]/Mysql_user[nova_api@%]: The container Openstacklib::Db::Mysql::Host_access[nova_api_%] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `nova_api`.* TO 'nova_api'@'%''", > "Notice: 
/Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_%]/Mysql_grant[nova_api@%/nova_api.*]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_%]/Mysql_grant[nova_api@%/nova_api.*]: The container Openstacklib::Db::Mysql::Host_access[nova_api_%] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[nova_api_%]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[nova_api_%]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'nova_api'@'172.17.1.17' IDENTIFIED BY PASSWORD '*0E8B5F890C4DBC2DCDB5A73C7276E69C9E88107B''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova_api'@'172.17.1.17' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova_api'@'172.17.1.17' REQUIRE NONE'", > "Notice: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.17]/Mysql_user[nova_api@172.17.1.17]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.17]/Mysql_user[nova_api@172.17.1.17]: The container Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.17] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `nova_api`.* TO 'nova_api'@'172.17.1.17''", > "Notice: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.17]/Mysql_grant[nova_api@172.17.1.17/nova_api.*]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.17]/Mysql_grant[nova_api@172.17.1.17/nova_api.*]: The container Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.17] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.17]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.17]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'nova_api'@'172.17.1.15' IDENTIFIED BY PASSWORD '*0E8B5F890C4DBC2DCDB5A73C7276E69C9E88107B''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova_api'@'172.17.1.15' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova_api'@'172.17.1.15' REQUIRE NONE'", > "Notice: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.15]/Mysql_user[nova_api@172.17.1.15]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.15]/Mysql_user[nova_api@172.17.1.15]: The container Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.15] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql 
--defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `nova_api`.* TO 'nova_api'@'172.17.1.15''", > "Notice: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.15]/Mysql_grant[nova_api@172.17.1.15/nova_api.*]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.15]/Mysql_grant[nova_api@172.17.1.15/nova_api.*]: The container Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.15] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.15]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.15]", > "Info: Openstacklib::Db::Mysql[nova_api]: Unscheduling all events on Openstacklib::Db::Mysql[nova_api]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'nova_placement'@'%' IDENTIFIED BY PASSWORD '*0E8B5F890C4DBC2DCDB5A73C7276E69C9E88107B''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova_placement'@'%' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova_placement'@'%' REQUIRE NONE'", > "Notice: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Openstacklib::Db::Mysql::Host_access[nova_placement_%]/Mysql_user[nova_placement@%]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Openstacklib::Db::Mysql::Host_access[nova_placement_%]/Mysql_user[nova_placement@%]: The container Openstacklib::Db::Mysql::Host_access[nova_placement_%] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `nova_placement`.* TO 'nova_placement'@'%''", > "Notice: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Openstacklib::Db::Mysql::Host_access[nova_placement_%]/Mysql_grant[nova_placement@%/nova_placement.*]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Openstacklib::Db::Mysql::Host_access[nova_placement_%]/Mysql_grant[nova_placement@%/nova_placement.*]: The container Openstacklib::Db::Mysql::Host_access[nova_placement_%] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[nova_placement_%]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[nova_placement_%]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'nova_placement'@'172.17.1.17' IDENTIFIED BY PASSWORD '*0E8B5F890C4DBC2DCDB5A73C7276E69C9E88107B''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova_placement'@'172.17.1.17' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova_placement'@'172.17.1.17' REQUIRE NONE'", > "Notice: 
/Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.17]/Mysql_user[nova_placement@172.17.1.17]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.17]/Mysql_user[nova_placement@172.17.1.17]: The container Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.17] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `nova_placement`.* TO 'nova_placement'@'172.17.1.17''", > "Notice: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.17]/Mysql_grant[nova_placement@172.17.1.17/nova_placement.*]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.17]/Mysql_grant[nova_placement@172.17.1.17/nova_placement.*]: The container Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.17] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.17]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.17]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'nova_placement'@'172.17.1.15' IDENTIFIED BY PASSWORD '*0E8B5F890C4DBC2DCDB5A73C7276E69C9E88107B''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova_placement'@'172.17.1.15' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova_placement'@'172.17.1.15' REQUIRE NONE'", > "Notice: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.15]/Mysql_user[nova_placement@172.17.1.15]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.15]/Mysql_user[nova_placement@172.17.1.15]: The container Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.15] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `nova_placement`.* TO 'nova_placement'@'172.17.1.15''", > "Notice: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.15]/Mysql_grant[nova_placement@172.17.1.15/nova_placement.*]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.15]/Mysql_grant[nova_placement@172.17.1.15/nova_placement.*]: The container Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.15] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.15]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.15]", > "Info: Openstacklib::Db::Mysql[nova_placement]: Unscheduling all events on 
Openstacklib::Db::Mysql[nova_placement]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::db::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::db::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::dbsync_api::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::dbsync_api::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::dbsync_api::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::dbsync_api::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::cell_v2::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::cell_v2::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::cell_v2::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::cell_v2::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::dbsync::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::dbsync::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::dbsync::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::dbsync::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::db_online_data_migrations::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::db_online_data_migrations::begin]: Resource is being skipped, unscheduling all events", > "Debug: 
/Stage[main]/Nova::Deps/Anchor[nova::db_online_data_migrations::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::db_online_data_migrations::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::service::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::service::begin]: Resource is being skipped, unscheduling all events", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'sahara'@'%' IDENTIFIED BY PASSWORD '*A4CB49F6B84D2024DEF94CE3E31AEFBD04A02753''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'sahara'@'%' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'sahara'@'%' REQUIRE NONE'", > "Notice: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Openstacklib::Db::Mysql::Host_access[sahara_%]/Mysql_user[sahara@%]/ensure: created", > "Debug: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Openstacklib::Db::Mysql::Host_access[sahara_%]/Mysql_user[sahara@%]: The container Openstacklib::Db::Mysql::Host_access[sahara_%] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `sahara`.* TO 'sahara'@'%''", > "Notice: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Openstacklib::Db::Mysql::Host_access[sahara_%]/Mysql_grant[sahara@%/sahara.*]/ensure: created", > "Debug: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Openstacklib::Db::Mysql::Host_access[sahara_%]/Mysql_grant[sahara@%/sahara.*]: The container Openstacklib::Db::Mysql::Host_access[sahara_%] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[sahara_%]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[sahara_%]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'sahara'@'172.17.1.17' IDENTIFIED BY PASSWORD '*A4CB49F6B84D2024DEF94CE3E31AEFBD04A02753''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'sahara'@'172.17.1.17' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'sahara'@'172.17.1.17' REQUIRE NONE'", > "Notice: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.17]/Mysql_user[sahara@172.17.1.17]/ensure: created", > "Debug: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.17]/Mysql_user[sahara@172.17.1.17]: The container Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.17] will 
propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `sahara`.* TO 'sahara'@'172.17.1.17''", > "Notice: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.17]/Mysql_grant[sahara@172.17.1.17/sahara.*]/ensure: created", > "Debug: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.17]/Mysql_grant[sahara@172.17.1.17/sahara.*]: The container Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.17] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.17]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.17]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'sahara'@'172.17.1.15' IDENTIFIED BY PASSWORD '*A4CB49F6B84D2024DEF94CE3E31AEFBD04A02753''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'sahara'@'172.17.1.15' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'sahara'@'172.17.1.15' REQUIRE NONE'", > "Notice: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.15]/Mysql_user[sahara@172.17.1.15]/ensure: created", > "Debug: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.15]/Mysql_user[sahara@172.17.1.15]: The container Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.15] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `sahara`.* TO 'sahara'@'172.17.1.15''", > "Notice: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.15]/Mysql_grant[sahara@172.17.1.15/sahara.*]/ensure: created", > "Debug: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.15]/Mysql_grant[sahara@172.17.1.15/sahara.*]: The container Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.15] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.15]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.15]", > "Info: Openstacklib::Db::Mysql[sahara]: Unscheduling all events on Openstacklib::Db::Mysql[sahara]", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::db::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::db::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::dbsync::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::dbsync::begin]: Resource is being skipped, 
unscheduling all events", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::dbsync::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::dbsync::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::service::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::service::begin]: Resource is being skipped, unscheduling all events", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'panko'@'%' IDENTIFIED BY PASSWORD '*83FD22C45B659CB2E22EC4268A9213E2DA8C4DCB''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'panko'@'%' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'panko'@'%' REQUIRE NONE'", > "Notice: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Openstacklib::Db::Mysql::Host_access[panko_%]/Mysql_user[panko@%]/ensure: created", > "Debug: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Openstacklib::Db::Mysql::Host_access[panko_%]/Mysql_user[panko@%]: The container Openstacklib::Db::Mysql::Host_access[panko_%] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `panko`.* TO 'panko'@'%''", > "Notice: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Openstacklib::Db::Mysql::Host_access[panko_%]/Mysql_grant[panko@%/panko.*]/ensure: created", > "Debug: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Openstacklib::Db::Mysql::Host_access[panko_%]/Mysql_grant[panko@%/panko.*]: The container Openstacklib::Db::Mysql::Host_access[panko_%] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[panko_%]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[panko_%]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'panko'@'172.17.1.17' IDENTIFIED BY PASSWORD '*83FD22C45B659CB2E22EC4268A9213E2DA8C4DCB''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'panko'@'172.17.1.17' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'panko'@'172.17.1.17' REQUIRE NONE'", > "Notice: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Openstacklib::Db::Mysql::Host_access[panko_172.17.1.17]/Mysql_user[panko@172.17.1.17]/ensure: created", > "Debug: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Openstacklib::Db::Mysql::Host_access[panko_172.17.1.17]/Mysql_user[panko@172.17.1.17]: The container Openstacklib::Db::Mysql::Host_access[panko_172.17.1.17] will propagate my refresh event", > 
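[Note] The repeating CREATE USER / GRANT USAGE / GRANT ALL sequences above (nova, nova_api, nova_placement, sahara, panko) are all emitted by puppet-openstacklib's db::mysql wrapper: it declares one Host_access (a Mysql_user plus a Mysql_grant) per entry in allowed_hosts, on top of the wildcard host. A minimal sketch of the kind of declaration that yields the panko sequence; the password hash and host list are copied from the log, the remaining parameter values are assumed:

    # Hypothetical declaration; the real one is wrapped by panko::db::mysql
    # (visible in the resource paths above).
    openstacklib::db::mysql { 'panko':
      user          => 'panko',
      password_hash => '*83FD22C45B659CB2E22EC4268A9213E2DA8C4DCB',
      dbname        => 'panko',
      host          => '%',
      allowed_hosts => ['172.17.1.17', '172.17.1.15'],
    }

Each allowed host expands to an Openstacklib::Db::Mysql::Host_access resource, which in turn manages the Mysql_user and Mysql_grant resources logged above.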
"Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `panko`.* TO 'panko'@'172.17.1.17''", > "Notice: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Openstacklib::Db::Mysql::Host_access[panko_172.17.1.17]/Mysql_grant[panko@172.17.1.17/panko.*]/ensure: created", > "Debug: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Openstacklib::Db::Mysql::Host_access[panko_172.17.1.17]/Mysql_grant[panko@172.17.1.17/panko.*]: The container Openstacklib::Db::Mysql::Host_access[panko_172.17.1.17] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[panko_172.17.1.17]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[panko_172.17.1.17]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'panko'@'172.17.1.15' IDENTIFIED BY PASSWORD '*83FD22C45B659CB2E22EC4268A9213E2DA8C4DCB''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'panko'@'172.17.1.15' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'panko'@'172.17.1.15' REQUIRE NONE'", > "Notice: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Openstacklib::Db::Mysql::Host_access[panko_172.17.1.15]/Mysql_user[panko@172.17.1.15]/ensure: created", > "Debug: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Openstacklib::Db::Mysql::Host_access[panko_172.17.1.15]/Mysql_user[panko@172.17.1.15]: The container Openstacklib::Db::Mysql::Host_access[panko_172.17.1.15] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `panko`.* TO 'panko'@'172.17.1.15''", > "Notice: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Openstacklib::Db::Mysql::Host_access[panko_172.17.1.15]/Mysql_grant[panko@172.17.1.15/panko.*]/ensure: created", > "Debug: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Openstacklib::Db::Mysql::Host_access[panko_172.17.1.15]/Mysql_grant[panko@172.17.1.15/panko.*]: The container Openstacklib::Db::Mysql::Host_access[panko_172.17.1.15] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[panko_172.17.1.15]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[panko_172.17.1.15]", > "Info: Openstacklib::Db::Mysql[panko]: Unscheduling all events on Openstacklib::Db::Mysql[panko]", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::db::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::db::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::dbsync::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::dbsync::begin]: Resource is being skipped, unscheduling all events", > "Debug: 
/Stage[main]/Panko::Deps/Anchor[panko::dbsync::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::dbsync::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::service::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::service::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Schedule[puppet]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Schedule[hourly]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Schedule[daily]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Schedule[weekly]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Schedule[monthly]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Schedule[never]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Filebucket[puppet]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Finishing transaction 37203520", > "Debug: Stored state in 0.01 seconds", > "Notice: Applied catalog in 77.96 seconds", > " Total: 103", > " Success: 103", > " Changed: 103", > " Out of sync: 103", > " Skipped: 136", > " Total: 250", > " Mysql database: 0.27", > " Mysql grant: 1.23", > " Mysql user: 1.47", > " Pcmk resource: 11.43", > " Last run: 1537533236", > " Pcmk bundle: 20.16", > " Exec: 32.05", > " Config retrieval: 4.88", > " Total: 81.52", > " Pcmk property: 9.99", > " Config: 1537533153", > "Debug: Finishing transaction 48678100", > "+ 
TAGS=file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,galera_ready,mysql_database,mysql_grant,mysql_user",
> "+ CONFIG='include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::mysql_bundle'",
> "+ puppet apply --debug --verbose --detailed-exitcodes --summarize --color=false --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,galera_ready,mysql_database,mysql_grant,mysql_user -e 'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::mysql_bundle'",
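[Note] The --tags list on the puppet apply invocation above is what produces the many "Not tagged with ... Resource is being skipped" debug lines in this step: the full catalog is still compiled, but only resources carrying at least one of the listed tags are applied, and Puppet automatically tags every resource with its type name (which is why the Mysql_user and Mysql_grant resources run while the stdlib Anchor resources are skipped). A minimal sketch of the mechanism, using hypothetical resources rather than anything from the tripleo manifests:

    # puppet apply --tags mysql_user example.pp
    # The mysql_user resource carries the implicit 'mysql_user' type tag and
    # is applied; the anchor carries no matching tag, so under --debug it is
    # logged as "Not tagged with mysql_user ... Resource is being skipped".
    mysql_user { 'demo@localhost':
      ensure => present,
    }
    anchor { 'demo::service::begin': }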
at [\"/etc/puppet/modules/openstacklib/manifests/db/mysql/host_access.pp\", 43]:", > "stdout: Info: Loading facts", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 0.27 seconds", > "Info: Applying configuration version '1537533245'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Neutron::L3_agent_wrappers/Tripleo::Profile::Base::Neutron::Wrappers::Haproxy[l3_haproxy_process_wrapper]/File[/var/lib/neutron/l3_haproxy_wrapper]/ensure: defined content as '{md5}ca84cfcf013b6bc5aaafd7cc45aae326'", > "Info: Tripleo::Profile::Base::Neutron::Wrappers::Haproxy[l3_haproxy_process_wrapper]: Unscheduling all events on Tripleo::Profile::Base::Neutron::Wrappers::Haproxy[l3_haproxy_process_wrapper]", > "Notice: /Stage[main]/Tripleo::Profile::Base::Neutron::L3_agent_wrappers/Tripleo::Profile::Base::Neutron::Wrappers::Keepalived[l3_keepalived]/File[/var/lib/neutron/keepalived_wrapper]/ensure: defined content as '{md5}cf208743a889491570e45e5922aaa85c'", > "Info: Tripleo::Profile::Base::Neutron::Wrappers::Keepalived[l3_keepalived]: Unscheduling all events on Tripleo::Profile::Base::Neutron::Wrappers::Keepalived[l3_keepalived]", > "Notice: /Stage[main]/Tripleo::Profile::Base::Neutron::L3_agent_wrappers/Tripleo::Profile::Base::Neutron::Wrappers::Keepalived_state_change[l3_keepalived_state_change]/File[/var/lib/neutron/keepalived_state_change_wrapper]/ensure: defined content as '{md5}f72bfec5dc1c16b968223450454f78bf'", > "Info: Tripleo::Profile::Base::Neutron::Wrappers::Keepalived_state_change[l3_keepalived_state_change]: Unscheduling all events on Tripleo::Profile::Base::Neutron::Wrappers::Keepalived_state_change[l3_keepalived_state_change]", > "Notice: /Stage[main]/Tripleo::Profile::Base::Neutron::L3_agent_wrappers/Tripleo::Profile::Base::Neutron::Wrappers::Dibbler_client[l3_dibbler_daemon]/File[/var/lib/neutron/dibbler_wrapper]/ensure: defined content as '{md5}544aabbd07d99df22f9900642e7999ed'", > "Info: Tripleo::Profile::Base::Neutron::Wrappers::Dibbler_client[l3_dibbler_daemon]: Unscheduling all events on Tripleo::Profile::Base::Neutron::Wrappers::Dibbler_client[l3_dibbler_daemon]", > "Notice: Applied catalog in 0.02 seconds", > " Total: 4", > " Success: 4", > " Total: 11", > " Out of sync: 4", > " Changed: 4", > " Skipped: 7", > " File: 0.01", > " Config retrieval: 0.40", > " Total: 0.40", > " Last run: 1537533245", > " Config: 1537533245", > "stderr: + STEP=4", > "+ TAGS=file", > "+ CONFIG='include ::tripleo::profile::base::neutron::l3_agent_wrappers'", > "+ EXTRA_ARGS=", > "+ echo '{\"step\": 4}'", > "+ puppet apply --verbose --detailed-exitcodes --summarize --color=false --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules --tags file -e 'include ::tripleo::profile::base::neutron::l3_agent_wrappers'", > "PMD: net_mlx5: cannot load glue library: libibverbs.so.1: cannot open shared object file: No such file or directory", > "PMD: net_mlx5: cannot initialize PMD due to missing run-time dependency on rdma-core libraries (libibverbs, libmlx5)", > "PMD: net_mlx4: cannot load glue library: libibverbs.so.1: cannot open shared object file: No such file or directory", > "PMD: net_mlx4: cannot initialize PMD due to missing run-time dependency on rdma-core libraries (libibverbs, libmlx4)", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 0.28 seconds", > "Info: Applying configuration version '1537533251'", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Neutron::Dhcp_agent_wrappers/Tripleo::Profile::Base::Neutron::Wrappers::Dnsmasq[dhcp_dnsmasq_process_wrapper]/File[/var/lib/neutron/dnsmasq_wrapper]/ensure: defined content as '{md5}015ab051c326134ff93cfab1b89672a7'", > "Info: Tripleo::Profile::Base::Neutron::Wrappers::Dnsmasq[dhcp_dnsmasq_process_wrapper]: Unscheduling all events on Tripleo::Profile::Base::Neutron::Wrappers::Dnsmasq[dhcp_dnsmasq_process_wrapper]", > "Notice: /Stage[main]/Tripleo::Profile::Base::Neutron::Dhcp_agent_wrappers/Tripleo::Profile::Base::Neutron::Wrappers::Haproxy[dhcp_haproxy_process_wrapper]/File[/var/lib/neutron/dhcp_haproxy_wrapper]/ensure: defined content as '{md5}f7d85a242b5046aeac0513b9fa7382ca'", > "Info: Tripleo::Profile::Base::Neutron::Wrappers::Haproxy[dhcp_haproxy_process_wrapper]: Unscheduling all events on Tripleo::Profile::Base::Neutron::Wrappers::Haproxy[dhcp_haproxy_process_wrapper]", > "Notice: Applied catalog in 0.01 seconds", > " Total: 2", > " Success: 2", > " Changed: 2", > " Out of sync: 2", > " Total: 9", > " File: 0.00", > " Last run: 1537533252", > " Config: 1537533251", > "+ CONFIG='include ::tripleo::profile::base::neutron::dhcp_agent_wrappers'", > "+ puppet apply --verbose --detailed-exitcodes --summarize --color=false --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules --tags file -e 'include ::tripleo::profile::base::neutron::dhcp_agent_wrappers'", > "stderr: Error: unable to find resource 'redis-bundle'", > "stdout: ecc0c86f6446c67d938d5970177353417046f45be83c754952690bb355600a56", > "stdout: 30f1328cc73f2f23c59bdaf0a678375defe326221bef56425960a15ec5b41f26", > "stdout: 003387abd638787fb69b6f626da4a373ae35dae8c7436f7d3ea9889c55efe830", > "stderr: Error: unable to find resource 'haproxy-bundle'", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/pacemaker/database/redis_bundle.pp' in environment production", > "Debug: Automatically imported tripleo::profile::pacemaker::database::redis_bundle from tripleo/profile/pacemaker/database/redis_bundle into production", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::redis_bundle::certificate_specs in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::redis_bundle::enable_internal_tls in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::redis_bundle::bootstrap_node in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::redis_bundle::redis_docker_image in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::redis_bundle::redis_docker_control_port in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::redis_bundle::pcs_tries in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::redis_bundle::step in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::redis_bundle::redis_network in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::redis_bundle::extra_config_file in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::redis_bundle::tls_tunnel_local_name in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::redis_bundle::tls_tunnel_base_port in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::redis_bundle::tls_proxy_bind_ip in JSON backend", > "Debug: hiera(): Looking up 
tripleo::profile::pacemaker::database::redis_bundle::tls_proxy_fqdn in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::redis_bundle::tls_proxy_port in JSON backend", > "Debug: hiera(): Looking up redis_certificate_specs in JSON backend", > "Debug: hiera(): Looking up redis_short_bootstrap_node_name in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::redis_bundle::control_port in JSON backend", > "Debug: hiera(): Looking up redis_network in JSON backend", > "Debug: hiera(): Looking up redis_file_limit in JSON backend", > "Debug: importing '/etc/puppet/modules/redis/manifests/init.pp' in environment production", > "Debug: Automatically imported redis from redis into production", > "Debug: importing '/etc/puppet/modules/redis/manifests/params.pp' in environment production", > "Debug: Automatically imported redis::params from redis/params into production", > "Debug: hiera(): Looking up redis::activerehashing in JSON backend", > "Debug: hiera(): Looking up redis::aof_load_truncated in JSON backend", > "Debug: hiera(): Looking up redis::aof_rewrite_incremental_fsync in JSON backend", > "Debug: hiera(): Looking up redis::appendfilename in JSON backend", > "Debug: hiera(): Looking up redis::appendfsync in JSON backend", > "Debug: hiera(): Looking up redis::appendonly in JSON backend", > "Debug: hiera(): Looking up redis::auto_aof_rewrite_min_size in JSON backend", > "Debug: hiera(): Looking up redis::auto_aof_rewrite_percentage in JSON backend", > "Debug: hiera(): Looking up redis::bind in JSON backend", > "Debug: hiera(): Looking up redis::output_buffer_limit_slave in JSON backend", > "Debug: hiera(): Looking up redis::output_buffer_limit_pubsub in JSON backend", > "Debug: hiera(): Looking up redis::conf_template in JSON backend", > "Debug: hiera(): Looking up redis::config_dir in JSON backend", > "Debug: hiera(): Looking up redis::config_dir_mode in JSON backend", > "Debug: hiera(): Looking up redis::config_file in JSON backend", > "Debug: hiera(): Looking up redis::config_file_mode in JSON backend", > "Debug: hiera(): Looking up redis::config_file_orig in JSON backend", > "Debug: hiera(): Looking up redis::config_group in JSON backend", > "Debug: hiera(): Looking up redis::config_owner in JSON backend", > "Debug: hiera(): Looking up redis::daemonize in JSON backend", > "Debug: hiera(): Looking up redis::databases in JSON backend", > "Debug: hiera(): Looking up redis::default_install in JSON backend", > "Debug: hiera(): Looking up redis::dbfilename in JSON backend", > "Debug: hiera(): Looking up redis::extra_config_file in JSON backend", > "Debug: hiera(): Looking up redis::hash_max_ziplist_entries in JSON backend", > "Debug: hiera(): Looking up redis::hash_max_ziplist_value in JSON backend", > "Debug: hiera(): Looking up redis::hll_sparse_max_bytes in JSON backend", > "Debug: hiera(): Looking up redis::hz in JSON backend", > "Debug: hiera(): Looking up redis::latency_monitor_threshold in JSON backend", > "Debug: hiera(): Looking up redis::list_max_ziplist_entries in JSON backend", > "Debug: hiera(): Looking up redis::list_max_ziplist_value in JSON backend", > "Debug: hiera(): Looking up redis::log_dir in JSON backend", > "Debug: hiera(): Looking up redis::log_dir_mode in JSON backend", > "Debug: hiera(): Looking up redis::log_file in JSON backend", > "Debug: hiera(): Looking up redis::log_level in JSON backend", > "Debug: hiera(): Looking up redis::manage_package in JSON backend", > "Debug: hiera(): Looking up 
redis::manage_repo in JSON backend", > "Debug: hiera(): Looking up redis::masterauth in JSON backend", > "Debug: hiera(): Looking up redis::maxclients in JSON backend", > "Debug: hiera(): Looking up redis::maxmemory in JSON backend", > "Debug: hiera(): Looking up redis::maxmemory_policy in JSON backend", > "Debug: hiera(): Looking up redis::maxmemory_samples in JSON backend", > "Debug: hiera(): Looking up redis::min_slaves_max_lag in JSON backend", > "Debug: hiera(): Looking up redis::min_slaves_to_write in JSON backend", > "Debug: hiera(): Looking up redis::no_appendfsync_on_rewrite in JSON backend", > "Debug: hiera(): Looking up redis::notify_keyspace_events in JSON backend", > "Debug: hiera(): Looking up redis::notify_service in JSON backend", > "Debug: hiera(): Looking up redis::managed_by_cluster_manager in JSON backend", > "Debug: hiera(): Looking up redis::package_ensure in JSON backend", > "Debug: hiera(): Looking up redis::package_name in JSON backend", > "Debug: hiera(): Looking up redis::pid_file in JSON backend", > "Debug: hiera(): Looking up redis::port in JSON backend", > "Debug: hiera(): Looking up redis::protected_mode in JSON backend", > "Debug: hiera(): Looking up redis::ppa_repo in JSON backend", > "Debug: hiera(): Looking up redis::rdbcompression in JSON backend", > "Debug: hiera(): Looking up redis::repl_backlog_size in JSON backend", > "Debug: hiera(): Looking up redis::repl_backlog_ttl in JSON backend", > "Debug: hiera(): Looking up redis::repl_disable_tcp_nodelay in JSON backend", > "Debug: hiera(): Looking up redis::repl_ping_slave_period in JSON backend", > "Debug: hiera(): Looking up redis::repl_timeout in JSON backend", > "Debug: hiera(): Looking up redis::requirepass in JSON backend", > "Debug: hiera(): Looking up redis::save_db_to_disk in JSON backend", > "Debug: hiera(): Looking up redis::save_db_to_disk_interval in JSON backend", > "Debug: hiera(): Looking up redis::service_enable in JSON backend", > "Debug: hiera(): Looking up redis::service_ensure in JSON backend", > "Debug: hiera(): Looking up redis::service_group in JSON backend", > "Debug: hiera(): Looking up redis::service_hasrestart in JSON backend", > "Debug: hiera(): Looking up redis::service_hasstatus in JSON backend", > "Debug: hiera(): Looking up redis::service_manage in JSON backend", > "Debug: hiera(): Looking up redis::service_name in JSON backend", > "Debug: hiera(): Looking up redis::service_provider in JSON backend", > "Debug: hiera(): Looking up redis::service_user in JSON backend", > "Debug: hiera(): Looking up redis::set_max_intset_entries in JSON backend", > "Debug: hiera(): Looking up redis::slave_priority in JSON backend", > "Debug: hiera(): Looking up redis::slave_read_only in JSON backend", > "Debug: hiera(): Looking up redis::slave_serve_stale_data in JSON backend", > "Debug: hiera(): Looking up redis::slaveof in JSON backend", > "Debug: hiera(): Looking up redis::slowlog_log_slower_than in JSON backend", > "Debug: hiera(): Looking up redis::slowlog_max_len in JSON backend", > "Debug: hiera(): Looking up redis::stop_writes_on_bgsave_error in JSON backend", > "Debug: hiera(): Looking up redis::syslog_enabled in JSON backend", > "Debug: hiera(): Looking up redis::syslog_facility in JSON backend", > "Debug: hiera(): Looking up redis::tcp_backlog in JSON backend", > "Debug: hiera(): Looking up redis::tcp_keepalive in JSON backend", > "Debug: hiera(): Looking up redis::timeout in JSON backend", > "Debug: hiera(): Looking up redis::unixsocket in JSON backend", > "Debug: hiera(): Looking 
up redis::unixsocketperm in JSON backend",
> "Debug: hiera(): Looking up redis::ulimit in JSON backend",
> "Debug: hiera(): Looking up redis::workdir in JSON backend",
> "Debug: hiera(): Looking up redis::workdir_mode in JSON backend",
> "Debug: hiera(): Looking up redis::zset_max_ziplist_entries in JSON backend",
> "Debug: hiera(): Looking up redis::zset_max_ziplist_value in JSON backend",
> "Debug: hiera(): Looking up redis::cluster_enabled in JSON backend",
> "Debug: hiera(): Looking up redis::cluster_config_file in JSON backend",
> "Debug: hiera(): Looking up redis::cluster_node_timeout in JSON backend",
> "Debug: importing '/etc/puppet/modules/redis/manifests/preinstall.pp' in environment production",
> "Debug: Automatically imported redis::preinstall from redis/preinstall into production",
> "Debug: importing '/etc/puppet/modules/redis/manifests/install.pp' in environment production",
> "Debug: Automatically imported redis::install from redis/install into production",
> "Debug: importing '/etc/puppet/modules/redis/manifests/config.pp' in environment production",
> "Debug: Automatically imported redis::config from redis/config into production",
> "Debug: importing '/etc/puppet/modules/redis/manifests/instance.pp' in environment production",
> "Debug: Automatically imported redis::instance from redis/instance into production",
> "Debug: importing '/etc/puppet/modules/redis/manifests/ulimit.pp' in environment production",
> "Debug: Automatically imported redis::ulimit from redis/ulimit into production",
> "Debug: importing '/etc/puppet/modules/redis/manifests/service.pp' in environment production",
> "Debug: Automatically imported redis::service from redis/service into production",
> "Debug: hiera(): Looking up redis_short_node_names in JSON backend",
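[Note] The long run of "hiera(): Looking up redis::<param> in JSON backend" lines above is Puppet's automatic class parameter lookup at work: when a class is included, each of its parameters is first looked up in hiera as '<class>::<parameter>' before the in-manifest default is used, so merely evaluating the redis class emits one lookup per parameter. A minimal sketch of the mechanism with a hypothetical class (not the redis module):

    # Including this class makes Puppet query hiera for demo_redis::port and
    # demo_redis::log_level before falling back to the defaults below, which
    # is exactly what produces the 'Looking up ...' debug lines.
    class demo_redis (
      $port      = 6379,
      $log_level = 'notice',
    ) {
      notify { "demo_redis on port ${port} at level ${log_level}": }
    }
    include demo_redis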
> "Debug: Scope(Redis::Instance[default]): Retrieving template redis/redis.conf.3.2.erb",
> "Debug: template[/etc/puppet/modules/redis/templates/redis.conf.3.2.erb]: Bound template variables for /etc/puppet/modules/redis/templates/redis.conf.3.2.erb in 0.01 seconds",
> "Debug: template[/etc/puppet/modules/redis/templates/redis.conf.3.2.erb]: Interpolated template /etc/puppet/modules/redis/templates/redis.conf.3.2.erb in 0.01 seconds",
> "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_resource[redis] with 'before'",
> "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_property[property-controller-0-redis-role] with 'before'",
> "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_bundle[redis-bundle] with 'before'",
> "Debug: Adding relationship from Class[Redis::Preinstall] to Class[Redis::Install] with 'before'",
> "Debug: Adding relationship from Class[Redis::Install] to Class[Redis::Config] with 'before'",
> "Debug: File[/etc/redis]: Adding default for owner",
> "Debug: File[/etc/redis]: Adding default for group",
> "Debug: File[/etc/systemd/system/redis.service.d/]: Adding default for mode",
> "Debug: File[/etc/redis.conf.puppet]: Adding default for owner",
> "Debug: File[/etc/redis.conf.puppet]: Adding default for group",
> "Debug: File[/etc/redis.conf.puppet]: Adding default for mode",
> "Notice: Compiled catalog for controller-0.localdomain in environment production in 1.46 seconds",
> "Info: Applying configuration version '1537533261'",
> "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_resource[redis]",
> "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_property[property-controller-0-redis-role]",
> "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_bundle[redis-bundle]",
> "Debug: /Stage[main]/Redis::Preinstall/before: subscribes to Class[Redis::Install]",
> "Debug: /Stage[main]/Redis::Install/before: subscribes to Class[Redis::Config]",
> "Debug: /Stage[main]/Redis::Ulimit/Augeas[Systemd redis ulimit]/notify: subscribes to Exec[systemd-reload-redis]",
> "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Redis_bundle/Pacemaker::Property[redis-role-controller-0]/before: subscribes to Pacemaker::Resource::Bundle[redis-bundle]",
> "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Redis_bundle/Pacemaker::Resource::Ocf[redis]/require: subscribes to Pacemaker::Resource::Bundle[redis-bundle]",
> "Debug: /Stage[main]/Redis::Config/Redis::Instance[default]/Exec[cp -p /etc/redis.conf.puppet /etc/redis.conf]/subscribe: subscribes to File[/etc/redis.conf.puppet]",
> "Debug: /Stage[main]/Redis::Ulimit/File[/etc/systemd/system/redis.service.d/limit.conf]: Adding autorequire relationship with File[/etc/systemd/system/redis.service.d/]",
> "Debug: Stage[main]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Class[Settings]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Class[Main]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Class[Tripleo::Profile::Base::Pacemaker]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Class[Pacemaker::Params]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Class[Pacemaker::Install]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: /Stage[main]/Pacemaker::Install/Package[pacemaker]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: /Stage[main]/Pacemaker::Install/Package[pcs]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: /Stage[main]/Pacemaker::Install/Package[fence-agents-all]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: /Stage[main]/Pacemaker::Install/Package[pacemaker-libs]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Class[Pacemaker::Service]: Not tagged with file, file_line, concat, augeas, 
pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Systemd::Unit_file[docker.service]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Pacemaker::Stonith]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Property[Disable STONITH]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Pacemaker::Resource_defaults]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Resource_defaults/Pcmk_resource_default[resource-stickiness]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Tripleo::Profile::Pacemaker::Database::Redis_bundle]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Tripleo::Profile::Pacemaker::Database::Redis_bundle]: Resource is being skipped, unscheduling all events", > "Debug: Class[Redis::Params]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Redis::Params]: Resource is being skipped, unscheduling all events", > "Debug: Class[Redis]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Redis]: Resource is being skipped, unscheduling all events", > "Debug: Class[Redis::Preinstall]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Redis::Preinstall]: Resource is being skipped, unscheduling all events", > "Debug: Class[Redis::Install]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Redis::Install]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Redis::Install/Package[redis]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Redis::Install/Package[redis]: Resource is being skipped, unscheduling all events", > "Debug: Class[Redis::Config]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, 
pacemaker::constraint::colocation",
> "Debug: Class[Redis::Config]: Resource is being skipped, unscheduling all events",
> "Notice: /Stage[main]/Redis::Config/File[/etc/redis]/ensure: created",
> "Debug: /Stage[main]/Redis::Config/File[/etc/redis]: The container Class[Redis::Config] will propagate my refresh event",
> "Notice: /Stage[main]/Redis::Config/File[/var/log/redis]/mode: mode changed '0750' to '0755'",
> "Debug: /Stage[main]/Redis::Config/File[/var/log/redis]: The container Class[Redis::Config] will propagate my refresh event",
> "Notice: /Stage[main]/Redis::Config/File[/var/lib/redis]/mode: mode changed '0750' to '0755'",
> "Debug: /Stage[main]/Redis::Config/File[/var/lib/redis]: The container Class[Redis::Config] will propagate my refresh event",
> "Debug: Redis::Instance[default]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Redis::Instance[default]: Resource is being skipped, unscheduling all events",
> "Debug: Class[Redis::Ulimit]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Class[Redis::Ulimit]: Resource is being skipped, unscheduling all events",
> "Notice: /Stage[main]/Redis::Ulimit/File[/etc/security/limits.d/redis.conf]/ensure: defined content as '{md5}a2f723773964f5ea42b6c7c5d6b72208'",
> "Debug: /Stage[main]/Redis::Ulimit/File[/etc/security/limits.d/redis.conf]: The container Class[Redis::Ulimit] will propagate my refresh event",
> "Notice: /Stage[main]/Redis::Ulimit/File[/etc/systemd/system/redis.service.d/limit.conf]/mode: mode changed '0644' to '0444'",
> "Debug: /Stage[main]/Redis::Ulimit/File[/etc/systemd/system/redis.service.d/limit.conf]: The container Class[Redis::Ulimit] will propagate my refresh event",
> "Debug: Augeas[Systemd redis ulimit](provider=augeas): Opening augeas with root /, lens path , flags 64",
> "Debug: Augeas[Systemd redis ulimit](provider=augeas): Augeas version 1.4.0 is installed",
> "Debug: Augeas[Systemd redis ulimit](provider=augeas): Will attempt to save and only run if files changed",
> "Debug: Augeas[Systemd redis ulimit](provider=augeas): sending command 'defnode' with params [\"nofile\", \"/etc/systemd/system/redis.service.d/limits.conf/Service/LimitNOFILE\", \"\"]",
> "Debug: Augeas[Systemd redis ulimit](provider=augeas): sending command 'set' with params [\"$nofile/value\", \"10240\"]",
> "Debug: Augeas[Systemd redis ulimit](provider=augeas): Skipping because no files were changed",
> "Debug: Augeas[Systemd redis ulimit](provider=augeas): Closed the augeas connection",
> "Info: Class[Redis::Ulimit]: Unscheduling all events on Class[Redis::Ulimit]",
> "Debug: Class[Redis::Service]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Class[Redis::Service]: Resource is being skipped, unscheduling all events",
> "Debug: /Stage[main]/Redis/Exec[systemd-reload-redis]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: /Stage[main]/Redis/Exec[systemd-reload-redis]: Resource is being skipped, unscheduling all events",
> "Debug: 
Pacemaker::Property[redis-role-controller-0]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Property[redis-role-controller-0]: Resource is being skipped, unscheduling all events", > "Debug: Class[Systemd]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Pacemaker]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Pacemaker::Corosync]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Service/Service[pcsd]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Corosync/User[hacluster]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[reauthenticate-across-all-nodes]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[auth-successful-across-all-nodes]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Create Cluster tripleo_cluster]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster tripleo_cluster]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Service/Service[corosync]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Service/Service[pacemaker]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Systemd::Systemctl::Daemon_reload]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, 
pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Systemd::Systemctl::Daemon_reload/Exec[systemctl-daemon-reload]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1aktr5o returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1aktr5o property show | grep stonith-enabled | grep false > /dev/null 2>&1", > "Notice: /Stage[main]/Redis::Config/Redis::Instance[default]/File[/etc/redis.conf.puppet]/ensure: defined content as '{md5}97b1dbb27fe87d869eb0f6421f1b8aff'", > "Info: /Stage[main]/Redis::Config/Redis::Instance[default]/File[/etc/redis.conf.puppet]: Scheduling refresh of Exec[cp -p /etc/redis.conf.puppet /etc/redis.conf]", > "Debug: /Stage[main]/Redis::Config/Redis::Instance[default]/File[/etc/redis.conf.puppet]: The container Redis::Instance[default] will propagate my refresh event", > "Debug: /Stage[main]/Redis::Config/Redis::Instance[default]/Exec[cp -p /etc/redis.conf.puppet /etc/redis.conf]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Redis::Config/Redis::Instance[default]/Exec[cp -p /etc/redis.conf.puppet /etc/redis.conf]: Resource is being skipped, unscheduling all events", > "Info: /Stage[main]/Redis::Config/Redis::Instance[default]/Exec[cp -p /etc/redis.conf.puppet /etc/redis.conf]: Unscheduling all events on Exec[cp -p /etc/redis.conf.puppet /etc/redis.conf]", > "Info: Redis::Instance[default]: Unscheduling all events on Redis::Instance[default]", > "Info: Class[Redis::Config]: Unscheduling all events on Class[Redis::Config]" >, > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1ep687l returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1ep687l property show | grep redis-role | grep controller-0 | grep true > /dev/null 2>&1", > "Debug: property exists: property show | grep redis-role | grep controller-0 | grep true > /dev/null 2>&1 -> false", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1gtormp returned ", > "Debug: try 1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1gtormp property set --node controller-0 redis-role=true", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1gtormp diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1gtormp.orig returned 0 -> CIB updated", > "Debug: property create: property set --node controller-0 redis-role=true -> ", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Redis_bundle/Pacemaker::Property[redis-role-controller-0]/Pcmk_property[property-controller-0-redis-role]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Redis_bundle/Pacemaker::Property[redis-role-controller-0]/Pcmk_property[property-controller-0-redis-role]: The container Pacemaker::Property[redis-role-controller-0] will propagate my refresh event", > "Info: Pacemaker::Property[redis-role-controller-0]: Unscheduling all events on Pacemaker::Property[redis-role-controller-0]", > "Debug: Pacemaker::Resource::Bundle[redis-bundle]: Not 
tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Resource::Bundle[redis-bundle]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-6902nk returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-6902nk constraint list | grep location-redis-bundle > /dev/null 2>&1", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-lgchq0 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-lgchq0 resource show redis-bundle > /dev/null 2>&1", > "Debug: Exists: bundle redis-bundle exists 1 location exists 1 deep_compare: true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-127uw0u returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-127uw0u resource bundle create redis-bundle container docker image=192.168.24.1:8787/rhosp14/openstack-redis:pcmklatest replicas=1 masters=1 options=\"--user=root --log-driver=journald -e KOLLA_CONFIG_STRATEGY=COPY_ALWAYS\" run-command=\"/bin/bash /usr/local/bin/kolla_start\" network=host storage-map id=redis-cfg-files source-dir=/var/lib/kolla/config_files/redis.json target-dir=/var/lib/kolla/config_files/config.json options=ro storage-map id=redis-cfg-data-redis source-dir=/var/lib/config-data/puppet-generated/redis/ target-dir=/var/lib/kolla/config_files/src options=ro storage-map id=redis-hosts source-dir=/etc/hosts target-dir=/etc/hosts options=ro storage-map id=redis-localtime source-dir=/etc/localtime target-dir=/etc/localtime options=ro storage-map id=redis-lib source-dir=/var/lib/redis target-dir=/var/lib/redis options=rw storage-map id=redis-log source-dir=/var/log/containers/redis target-dir=/var/log/redis options=rw storage-map id=redis-run source-dir=/var/run/redis target-dir=/var/run/redis options=rw storage-map id=redis-pki-extracted source-dir=/etc/pki/ca-trust/extracted target-dir=/etc/pki/ca-trust/extracted options=ro storage-map id=redis-pki-ca-bundle-crt source-dir=/etc/pki/tls/certs/ca-bundle.crt target-dir=/etc/pki/tls/certs/ca-bundle.crt options=ro storage-map id=redis-pki-ca-bundle-trust-crt source-dir=/etc/pki/tls/certs/ca-bundle.trust.crt target-dir=/etc/pki/tls/certs/ca-bundle.trust.crt options=ro storage-map id=redis-pki-cert source-dir=/etc/pki/tls/cert.pem target-dir=/etc/pki/tls/cert.pem options=ro storage-map id=redis-dev-log source-dir=/dev/log target-dir=/dev/log options=rw network control-port=3124 --disabled", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-127uw0u diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180921-8-127uw0u.orig returned 0 -> CIB updated", > "Debug: build_pcs_location_rule_cmd: constraint location redis-bundle rule resource-discovery=exclusive score=0 redis-role eq true", > "Debug: location_rule_create: constraint location redis-bundle rule resource-discovery=exclusive score=0 redis-role eq true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-15bb83s returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-15bb83s constraint location redis-bundle rule resource-discovery=exclusive score=0 redis-role eq true", > "Debug: 
push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-15bb83s diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180921-8-15bb83s.orig returned 0 -> CIB updated",
> "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-3aoxsg returned ",
> "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-3aoxsg resource enable redis-bundle",
> "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-3aoxsg diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180921-8-3aoxsg.orig returned 0 -> CIB updated",
> "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Redis_bundle/Pacemaker::Resource::Bundle[redis-bundle]/Pcmk_bundle[redis-bundle]/ensure: created",
> "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Redis_bundle/Pacemaker::Resource::Bundle[redis-bundle]/Pcmk_bundle[redis-bundle]: The container Pacemaker::Resource::Bundle[redis-bundle] will propagate my refresh event",
> "Info: Pacemaker::Resource::Bundle[redis-bundle]: Unscheduling all events on Pacemaker::Resource::Bundle[redis-bundle]",
> "Debug: Pacemaker::Resource::Ocf[redis]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Pacemaker::Resource::Ocf[redis]: Resource is being skipped, unscheduling all events",
> "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1yzxym4 returned ",
> "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1yzxym4 constraint list | grep location-redis-bundle > /dev/null 2>&1",
> "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1nu5625 returned ",
> "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1nu5625 resource show redis > /dev/null 2>&1",
> "Debug: Exists: resource redis exists 1 location exists 0 resource deep_compare: true",
> "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-dc59if returned ",
> "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-dc59if resource create redis ocf:heartbeat:redis wait_last_known_master=true meta notify=true ordered=true interleave=true container-attribute-target=host op start timeout=200s stop timeout=200s bundle redis-bundle",
> "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-dc59if diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180921-8-dc59if.orig returned 0 -> CIB updated",
> "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Redis_bundle/Pacemaker::Resource::Ocf[redis]/Pcmk_resource[redis]/ensure: created",
> "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Redis_bundle/Pacemaker::Resource::Ocf[redis]/Pcmk_resource[redis]: The container Pacemaker::Resource::Ocf[redis] will propagate my refresh event",
> "Info: Pacemaker::Resource::Ocf[redis]: Unscheduling all events on Pacemaker::Resource::Ocf[redis]",
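The backup_cib / push_cib pairs above are the pattern the puppet-pacemaker providers use for every CIB change: dump the CIB to a scratch file, apply the change offline with "pcs -f", then push only the resulting difference back to the live cluster. A minimal sketch of the same sequence (the scratch-file names are illustrative, not the generated ones logged above):

    # dump the running CIB to a scratch file and keep a pristine copy
    pcs cluster cib /tmp/cib-work.xml
    cp /tmp/cib-work.xml /tmp/cib-work.xml.orig
    # modify only the offline copy; the live cluster is untouched
    pcs -f /tmp/cib-work.xml resource enable redis-bundle
    # push just the delta between the edited and pristine copies
    pcs cluster cib-push /tmp/cib-work.xml diff-against=/tmp/cib-work.xml.orig

Pushing a diff rather than the whole file keeps a slow provider run from clobbering CIB updates made concurrently from the other controllers.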
> "Debug: /Schedule[puppet]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: /Schedule[hourly]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: /Schedule[daily]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: /Schedule[weekly]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: /Schedule[monthly]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: /Schedule[never]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: /Filebucket[puppet]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Finishing transaction 33489760",
> "Notice: Applied catalog in 40.09 seconds",
> " Total: 13",
> " Success: 13",
> " Changed: 13",
> " Out of sync: 13",
> " Total: 42",
> " Augeas: 0.01",
> " File: 0.02",
> " Config retrieval: 1.60",
> " Pcmk resource: 11.29",
> " Last run: 1537533302",
> " Pcmk bundle: 19.11",
> " Total: 41.48",
> " Pcmk property: 9.45",
> " Config: 1537533261",
> "Debug: Finishing transaction 34604560",
> "+ TAGS=file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation",
> "+ CONFIG='include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::redis_bundle'",
> "+ puppet apply --debug --verbose --detailed-exitcodes --summarize --color=false --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation -e 'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::redis_bundle'",
> "Warning: ModuleLoader: module 'redis' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules",
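The "+" lines are the shell trace of how each deployment step invokes Puppet: a tag list confines the run to a handful of resource types and an inline manifest pulls in only the profiles this step owns. The same invocation, reduced to its shape (all paths, tags and profile names exactly as traced above):

    TAGS=file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation
    CONFIG='include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::redis_bundle'
    puppet apply --detailed-exitcodes --summarize \
        --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules \
        --tags "$TAGS" -e "$CONFIG"

Because --tags is set, every resource whose tags do not match is reported as "Not tagged with ..." and skipped, which is why most of the compiled catalog above is unscheduled and only the file, augeas and pacemaker types are actually applied.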
> "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/pacemaker/haproxy_bundle.pp' in environment production",
> "Debug: Automatically imported tripleo::profile::pacemaker::haproxy_bundle from tripleo/profile/pacemaker/haproxy_bundle into production",
> "Debug: hiera(): Looking up tripleo::profile::pacemaker::haproxy_bundle::haproxy_docker_image in JSON backend",
> "Debug: hiera(): Looking up tripleo::profile::pacemaker::haproxy_bundle::bootstrap_node in JSON backend",
> "Debug: hiera(): Looking up tripleo::profile::pacemaker::haproxy_bundle::enable_load_balancer in JSON backend",
> "Debug: hiera(): Looking up tripleo::profile::pacemaker::haproxy_bundle::ca_bundle in JSON backend",
> "Debug: hiera(): Looking up tripleo::profile::pacemaker::haproxy_bundle::crl_file in JSON backend",
> "Debug: hiera(): Looking up tripleo::profile::pacemaker::haproxy_bundle::enable_internal_tls in JSON backend",
> "Debug: hiera(): Looking up tripleo::profile::pacemaker::haproxy_bundle::internal_certs_directory in JSON backend",
> "Debug: hiera(): Looking up tripleo::profile::pacemaker::haproxy_bundle::internal_keys_directory in JSON backend",
> "Debug: hiera(): Looking up tripleo::profile::pacemaker::haproxy_bundle::deployed_ssl_cert_path in JSON backend",
> "Debug: hiera(): Looking up tripleo::profile::pacemaker::haproxy_bundle::step in JSON backend",
> "Debug: hiera(): Looking up tripleo::profile::pacemaker::haproxy_bundle::pcs_tries in JSON backend",
> "Debug: hiera(): Looking up haproxy_short_bootstrap_node_name in JSON backend",
> "Debug: hiera(): Looking up enable_load_balancer in JSON backend",
> "Debug: hiera(): Looking up tripleo::haproxy::ca_bundle in JSON backend",
> "Debug: hiera(): Looking up tripleo::haproxy::crl_file in JSON backend",
> "Debug: hiera(): Looking up tripleo::haproxy::service_certificate in JSON backend",
> "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/base/haproxy.pp' in environment production",
> "Debug: Automatically imported tripleo::profile::base::haproxy from tripleo/profile/base/haproxy into production",
> "Debug: hiera(): Looking up tripleo::profile::base::haproxy::certificates_specs in JSON backend",
> "Debug: hiera(): Looking up tripleo::profile::base::haproxy::enable_load_balancer in JSON backend",
> "Debug: hiera(): Looking up tripleo::profile::base::haproxy::manage_firewall in JSON backend",
> "Debug: hiera(): Looking up tripleo::profile::base::haproxy::step in JSON backend",
> "Debug: hiera(): Looking up tripleo::firewall::manage_firewall in JSON backend",
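Each "hiera(): Looking up <key> in JSON backend" line is one class-parameter lookup against the node's JSON hieradata. A minimal sketch of what such a lookup resolves against (the file path and value are illustrative, not taken from this deployment):

    # a JSON hierarchy level, e.g. /etc/puppet/hieradata/service_configs.json,
    # holding fully qualified class parameters:
    #   { "tripleo::haproxy::haproxy_global_maxconn": 20480 }
    # the same key can be resolved by hand through the hiera CLI:
    hiera -c /etc/puppet/hiera.yaml tripleo::haproxy::haproxy_global_maxconn

A key found in no hierarchy level simply comes back unresolved, and the class parameter falls back to its default in the manifest; that is why most of these lookups produce no further output.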
> "Debug: importing '/etc/puppet/modules/tripleo/manifests/haproxy.pp' in environment production",
> "Debug: Automatically imported tripleo::haproxy from tripleo/haproxy into production",
> "Debug: hiera(): Looking up tripleo::haproxy::controller_virtual_ip in JSON backend",
> "Debug: hiera(): Looking up tripleo::haproxy::public_virtual_ip in JSON backend",
> "Debug: hiera(): Looking up tripleo::haproxy::haproxy_service_manage in JSON backend",
> "Debug: hiera(): Looking up tripleo::haproxy::haproxy_global_maxconn in JSON backend",
> "Debug: hiera(): Looking up tripleo::haproxy::haproxy_default_maxconn in JSON backend",
> "Debug: hiera(): Looking up tripleo::haproxy::haproxy_default_timeout in JSON backend",
> "Debug: hiera(): Looking up tripleo::haproxy::haproxy_listen_bind_param in JSON backend",
> "Debug: hiera(): Looking up tripleo::haproxy::haproxy_member_options in JSON backend",
> "Debug: hiera(): Looking up tripleo::haproxy::haproxy_log_address in JSON backend",
> "Debug: hiera(): Looking up tripleo::haproxy::activate_httplog in JSON backend",
> "Debug: hiera(): Looking up tripleo::haproxy::haproxy_globals_override in JSON backend",
> "Debug: hiera(): Looking up tripleo::haproxy::haproxy_defaults_override in JSON backend",
> "Debug: hiera(): Looking up tripleo::haproxy::haproxy_daemon in JSON backend",
> "Debug: hiera(): Looking up tripleo::haproxy::haproxy_socket_access_level in JSON backend",
> "Debug: hiera(): Looking up tripleo::haproxy::haproxy_stats_user in JSON backend",
> "Debug: hiera(): Looking up tripleo::haproxy::haproxy_stats_password in JSON backend",
> "Debug: hiera(): Looking up tripleo::haproxy::controller_hosts in JSON backend",
> "Debug: hiera(): Looking up tripleo::haproxy::controller_hosts_names in JSON backend",
> "Debug: hiera(): Looking up tripleo::haproxy::use_internal_certificates in JSON backend",
> "Debug: hiera(): Looking up tripleo::haproxy::enable_internal_tls in JSON backend",
> "Debug: hiera(): Looking up tripleo::haproxy::ssl_cipher_suite in JSON backend",
> "Debug: hiera(): Looking up tripleo::haproxy::ssl_options in JSON backend",
> "Debug: hiera(): Looking up tripleo::haproxy::haproxy_stats_certificate in JSON backend",
> "Debug: hiera(): Looking up tripleo::haproxy::haproxy_stats in JSON backend",
> "Debug: hiera(): Looking up tripleo::haproxy::keystone_admin in JSON backend",
> "Debug: hiera(): Looking up tripleo::haproxy::keystone_public in JSON backend",
> "Debug: hiera(): Looking up tripleo::haproxy::neutron in JSON backend",
> "Debug: hiera(): Looking up tripleo::haproxy::cinder in JSON backend",
> "Debug: hiera(): Looking up tripleo::haproxy::congress in JSON backend",
> "Debug: hiera(): Looking up tripleo::haproxy::manila in JSON backend",
> "Debug: hiera(): Looking up tripleo::haproxy::sahara in JSON backend",
> "Debug: hiera(): Looking up tripleo::haproxy::tacker in JSON backend",
> "Debug: hiera(): Looking up tripleo::haproxy::trove in JSON backend",
> "Debug: hiera(): Looking up tripleo::haproxy::glance_api in JSON backend",
> "Debug: hiera(): Looking up tripleo::haproxy::nova_osapi in JSON backend",
> "Debug: hiera(): Looking up tripleo::haproxy::nova_placement in JSON backend",
> "Debug: hiera(): Looking up tripleo::haproxy::nova_metadata in JSON backend",
> "Debug: hiera(): Looking up tripleo::haproxy::nova_novncproxy in JSON backend",
> "Debug: hiera(): Looking up tripleo::haproxy::ec2_api in JSON backend",
> "Debug: hiera(): Looking up tripleo::haproxy::ec2_api_metadata in JSON backend",
> "Debug: hiera(): Looking up tripleo::haproxy::aodh in JSON backend",
> "Debug: hiera(): Looking up tripleo::haproxy::panko in JSON backend",
> "Debug: hiera(): Looking up tripleo::haproxy::barbican in JSON backend",
> "Debug: hiera(): Looking up tripleo::haproxy::gnocchi in JSON backend",
> "Debug: hiera(): Looking up tripleo::haproxy::mistral in JSON backend",
> "Debug: hiera(): Looking up tripleo::haproxy::swift_proxy_server in JSON backend",
> "Debug: hiera(): Looking up tripleo::haproxy::heat_api in JSON backend",
> "Debug: hiera(): Looking up tripleo::haproxy::heat_cfn in JSON backend",
> "Debug: hiera(): Looking up tripleo::haproxy::horizon in JSON backend",
> "Debug: hiera(): Looking up tripleo::haproxy::ironic in JSON backend",
> "Debug: hiera(): Looking up tripleo::haproxy::ironic_inspector in JSON backend",
> "Debug: hiera(): Looking up tripleo::haproxy::octavia in JSON backend",
> "Debug: hiera(): Looking up tripleo::haproxy::designate in JSON 
backend", > "Debug: hiera(): Looking up tripleo::haproxy::mysql in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::kubernetes_master in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::mysql_clustercheck in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::mysql_max_conn in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::mysql_member_options in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::openshift_master in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::rabbitmq in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::etcd in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::docker_registry in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::redis in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::redis_password in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::midonet_api in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::zaqar_api in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::ceph_rgw in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::opendaylight in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::ovn_dbs in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::ovn_dbs_manage_lb in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::zaqar_ws in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::ui in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::aodh_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::barbican_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::ceph_rgw_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::cinder_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::congress_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::designate_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::docker_registry_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::glance_api_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::gnocchi_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::heat_api_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::heat_cfn_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::horizon_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::ironic_inspector_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::ironic_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::kubernetes_master_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::keystone_admin_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::keystone_public_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::keystone_sticky_sessions in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::keystone_session_cookie in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::manila_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::mistral_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::neutron_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::nova_metadata_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::nova_novncproxy_network in JSON backend", > "Debug: 
hiera(): Looking up tripleo::haproxy::nova_osapi_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::nova_placement_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::octavia_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::opendaylight_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::openshift_master_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::panko_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::ovn_dbs_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::ec2_api_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::ec2_api_metadata_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::etcd_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::sahara_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::swift_proxy_server_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::tacker_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::trove_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::zaqar_api_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::service_ports in JSON backend", > "Debug: hiera(): Looking up controller_node_ips in JSON backend", > "Debug: hiera(): Looking up controller_node_names in JSON backend", > "Debug: hiera(): Looking up nova_vnc_proxy_enabled in JSON backend", > "Debug: hiera(): Looking up swift_proxy_enabled in JSON backend", > "Debug: hiera(): Looking up heat_api_enabled in JSON backend", > "Debug: hiera(): Looking up heat_api_cfn_enabled in JSON backend", > "Debug: hiera(): Looking up horizon_enabled in JSON backend", > "Debug: hiera(): Looking up mysql_enabled in JSON backend", > "Debug: hiera(): Looking up kubernetes_master_enabled in JSON backend", > "Debug: hiera(): Looking up openshift_master_enabled in JSON backend", > "Debug: hiera(): Looking up etcd_enabled in JSON backend", > "Debug: hiera(): Looking up enable_docker_registry in JSON backend", > "Debug: hiera(): Looking up redis_enabled in JSON backend", > "Debug: hiera(): Looking up ceph_rgw_enabled in JSON backend", > "Debug: hiera(): Looking up opendaylight_api_enabled in JSON backend", > "Debug: hiera(): Looking up ovn_dbs_enabled in JSON backend", > "Debug: hiera(): Looking up tripleo_ui_enabled in JSON backend", > "Debug: hiera(): Looking up enable_ui in JSON backend", > "Debug: hiera(): Looking up aodh_api_network in JSON backend", > "Debug: hiera(): Looking up barbican_api_network in JSON backend", > "Debug: hiera(): Looking up ceph_rgw_network in JSON backend", > "Debug: hiera(): Looking up cinder_api_network in JSON backend", > "Debug: hiera(): Looking up congress_api_network in JSON backend", > "Debug: hiera(): Looking up designate_api_network in JSON backend", > "Debug: hiera(): Looking up docker_registry_network in JSON backend", > "Debug: hiera(): Looking up glance_api_network in JSON backend", > "Debug: hiera(): Looking up gnocchi_api_network in JSON backend", > "Debug: hiera(): Looking up heat_api_network in JSON backend", > "Debug: hiera(): Looking up heat_api_cfn_network in JSON backend", > "Debug: hiera(): Looking up horizon_network in JSON backend", > "Debug: hiera(): Looking up ironic_inspector_network in JSON backend", > "Debug: hiera(): Looking up ironic_api_network in JSON backend", > "Debug: hiera(): Looking up kubernetes_master_network in 
JSON backend", > "Debug: hiera(): Looking up keystone_admin_api_network in JSON backend", > "Debug: hiera(): Looking up keystone_public_api_network in JSON backend", > "Debug: hiera(): Looking up keystone_sticky_sessions in JSON backend", > "Debug: hiera(): Looking up keystone_session_cookie, in JSON backend", > "Debug: hiera(): Looking up manila_api_network in JSON backend", > "Debug: hiera(): Looking up mistral_api_network in JSON backend", > "Debug: hiera(): Looking up neutron_api_network in JSON backend", > "Debug: hiera(): Looking up nova_api_network in JSON backend", > "Debug: hiera(): Looking up nova_vnc_proxy_network in JSON backend", > "Debug: hiera(): Looking up nova_placement_network in JSON backend", > "Debug: hiera(): Looking up octavia_api_network in JSON backend", > "Debug: hiera(): Looking up opendaylight_api_network in JSON backend", > "Debug: hiera(): Looking up openshift_master_network in JSON backend", > "Debug: hiera(): Looking up panko_api_network in JSON backend", > "Debug: hiera(): Looking up ovn_dbs_network in JSON backend", > "Debug: hiera(): Looking up ec2_api_network in JSON backend", > "Debug: hiera(): Looking up etcd_network in JSON backend", > "Debug: hiera(): Looking up sahara_api_network in JSON backend", > "Debug: hiera(): Looking up swift_proxy_network in JSON backend", > "Debug: hiera(): Looking up tacker_api_network in JSON backend", > "Debug: hiera(): Looking up trove_api_network in JSON backend", > "Debug: hiera(): Looking up zaqar_api_network in JSON backend", > "Debug: hiera(): Looking up mysql_vip in JSON backend", > "Debug: hiera(): Looking up rabbitmq_vip in JSON backend", > "Debug: hiera(): Looking up redis_vip in JSON backend", > "Debug: importing '/etc/puppet/modules/haproxy/manifests/init.pp' in environment production", > "Debug: Automatically imported haproxy from haproxy into production", > "Debug: importing '/etc/puppet/modules/haproxy/manifests/params.pp' in environment production", > "Debug: Automatically imported haproxy::params from haproxy/params into production", > "Debug: hiera(): Looking up haproxy::package_ensure in JSON backend", > "Debug: hiera(): Looking up haproxy::package_name in JSON backend", > "Debug: hiera(): Looking up haproxy::service_ensure in JSON backend", > "Debug: hiera(): Looking up haproxy::service_options in JSON backend", > "Debug: hiera(): Looking up haproxy::sysconfig_options in JSON backend", > "Debug: hiera(): Looking up haproxy::merge_options in JSON backend", > "Debug: hiera(): Looking up haproxy::restart_command in JSON backend", > "Debug: hiera(): Looking up haproxy::custom_fragment in JSON backend", > "Debug: hiera(): Looking up haproxy::config_dir in JSON backend", > "Debug: hiera(): Looking up haproxy::config_file in JSON backend", > "Debug: hiera(): Looking up haproxy::manage_config_dir in JSON backend", > "Debug: hiera(): Looking up haproxy::config_validate_cmd in JSON backend", > "Debug: hiera(): Looking up haproxy::manage_service in JSON backend", > "Debug: hiera(): Looking up haproxy::enable in JSON backend", > "Debug: importing '/etc/puppet/modules/haproxy/manifests/instance.pp' in environment production", > "Debug: Automatically imported haproxy::instance from haproxy/instance into production", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/haproxy/endpoint.pp' in environment production", > "Debug: Automatically imported tripleo::haproxy::endpoint from tripleo/haproxy/endpoint into production", > "Debug: hiera(): Looking up enabled_services in JSON backend", > "Debug: importing 
'/etc/puppet/modules/tripleo/manifests/haproxy/service_endpoints.pp' in environment production", > "Debug: Automatically imported tripleo::haproxy::service_endpoints from tripleo/haproxy/service_endpoints into production", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/haproxy/stats.pp' in environment production", > "Debug: Automatically imported tripleo::haproxy::stats from tripleo/haproxy/stats into production", > "Debug: hiera(): Looking up tripleo::haproxy::stats::certificate in JSON backend", > "Debug: importing '/etc/puppet/modules/haproxy/manifests/listen.pp' in environment production", > "Debug: Automatically imported haproxy::listen from haproxy/listen into production", > "Debug: hiera(): Looking up keystone_admin_api_vip in JSON backend", > "Debug: hiera(): Looking up keystone_admin_api_node_ips in JSON backend", > "Debug: hiera(): Looking up keystone_admin_api_node_names in JSON backend", > "Debug: hiera(): Looking up keystone_public_api_vip in JSON backend", > "Debug: hiera(): Looking up keystone_public_api_node_ips in JSON backend", > "Debug: hiera(): Looking up keystone_public_api_node_names in JSON backend", > "Debug: hiera(): Looking up neutron_api_vip in JSON backend", > "Debug: hiera(): Looking up neutron_api_node_ips in JSON backend", > "Debug: hiera(): Looking up neutron_api_node_names in JSON backend", > "Debug: hiera(): Looking up cinder_api_vip in JSON backend", > "Debug: hiera(): Looking up cinder_api_node_ips in JSON backend", > "Debug: hiera(): Looking up cinder_api_node_names in JSON backend", > "Debug: hiera(): Looking up sahara_api_vip in JSON backend", > "Debug: hiera(): Looking up sahara_api_node_ips in JSON backend", > "Debug: hiera(): Looking up sahara_api_node_names in JSON backend", > "Debug: hiera(): Looking up glance_api_vip in JSON backend", > "Debug: hiera(): Looking up glance_api_node_ips in JSON backend", > "Debug: hiera(): Looking up glance_api_node_names in JSON backend", > "Debug: hiera(): Looking up nova_api_vip in JSON backend", > "Debug: hiera(): Looking up nova_api_node_ips in JSON backend", > "Debug: hiera(): Looking up nova_api_node_names in JSON backend", > "Debug: hiera(): Looking up nova_placement_vip in JSON backend", > "Debug: hiera(): Looking up nova_placement_node_ips in JSON backend", > "Debug: hiera(): Looking up nova_placement_node_names in JSON backend", > "Debug: hiera(): Looking up nova_metadata_vip in JSON backend", > "Debug: hiera(): Looking up nova_metadata_node_ips in JSON backend", > "Debug: hiera(): Looking up nova_metadata_node_names in JSON backend", > "Debug: hiera(): Looking up aodh_api_vip in JSON backend", > "Debug: hiera(): Looking up aodh_api_node_ips in JSON backend", > "Debug: hiera(): Looking up aodh_api_node_names in JSON backend", > "Debug: hiera(): Looking up panko_api_vip in JSON backend", > "Debug: hiera(): Looking up panko_api_node_ips in JSON backend", > "Debug: hiera(): Looking up panko_api_node_names in JSON backend", > "Debug: hiera(): Looking up gnocchi_api_vip in JSON backend", > "Debug: hiera(): Looking up gnocchi_api_node_ips in JSON backend", > "Debug: hiera(): Looking up gnocchi_api_node_names in JSON backend", > "Debug: hiera(): Looking up swift_proxy_vip in JSON backend", > "Debug: hiera(): Looking up swift_proxy_node_ips in JSON backend", > "Debug: hiera(): Looking up swift_proxy_node_names in JSON backend", > "Debug: hiera(): Looking up heat_api_vip in JSON backend", > "Debug: hiera(): Looking up heat_api_node_ips in JSON backend", > "Debug: hiera(): Looking up 
heat_api_node_names in JSON backend", > "Debug: hiera(): Looking up horizon_vip in JSON backend", > "Debug: hiera(): Looking up horizon_node_ips in JSON backend", > "Debug: hiera(): Looking up horizon_node_names in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/haproxy/horizon_endpoint.pp' in environment production", > "Debug: Automatically imported tripleo::haproxy::horizon_endpoint from tripleo/haproxy/horizon_endpoint into production", > "Debug: hiera(): Looking up tripleo::haproxy::horizon_endpoint::public_certificate in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::horizon::options in JSON backend", > "Debug: importing '/etc/puppet/modules/haproxy/manifests/balancermember.pp' in environment production", > "Debug: Automatically imported haproxy::balancermember from haproxy/balancermember into production", > "Debug: hiera(): Looking up mysql_node_ips in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/firewall.pp' in environment production", > "Debug: Automatically imported tripleo::firewall from tripleo/firewall into production", > "Debug: hiera(): Looking up tripleo::firewall::firewall_chains in JSON backend", > "Debug: hiera(): Looking up tripleo::firewall::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::firewall::purge_firewall_chains in JSON backend", > "Debug: hiera(): Looking up tripleo::firewall::purge_firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::firewall::firewall_pre_extras in JSON backend", > "Debug: hiera(): Looking up tripleo::firewall::firewall_post_extras in JSON backend", > "Debug: Resource class[tripleo::firewall::pre] was not determined to be defined", > "Debug: Create new resource class[tripleo::firewall::pre] with params {\"firewall_settings\"=>{}}", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/firewall/pre.pp' in environment production", > "Debug: Automatically imported tripleo::firewall::pre from tripleo/firewall/pre into production", > "Debug: importing '/etc/puppet/modules/firewall/manifests/init.pp' in environment production", > "Debug: Automatically imported firewall from firewall into production", > "Debug: importing '/etc/puppet/modules/firewall/manifests/params.pp' in environment production", > "Debug: Automatically imported firewall::params from firewall/params into production", > "Debug: hiera(): Looking up firewall::ensure in JSON backend", > "Debug: hiera(): Looking up firewall::ensure_v6 in JSON backend", > "Debug: hiera(): Looking up firewall::pkg_ensure in JSON backend", > "Debug: hiera(): Looking up firewall::service_name in JSON backend", > "Debug: hiera(): Looking up firewall::service_name_v6 in JSON backend", > "Debug: hiera(): Looking up firewall::package_name in JSON backend", > "Debug: hiera(): Looking up firewall::ebtables_manage in JSON backend", > "Debug: importing '/etc/puppet/modules/firewall/manifests/linux.pp' in environment production", > "Debug: Automatically imported firewall::linux from firewall/linux into production", > "Debug: importing '/etc/puppet/modules/firewall/manifests/linux/redhat.pp' in environment production", > "Debug: Automatically imported firewall::linux::redhat from firewall/linux/redhat into production", > "Debug: hiera(): Looking up firewall::linux::redhat::package_ensure in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/firewall/rule.pp' in environment production", > "Debug: Automatically imported tripleo::firewall::rule from tripleo/firewall/rule into 
production", > "Debug: Resource class[tripleo::firewall::post] was not determined to be defined", > "Debug: Create new resource class[tripleo::firewall::post] with params {\"firewall_settings\"=>{}}", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/firewall/post.pp' in environment production", > "Debug: Automatically imported tripleo::firewall::post from tripleo/firewall/post into production", > "Debug: hiera(): Looking up tripleo::firewall::post::debug in JSON backend", > "Debug: hiera(): Looking up tripleo::firewall::post::logging_settings in JSON backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Debug: hiera(): Looking up service_names in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/firewall/service_rules.pp' in environment production", > "Debug: Automatically imported tripleo::firewall::service_rules from tripleo/firewall/service_rules into production", > "Debug: hiera(): Looking up redis_node_ips in JSON backend", > "Debug: hiera(): Looking up redis_node_names in JSON backend", > "Debug: hiera(): Looking up midonet_cluster_vip in JSON backend", > "Debug: hiera(): Looking up haproxy_short_node_names in JSON backend", > "Debug: hiera(): Looking up controller_virtual_ip in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/pacemaker/haproxy_with_vip.pp' in environment production", > "Debug: Automatically imported tripleo::pacemaker::haproxy_with_vip from tripleo/pacemaker/haproxy_with_vip into production", > "Debug: hiera(): Looking up public_virtual_ip in JSON backend", > "Debug: hiera(): Looking up network_virtual_ips in JSON backend", > "Debug: importing '/etc/puppet/modules/haproxy/manifests/config.pp' in environment production", > "Debug: Automatically imported haproxy::config from haproxy/config into production", > "Debug: importing '/etc/puppet/modules/haproxy/manifests/install.pp' in environment production", > "Debug: Automatically imported haproxy::install from haproxy/install into production", > "Debug: importing '/etc/puppet/modules/haproxy/manifests/service.pp' in environment production", > "Debug: Automatically imported haproxy::service from haproxy/service into production", > "Debug: hiera(): Looking up tripleo.aodh_api.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.aodh_api.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::aodh_api::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::aodh_api::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.aodh_evaluator.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.aodh_evaluator.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::aodh_evaluator::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::aodh_evaluator::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.aodh_listener.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.aodh_listener.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::aodh_listener::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::aodh_listener::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.aodh_notifier.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.aodh_notifier.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::aodh_notifier::haproxy_endpoints in 
JSON backend",
> "Debug: hiera(): Looking up tripleo::aodh_notifier::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.ca_certs.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.ca_certs.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::ca_certs::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::ca_certs::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.ceilometer_agent_central.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.ceilometer_agent_central.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::ceilometer_agent_central::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::ceilometer_agent_central::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.ceilometer_agent_notification.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.ceilometer_agent_notification.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::ceilometer_agent_notification::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::ceilometer_agent_notification::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.ceph_mgr.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.ceph_mgr.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::ceph_mgr::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::ceph_mgr::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.ceph_mon.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.ceph_mon.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::ceph_mon::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::ceph_mon::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.certmonger_user.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.certmonger_user.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::certmonger_user::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::certmonger_user::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.cinder_api.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.cinder_api.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::cinder_api::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::cinder_api::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.cinder_backup.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.cinder_backup.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::cinder_backup::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::cinder_backup::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.cinder_scheduler.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.cinder_scheduler.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::cinder_scheduler::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::cinder_scheduler::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.cinder_volume.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.cinder_volume.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::cinder_volume::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::cinder_volume::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.clustercheck.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.clustercheck.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::clustercheck::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::clustercheck::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.container_image_prepare.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.container_image_prepare.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::container_image_prepare::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::container_image_prepare::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.docker.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.docker.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::docker::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::docker::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.glance_api.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.glance_api.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::glance_api::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::glance_api::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.glance_registry_disabled.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.glance_registry_disabled.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::glance_registry_disabled::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::glance_registry_disabled::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.gnocchi_api.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.gnocchi_api.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::gnocchi_api::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::gnocchi_api::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.gnocchi_metricd.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.gnocchi_metricd.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::gnocchi_metricd::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::gnocchi_metricd::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.gnocchi_statsd.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.gnocchi_statsd.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::gnocchi_statsd::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::gnocchi_statsd::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.haproxy.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.haproxy.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::haproxy::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::haproxy::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.heat_api.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.heat_api.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::heat_api::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::heat_api::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.heat_api_cloudwatch_disabled.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.heat_api_cloudwatch_disabled.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::heat_api_cloudwatch_disabled::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::heat_api_cloudwatch_disabled::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.heat_api_cfn.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.heat_api_cfn.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::heat_api_cfn::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::heat_api_cfn::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.heat_engine.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.heat_engine.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::heat_engine::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::heat_engine::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.horizon.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.horizon.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::horizon::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::horizon::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.iscsid.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.iscsid.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::iscsid::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::iscsid::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.kernel.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.kernel.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::kernel::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::kernel::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.keystone.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.keystone.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::keystone::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::keystone::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.memcached.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.memcached.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::memcached::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::memcached::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.mongodb_disabled.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.mongodb_disabled.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::mongodb_disabled::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::mongodb_disabled::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.mysql.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.mysql.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::mysql::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::mysql::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.mysql_client.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.mysql_client.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::mysql_client::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::mysql_client::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.neutron_api.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.neutron_api.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::neutron_api::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::neutron_api::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.neutron_plugin_ml2.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.neutron_plugin_ml2.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::neutron_plugin_ml2::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::neutron_plugin_ml2::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.neutron_dhcp.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.neutron_dhcp.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::neutron_dhcp::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::neutron_dhcp::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.neutron_l3.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.neutron_l3.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::neutron_l3::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::neutron_l3::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.neutron_metadata.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.neutron_metadata.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::neutron_metadata::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::neutron_metadata::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.neutron_ovs_agent.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.neutron_ovs_agent.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::neutron_ovs_agent::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::neutron_ovs_agent::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.nova_api.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.nova_api.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::nova_api::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::nova_api::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.nova_conductor.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.nova_conductor.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::nova_conductor::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::nova_conductor::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.nova_consoleauth.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.nova_consoleauth.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::nova_consoleauth::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::nova_consoleauth::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.nova_metadata.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.nova_metadata.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::nova_metadata::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::nova_metadata::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.nova_placement.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.nova_placement.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::nova_placement::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::nova_placement::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.nova_scheduler.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.nova_scheduler.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::nova_scheduler::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::nova_scheduler::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.nova_vnc_proxy.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.nova_vnc_proxy.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::nova_vnc_proxy::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::nova_vnc_proxy::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.ntp.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.ntp.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::ntp::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::ntp::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.logrotate_crond.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.logrotate_crond.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::logrotate_crond::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::logrotate_crond::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.pacemaker.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.pacemaker.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::pacemaker::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::pacemaker::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.panko_api.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.panko_api.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::panko_api::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::panko_api::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.oslo_messaging_rpc.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.oslo_messaging_rpc.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::oslo_messaging_rpc::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::oslo_messaging_rpc::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.oslo_messaging_notify.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.oslo_messaging_notify.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::oslo_messaging_notify::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::oslo_messaging_notify::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.redis.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.redis.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::redis::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::redis::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.sahara_api.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.sahara_api.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::sahara_api::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::sahara_api::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.sahara_engine.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.sahara_engine.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::sahara_engine::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::sahara_engine::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.snmp.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.snmp.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::snmp::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::snmp::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.sshd.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.sshd.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::sshd::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::sshd::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.swift_proxy.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.swift_proxy.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::swift_proxy::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::swift_proxy::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.swift_ringbuilder.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.swift_ringbuilder.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::swift_ringbuilder::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::swift_ringbuilder::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.swift_storage.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.swift_storage.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::swift_storage::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::swift_storage::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.timezone.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.timezone.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::timezone::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::timezone::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.tripleo_firewall.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.tripleo_firewall.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::tripleo_firewall::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::tripleo_firewall::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.tripleo_packages.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.tripleo_packages.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::tripleo_packages::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::tripleo_packages::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.tuned.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.tuned.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::tuned::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::tuned::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.xinetd.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.xinetd.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::xinetd::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::xinetd::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.ceph_client.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.ceph_client.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::ceph_client::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::ceph_client::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.ceilometer_agent_compute.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.ceilometer_agent_compute.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::ceilometer_agent_compute::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::ceilometer_agent_compute::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.nova_compute.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.nova_compute.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::nova_compute::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::nova_compute::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.nova_libvirt.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.nova_libvirt.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::nova_libvirt::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::nova_libvirt::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.nova_libvirt_guests.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.nova_libvirt_guests.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::nova_libvirt_guests::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::nova_libvirt_guests::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.nova_migration_target.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.nova_migration_target.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::nova_migration_target::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::nova_migration_target::haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo.ceph_osd.haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo.ceph_osd.haproxy_userlists in JSON backend",
> "Debug: hiera(): Looking up tripleo::ceph_osd::haproxy_endpoints in JSON backend",
> "Debug: hiera(): Looking up tripleo::ceph_osd::haproxy_userlists in JSON backend",
> "Debug: importing '/etc/puppet/modules/haproxy/manifests/backend.pp' in environment production",
> "Debug: Automatically imported haproxy::backend from haproxy/backend into production",
> "Debug: importing '/etc/puppet/modules/haproxy/manifests/globals.pp' in environment production",
> "Debug: Automatically imported haproxy::globals from haproxy/globals into production",
> "Debug: hiera(): Looking up haproxy::globals::sort_options_alphabetic in JSON backend",
> "Debug: Scope(Haproxy::Listen[haproxy.stats]): Retrieving template haproxy/haproxy_listen_block.erb",
> "Debug: template[/etc/puppet/modules/haproxy/templates/haproxy_listen_block.erb]: Bound template variables for /etc/puppet/modules/haproxy/templates/haproxy_listen_block.erb in 0.00 seconds",
> "Debug: Scope(Haproxy::Listen[haproxy.stats]): Retrieving template haproxy/fragments/_bind.erb",
> "Debug: template[/etc/puppet/modules/haproxy/templates/fragments/_bind.erb]: Bound template variables for /etc/puppet/modules/haproxy/templates/fragments/_bind.erb in 0.00 seconds",
> "Debug: template[/etc/puppet/modules/haproxy/templates/fragments/_bind.erb]: Interpolated template /etc/puppet/modules/haproxy/templates/fragments/_bind.erb in 0.01 seconds",
> "Debug: Scope(Haproxy::Listen[haproxy.stats]): Retrieving template haproxy/fragments/_mode.erb",
> "Debug: template[/etc/puppet/modules/haproxy/templates/fragments/_mode.erb]: Bound template variables for /etc/puppet/modules/haproxy/templates/fragments/_mode.erb in 0.00 seconds",
> "Debug: template[/etc/puppet/modules/haproxy/templates/fragments/_mode.erb]: Interpolated template /etc/puppet/modules/haproxy/templates/fragments/_mode.erb in 0.00 seconds",
> "Debug: Scope(Haproxy::Listen[haproxy.stats]): Retrieving template haproxy/fragments/_options.erb",
> "Debug: template[/etc/puppet/modules/haproxy/templates/fragments/_options.erb]: Bound template variables for /etc/puppet/modules/haproxy/templates/fragments/_options.erb in 0.00 seconds",
> "Debug: template[/etc/puppet/modules/haproxy/templates/fragments/_options.erb]: Interpolated template /etc/puppet/modules/haproxy/templates/fragments/_options.erb in 0.00 seconds",
> "Debug: template[/etc/puppet/modules/haproxy/templates/haproxy_listen_block.erb]: Interpolated template /etc/puppet/modules/haproxy/templates/haproxy_listen_block.erb in 0.01 seconds",
> "Debug: importing '/etc/puppet/modules/concat/manifests/init.pp' in environment production",
> "Debug: importing '/etc/puppet/modules/concat/manifests/fragment.pp' in environment production",
> "Debug: Automatically imported concat::fragment from concat/fragment into production",
> "Debug: Tripleo::Haproxy::Endpoint[keystone_admin]: Adding default for haproxy_listen_bind_param",
> "Debug: Tripleo::Haproxy::Endpoint[keystone_admin]: Adding default for public_certificate",
> "Debug: Tripleo::Haproxy::Endpoint[keystone_admin]: Adding default for use_internal_certificates",
> "Debug: Tripleo::Haproxy::Endpoint[keystone_admin]: Adding default for internal_certificates_specs",
> "Debug: Tripleo::Haproxy::Endpoint[keystone_admin]: Adding default for manage_firewall",
> "Debug: hiera(): Looking up tripleo::haproxy::keystone_admin::options in JSON backend",
> "Debug: Tripleo::Haproxy::Endpoint[keystone_public]: Adding default for haproxy_listen_bind_param",
> "Debug: Tripleo::Haproxy::Endpoint[keystone_public]: Adding default for public_certificate",
> "Debug: Tripleo::Haproxy::Endpoint[keystone_public]: Adding default for use_internal_certificates",
> "Debug: Tripleo::Haproxy::Endpoint[keystone_public]: Adding default for internal_certificates_specs",
> "Debug: Tripleo::Haproxy::Endpoint[keystone_public]: Adding default for manage_firewall",
> "Debug: hiera(): Looking up tripleo::haproxy::keystone_public::options in JSON backend",
> "Debug: Tripleo::Haproxy::Endpoint[neutron]: Adding default for haproxy_listen_bind_param",
> "Debug: Tripleo::Haproxy::Endpoint[neutron]: Adding default for public_certificate",
> "Debug: Tripleo::Haproxy::Endpoint[neutron]: Adding default for use_internal_certificates",
> "Debug: Tripleo::Haproxy::Endpoint[neutron]: Adding default for internal_certificates_specs",
> "Debug: Tripleo::Haproxy::Endpoint[neutron]: Adding default for listen_options",
> "Debug: Tripleo::Haproxy::Endpoint[neutron]: Adding default for manage_firewall",
> "Debug: hiera(): Looking up tripleo::haproxy::neutron::options in JSON backend",
> "Debug: Tripleo::Haproxy::Endpoint[cinder]: Adding default for haproxy_listen_bind_param",
> "Debug: Tripleo::Haproxy::Endpoint[cinder]: Adding default for public_certificate",
> "Debug: Tripleo::Haproxy::Endpoint[cinder]: Adding default for use_internal_certificates",
> "Debug: Tripleo::Haproxy::Endpoint[cinder]: Adding default for internal_certificates_specs",
> "Debug: Tripleo::Haproxy::Endpoint[cinder]: Adding default for listen_options",
> "Debug: Tripleo::Haproxy::Endpoint[cinder]: Adding default for manage_firewall",
> "Debug: hiera(): Looking up tripleo::haproxy::cinder::options in JSON backend",
> "Debug: Tripleo::Haproxy::Endpoint[sahara]: Adding default for haproxy_listen_bind_param",
> "Debug: Tripleo::Haproxy::Endpoint[sahara]: Adding default for member_options",
> "Debug: Tripleo::Haproxy::Endpoint[sahara]: Adding default for public_certificate",
> "Debug: Tripleo::Haproxy::Endpoint[sahara]: Adding default for use_internal_certificates",
> "Debug: Tripleo::Haproxy::Endpoint[sahara]: Adding default for internal_certificates_specs",
> "Debug: Tripleo::Haproxy::Endpoint[sahara]: Adding default for listen_options",
> "Debug: Tripleo::Haproxy::Endpoint[sahara]: Adding default for manage_firewall",
> "Debug: hiera(): Looking up tripleo::haproxy::sahara::options in JSON backend",
> "Debug: Tripleo::Haproxy::Endpoint[glance_api]: Adding default for haproxy_listen_bind_param",
> "Debug: Tripleo::Haproxy::Endpoint[glance_api]: Adding default for public_certificate",
> "Debug: Tripleo::Haproxy::Endpoint[glance_api]: Adding default for use_internal_certificates",
> "Debug: Tripleo::Haproxy::Endpoint[glance_api]: Adding default for internal_certificates_specs",
> "Debug: Tripleo::Haproxy::Endpoint[glance_api]: Adding default for manage_firewall",
> "Debug: hiera(): Looking up tripleo::haproxy::glance_api::options in JSON backend",
> "Debug: Tripleo::Haproxy::Endpoint[nova_osapi]: Adding default for haproxy_listen_bind_param",
> "Debug: Tripleo::Haproxy::Endpoint[nova_osapi]: Adding default for public_certificate",
> "Debug: Tripleo::Haproxy::Endpoint[nova_osapi]: Adding default for use_internal_certificates",
> "Debug: Tripleo::Haproxy::Endpoint[nova_osapi]: Adding default for internal_certificates_specs",
> "Debug: Tripleo::Haproxy::Endpoint[nova_osapi]: Adding default for listen_options",
> "Debug: Tripleo::Haproxy::Endpoint[nova_osapi]: Adding default for manage_firewall",
> "Debug: hiera(): Looking up tripleo::haproxy::nova_osapi::options in JSON backend",
> "Debug: Tripleo::Haproxy::Endpoint[nova_placement]: Adding default for haproxy_listen_bind_param",
> "Debug: Tripleo::Haproxy::Endpoint[nova_placement]: Adding default for public_certificate",
> "Debug: Tripleo::Haproxy::Endpoint[nova_placement]: Adding default for use_internal_certificates",
> "Debug: Tripleo::Haproxy::Endpoint[nova_placement]: Adding default for internal_certificates_specs",
> "Debug: Tripleo::Haproxy::Endpoint[nova_placement]: Adding default for listen_options",
> "Debug: Tripleo::Haproxy::Endpoint[nova_placement]: Adding default for manage_firewall",
> "Debug: hiera(): Looking up tripleo::haproxy::nova_placement::options in JSON backend",
> "Debug: Tripleo::Haproxy::Endpoint[nova_metadata]: Adding default for haproxy_listen_bind_param",
> "Debug: Tripleo::Haproxy::Endpoint[nova_metadata]: Adding default for public_certificate",
> "Debug: Tripleo::Haproxy::Endpoint[nova_metadata]: Adding default for use_internal_certificates",
> "Debug: Tripleo::Haproxy::Endpoint[nova_metadata]: Adding default for internal_certificates_specs",
> "Debug: Tripleo::Haproxy::Endpoint[nova_metadata]: Adding default for listen_options",
> "Debug: Tripleo::Haproxy::Endpoint[nova_metadata]: Adding default for manage_firewall",
> "Debug: hiera(): Looking up tripleo::haproxy::nova_metadata::options in JSON backend",
> "Debug: Tripleo::Haproxy::Endpoint[nova_novncproxy]: Adding default for haproxy_listen_bind_param",
> "Debug: Tripleo::Haproxy::Endpoint[nova_novncproxy]: Adding default for public_certificate",
> "Debug: Tripleo::Haproxy::Endpoint[nova_novncproxy]: Adding default for use_internal_certificates",
> "Debug: Tripleo::Haproxy::Endpoint[nova_novncproxy]: Adding default for internal_certificates_specs",
> "Debug: Tripleo::Haproxy::Endpoint[nova_novncproxy]: Adding default for manage_firewall",
> "Debug: hiera(): Looking up tripleo::haproxy::nova_novncproxy::options in JSON backend",
> "Debug: Tripleo::Haproxy::Endpoint[aodh]: Adding default for haproxy_listen_bind_param",
> "Debug: Tripleo::Haproxy::Endpoint[aodh]: Adding default for public_certificate",
> "Debug: Tripleo::Haproxy::Endpoint[aodh]: Adding default for use_internal_certificates",
> "Debug: Tripleo::Haproxy::Endpoint[aodh]: Adding default for internal_certificates_specs",
> "Debug: Tripleo::Haproxy::Endpoint[aodh]: Adding default for listen_options",
> "Debug: Tripleo::Haproxy::Endpoint[aodh]: Adding default for manage_firewall",
> "Debug: hiera(): Looking up tripleo::haproxy::aodh::options in JSON backend",
> "Debug: Tripleo::Haproxy::Endpoint[panko]: Adding default for haproxy_listen_bind_param",
> "Debug: Tripleo::Haproxy::Endpoint[panko]: Adding default for public_certificate",
> "Debug: Tripleo::Haproxy::Endpoint[panko]: Adding default for use_internal_certificates",
> "Debug: Tripleo::Haproxy::Endpoint[panko]: Adding default for internal_certificates_specs",
> "Debug: Tripleo::Haproxy::Endpoint[panko]: Adding default for listen_options",
> "Debug: Tripleo::Haproxy::Endpoint[panko]: Adding default for manage_firewall",
> "Debug: hiera(): Looking up tripleo::haproxy::panko::options in JSON backend",
> "Debug: Tripleo::Haproxy::Endpoint[gnocchi]: Adding default for haproxy_listen_bind_param",
> "Debug: Tripleo::Haproxy::Endpoint[gnocchi]: Adding default for public_certificate",
> "Debug: Tripleo::Haproxy::Endpoint[gnocchi]: Adding default for use_internal_certificates",
> "Debug: Tripleo::Haproxy::Endpoint[gnocchi]: Adding default for internal_certificates_specs",
> "Debug: Tripleo::Haproxy::Endpoint[gnocchi]: Adding default for listen_options",
> "Debug: Tripleo::Haproxy::Endpoint[gnocchi]: Adding default for manage_firewall",
> "Debug: hiera(): Looking up tripleo::haproxy::gnocchi::options in JSON backend",
> "Debug: Tripleo::Haproxy::Endpoint[swift_proxy_server]: Adding default for haproxy_listen_bind_param",
> "Debug: Tripleo::Haproxy::Endpoint[swift_proxy_server]: Adding default for public_certificate",
> "Debug: Tripleo::Haproxy::Endpoint[swift_proxy_server]: Adding default for use_internal_certificates",
> "Debug: Tripleo::Haproxy::Endpoint[swift_proxy_server]: Adding default for internal_certificates_specs",
> "Debug: Tripleo::Haproxy::Endpoint[swift_proxy_server]: Adding default for manage_firewall",
> "Debug: hiera(): Looking up tripleo::haproxy::swift_proxy_server::options in JSON backend",
> "Debug: Tripleo::Haproxy::Endpoint[heat_api]: Adding default for haproxy_listen_bind_param",
> "Debug: Tripleo::Haproxy::Endpoint[heat_api]: Adding default for public_certificate",
> "Debug: Tripleo::Haproxy::Endpoint[heat_api]: Adding default for use_internal_certificates",
> "Debug: Tripleo::Haproxy::Endpoint[heat_api]: Adding default for internal_certificates_specs",
> "Debug: Tripleo::Haproxy::Endpoint[heat_api]: Adding default for manage_firewall",
> "Debug: hiera(): Looking up tripleo::haproxy::heat_api::options in JSON backend",
> "Debug: Tripleo::Haproxy::Endpoint[heat_cfn]: Adding default for haproxy_listen_bind_param",
> "Debug: Tripleo::Haproxy::Endpoint[heat_cfn]: Adding default for public_certificate",
> "Debug: Tripleo::Haproxy::Endpoint[heat_cfn]: Adding default for use_internal_certificates",
> "Debug: Tripleo::Haproxy::Endpoint[heat_cfn]: Adding default for internal_certificates_specs",
> "Debug: Tripleo::Haproxy::Endpoint[heat_cfn]: Adding default for manage_firewall",
> "Debug: hiera(): Looking up tripleo::haproxy::heat_cfn::options in JSON backend",
> "Debug: Scope(Haproxy::Listen[horizon]): Retrieving template haproxy/haproxy_listen_block.erb",
> "Debug: Scope(Haproxy::Listen[horizon]): Retrieving template haproxy/fragments/_bind.erb",
> "Debug: Scope(Haproxy::Listen[horizon]): Retrieving template haproxy/fragments/_mode.erb",
> "Debug: Scope(Haproxy::Listen[horizon]): Retrieving template haproxy/fragments/_options.erb",
> "Debug: Scope(Haproxy::Balancermember[horizon_172.17.1.17_controller-0.internalapi.localdomain]): Retrieving template haproxy/haproxy_balancermember.erb",
> "Debug: template[/etc/puppet/modules/haproxy/templates/haproxy_balancermember.erb]: Bound template variables for /etc/puppet/modules/haproxy/templates/haproxy_balancermember.erb in 0.00 seconds",
> "Debug: template[/etc/puppet/modules/haproxy/templates/haproxy_balancermember.erb]: Interpolated template /etc/puppet/modules/haproxy/templates/haproxy_balancermember.erb in 0.00 seconds",
> "Debug: Scope(Haproxy::Listen[mysql]): Retrieving template haproxy/haproxy_listen_block.erb",
> "Debug: Scope(Haproxy::Listen[mysql]): Retrieving template haproxy/fragments/_bind.erb",
> "Debug: template[/etc/puppet/modules/haproxy/templates/fragments/_bind.erb]: Interpolated template /etc/puppet/modules/haproxy/templates/fragments/_bind.erb in 0.00 seconds",
> "Debug: Scope(Haproxy::Listen[mysql]): Retrieving template haproxy/fragments/_mode.erb",
> "Debug: Scope(Haproxy::Listen[mysql]): Retrieving template haproxy/fragments/_options.erb",
> "Debug: template[/etc/puppet/modules/haproxy/templates/haproxy_listen_block.erb]: Interpolated template /etc/puppet/modules/haproxy/templates/haproxy_listen_block.erb in 0.00 seconds",
> "Debug: Scope(Haproxy::Balancermember[mysql-backup]): Retrieving template haproxy/haproxy_balancermember.erb",
> "Debug: hiera(): Looking up tripleo.aodh_api.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::aodh_api::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.aodh_evaluator.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::aodh_evaluator::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.aodh_listener.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::aodh_listener::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.aodh_notifier.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::aodh_notifier::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.ca_certs.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::ca_certs::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.ceilometer_agent_central.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::ceilometer_agent_central::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.ceilometer_agent_notification.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::ceilometer_agent_notification::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.ceph_mgr.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::ceph_mgr::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.ceph_mon.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::ceph_mon::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.certmonger_user.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::certmonger_user::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.cinder_api.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::cinder_api::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.cinder_backup.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::cinder_backup::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.cinder_scheduler.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::cinder_scheduler::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.cinder_volume.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::cinder_volume::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.clustercheck.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::clustercheck::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.container_image_prepare.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::container_image_prepare::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.docker.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::docker::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.glance_api.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::glance_api::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.glance_registry_disabled.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::glance_registry_disabled::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.gnocchi_api.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::gnocchi_api::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.gnocchi_metricd.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::gnocchi_metricd::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.gnocchi_statsd.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::gnocchi_statsd::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.haproxy.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::haproxy::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.heat_api.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::heat_api::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.heat_api_cloudwatch_disabled.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::heat_api_cloudwatch_disabled::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.heat_api_cfn.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::heat_api_cfn::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.heat_engine.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::heat_engine::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.horizon.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::horizon::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.iscsid.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::iscsid::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.kernel.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::kernel::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.keystone.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::keystone::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.memcached.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up memcached_network in JSON backend",
> "Debug: hiera(): Looking up internal_api_subnet in JSON backend",
> "Debug: hiera(): Looking up tripleo::memcached::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.mongodb_disabled.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::mongodb_disabled::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.mysql.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::mysql::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.mysql_client.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::mysql_client::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.neutron_api.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::neutron_api::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.neutron_plugin_ml2.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::neutron_plugin_ml2::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.neutron_dhcp.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::neutron_dhcp::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.neutron_l3.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::neutron_l3::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.neutron_metadata.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::neutron_metadata::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.neutron_ovs_agent.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::neutron_ovs_agent::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.nova_api.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::nova_api::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.nova_conductor.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::nova_conductor::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.nova_consoleauth.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::nova_consoleauth::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.nova_metadata.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::nova_metadata::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.nova_placement.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::nova_placement::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.nova_scheduler.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::nova_scheduler::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.nova_vnc_proxy.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::nova_vnc_proxy::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.ntp.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::ntp::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.logrotate_crond.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::logrotate_crond::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.pacemaker.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::pacemaker::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.panko_api.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::panko_api::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.oslo_messaging_rpc.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::oslo_messaging_rpc::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.oslo_messaging_notify.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::oslo_messaging_notify::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.redis.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::redis::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.sahara_api.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::sahara_api::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.sahara_engine.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::sahara_engine::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.snmp.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up snmpd_network in JSON backend",
> "Debug: hiera(): Looking up ctrlplane_subnet in JSON backend",
> "Debug: hiera(): Looking up tripleo::snmp::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.sshd.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::sshd::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.swift_proxy.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::swift_proxy::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.swift_ringbuilder.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::swift_ringbuilder::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.swift_storage.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::swift_storage::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.timezone.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::timezone::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.tripleo_firewall.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::tripleo_firewall::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.tripleo_packages.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::tripleo_packages::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.tuned.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::tuned::firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo.xinetd.firewall_rules in JSON backend",
> "Debug: hiera(): Looking up tripleo::xinetd::firewall_rules in JSON backend",
> "Debug: Scope(Haproxy::Listen[redis]): Retrieving template haproxy/haproxy_listen_block.erb",
> "Debug: Scope(Haproxy::Listen[redis]): Retrieving template haproxy/fragments/_bind.erb",
> "Debug: Scope(Haproxy::Listen[redis]): Retrieving template haproxy/fragments/_mode.erb",
> "Debug: Scope(Haproxy::Listen[redis]): Retrieving template haproxy/fragments/_options.erb",
> "Debug: Scope(Haproxy::Balancermember[redis]): Retrieving template haproxy/haproxy_balancermember.erb",
> "Debug: hiera(): Looking up haproxy_docker in JSON backend",
> "Debug: importing '/etc/puppet/modules/pacemaker/manifests/resource/ip.pp' in environment production",
> "Debug: Automatically imported pacemaker::resource::ip from pacemaker/resource/ip into production",
> "Debug: importing '/etc/puppet/modules/pacemaker/manifests/constraint/order.pp' in environment production",
> "Debug: Automatically imported pacemaker::constraint::order from pacemaker/constraint/order into production",
> "Debug: importing '/etc/puppet/modules/pacemaker/manifests/constraint/colocation.pp' in environment production",
> "Debug: Automatically imported pacemaker::constraint::colocation from pacemaker/constraint/colocation into production",
> "Debug: Scope(Haproxy::Config[haproxy]): Retrieving template haproxy/haproxy-base.cfg.erb",
> "Debug: template[/etc/puppet/modules/haproxy/templates/haproxy-base.cfg.erb]: Bound template variables for /etc/puppet/modules/haproxy/templates/haproxy-base.cfg.erb in 0.00 seconds",
> "Debug: template[/etc/puppet/modules/haproxy/templates/haproxy-base.cfg.erb]: Interpolated template /etc/puppet/modules/haproxy/templates/haproxy-base.cfg.erb in 0.00 seconds",
> "Debug: Scope(Haproxy::Listen[keystone_admin]): Retrieving template haproxy/haproxy_listen_block.erb",
> "Debug: Scope(Haproxy::Listen[keystone_admin]): Retrieving template haproxy/fragments/_bind.erb",
> "Debug: Scope(Haproxy::Listen[keystone_admin]): Retrieving template haproxy/fragments/_mode.erb",
> "Debug: Scope(Haproxy::Listen[keystone_admin]): Retrieving template haproxy/fragments/_options.erb",
> "Debug: Scope(Haproxy::Balancermember[keystone_admin]): Retrieving template haproxy/haproxy_balancermember.erb",
> "Debug: Scope(Haproxy::Listen[keystone_public]): Retrieving template haproxy/haproxy_listen_block.erb",
> "Debug: Scope(Haproxy::Listen[keystone_public]): Retrieving template haproxy/fragments/_bind.erb",
> "Debug: Scope(Haproxy::Listen[keystone_public]): Retrieving template haproxy/fragments/_mode.erb",
> "Debug: Scope(Haproxy::Listen[keystone_public]): Retrieving template haproxy/fragments/_options.erb",
> "Debug: Scope(Haproxy::Balancermember[keystone_public]): Retrieving template haproxy/haproxy_balancermember.erb",
> "Debug: Scope(Haproxy::Listen[neutron]): Retrieving template haproxy/haproxy_listen_block.erb",
> "Debug: Scope(Haproxy::Listen[neutron]): Retrieving template haproxy/fragments/_bind.erb",
> "Debug: Scope(Haproxy::Listen[neutron]): Retrieving template haproxy/fragments/_mode.erb",
> "Debug: Scope(Haproxy::Listen[neutron]): Retrieving template haproxy/fragments/_options.erb",
> "Debug: Scope(Haproxy::Balancermember[neutron]): Retrieving template haproxy/haproxy_balancermember.erb",
> "Debug: Scope(Haproxy::Listen[cinder]): Retrieving template haproxy/haproxy_listen_block.erb",
> "Debug: Scope(Haproxy::Listen[cinder]): Retrieving template haproxy/fragments/_bind.erb",
> "Debug: Scope(Haproxy::Listen[cinder]): Retrieving template haproxy/fragments/_mode.erb",
> "Debug: Scope(Haproxy::Listen[cinder]): Retrieving template haproxy/fragments/_options.erb",
> "Debug: Scope(Haproxy::Balancermember[cinder]): Retrieving template haproxy/haproxy_balancermember.erb",
> "Debug: Scope(Haproxy::Listen[sahara]): Retrieving template haproxy/haproxy_listen_block.erb",
> "Debug: Scope(Haproxy::Listen[sahara]): Retrieving template haproxy/fragments/_bind.erb",
> "Debug: Scope(Haproxy::Listen[sahara]): Retrieving template haproxy/fragments/_mode.erb",
> "Debug: Scope(Haproxy::Listen[sahara]): Retrieving template haproxy/fragments/_options.erb",
> "Debug: Scope(Haproxy::Balancermember[sahara]): Retrieving template haproxy/haproxy_balancermember.erb",
> "Debug: Scope(Haproxy::Listen[glance_api]): Retrieving template haproxy/haproxy_listen_block.erb",
> "Debug: Scope(Haproxy::Listen[glance_api]): Retrieving template haproxy/fragments/_bind.erb",
> "Debug: Scope(Haproxy::Listen[glance_api]): Retrieving template haproxy/fragments/_mode.erb",
> "Debug: Scope(Haproxy::Listen[glance_api]): Retrieving template haproxy/fragments/_options.erb",
> "Debug: Scope(Haproxy::Balancermember[glance_api]): Retrieving template haproxy/haproxy_balancermember.erb",
> "Debug: Scope(Haproxy::Listen[nova_osapi]): Retrieving template haproxy/haproxy_listen_block.erb",
> "Debug: Scope(Haproxy::Listen[nova_osapi]): Retrieving template haproxy/fragments/_bind.erb",
> "Debug: Scope(Haproxy::Listen[nova_osapi]): Retrieving template haproxy/fragments/_mode.erb",
> "Debug: Scope(Haproxy::Listen[nova_osapi]): Retrieving template haproxy/fragments/_options.erb",
> "Debug: Scope(Haproxy::Balancermember[nova_osapi]): Retrieving template haproxy/haproxy_balancermember.erb",
> "Debug: Scope(Haproxy::Listen[nova_placement]): Retrieving template haproxy/haproxy_listen_block.erb",
> "Debug: Scope(Haproxy::Listen[nova_placement]): Retrieving template haproxy/fragments/_bind.erb",
> "Debug: Scope(Haproxy::Listen[nova_placement]): Retrieving template haproxy/fragments/_mode.erb",
> "Debug: Scope(Haproxy::Listen[nova_placement]): Retrieving template haproxy/fragments/_options.erb",
> "Debug: Scope(Haproxy::Balancermember[nova_placement]): Retrieving template haproxy/haproxy_balancermember.erb",
> "Debug: Scope(Haproxy::Listen[nova_metadata]): Retrieving template haproxy/haproxy_listen_block.erb",
> "Debug: Scope(Haproxy::Listen[nova_metadata]): Retrieving template haproxy/fragments/_bind.erb",
> "Debug: Scope(Haproxy::Listen[nova_metadata]): Retrieving template haproxy/fragments/_mode.erb",
> "Debug: Scope(Haproxy::Listen[nova_metadata]): Retrieving template haproxy/fragments/_options.erb",
> "Debug: Scope(Haproxy::Balancermember[nova_metadata]): Retrieving template haproxy/haproxy_balancermember.erb",
> "Debug: Scope(Haproxy::Listen[nova_novncproxy]): Retrieving template haproxy/haproxy_listen_block.erb",
> "Debug: Scope(Haproxy::Listen[nova_novncproxy]): Retrieving template haproxy/fragments/_bind.erb",
> "Debug: Scope(Haproxy::Listen[nova_novncproxy]): Retrieving template haproxy/fragments/_mode.erb",
> "Debug: Scope(Haproxy::Listen[nova_novncproxy]): Retrieving template haproxy/fragments/_options.erb",
> "Debug: Scope(Haproxy::Balancermember[nova_novncproxy]): Retrieving template haproxy/haproxy_balancermember.erb",
> "Debug: Scope(Haproxy::Listen[aodh]): Retrieving template haproxy/haproxy_listen_block.erb",
> "Debug: Scope(Haproxy::Listen[aodh]): Retrieving template haproxy/fragments/_bind.erb",
> "Debug: Scope(Haproxy::Listen[aodh]): Retrieving template haproxy/fragments/_mode.erb",
> "Debug: Scope(Haproxy::Listen[aodh]): Retrieving template haproxy/fragments/_options.erb",
> "Debug: Scope(Haproxy::Balancermember[aodh]): Retrieving template haproxy/haproxy_balancermember.erb",
> "Debug: Scope(Haproxy::Listen[panko]): Retrieving template haproxy/haproxy_listen_block.erb",
> "Debug: Scope(Haproxy::Listen[panko]): Retrieving template haproxy/fragments/_bind.erb",
> "Debug: Scope(Haproxy::Listen[panko]): Retrieving template haproxy/fragments/_mode.erb",
> "Debug: Scope(Haproxy::Listen[panko]): Retrieving template haproxy/fragments/_options.erb",
> "Debug: Scope(Haproxy::Balancermember[panko]): Retrieving template haproxy/haproxy_balancermember.erb",
> "Debug: Scope(Haproxy::Listen[gnocchi]): Retrieving template haproxy/haproxy_listen_block.erb",
> "Debug: Scope(Haproxy::Listen[gnocchi]): Retrieving template haproxy/fragments/_bind.erb",
> "Debug: Scope(Haproxy::Listen[gnocchi]): Retrieving template haproxy/fragments/_mode.erb",
> "Debug: Scope(Haproxy::Listen[gnocchi]): Retrieving template haproxy/fragments/_options.erb",
> "Debug: Scope(Haproxy::Balancermember[gnocchi]): Retrieving template haproxy/haproxy_balancermember.erb",
> "Debug: Scope(Haproxy::Listen[swift_proxy_server]): Retrieving template haproxy/haproxy_listen_block.erb",
> "Debug: Scope(Haproxy::Listen[swift_proxy_server]): Retrieving template haproxy/fragments/_bind.erb",
> "Debug: Scope(Haproxy::Listen[swift_proxy_server]): Retrieving template haproxy/fragments/_mode.erb",
> "Debug: Scope(Haproxy::Listen[swift_proxy_server]): Retrieving template haproxy/fragments/_options.erb",
> "Debug: Scope(Haproxy::Balancermember[swift_proxy_server]): Retrieving template haproxy/haproxy_balancermember.erb",
> "Debug: Scope(Haproxy::Listen[heat_api]): Retrieving template haproxy/haproxy_listen_block.erb",
> "Debug: Scope(Haproxy::Listen[heat_api]): Retrieving template haproxy/fragments/_bind.erb",
> "Debug: Scope(Haproxy::Listen[heat_api]): Retrieving template haproxy/fragments/_mode.erb",
> "Debug: Scope(Haproxy::Listen[heat_api]): Retrieving template haproxy/fragments/_options.erb",
> "Debug: Scope(Haproxy::Balancermember[heat_api]): Retrieving template haproxy/haproxy_balancermember.erb",
> "Debug: Scope(Haproxy::Listen[heat_cfn]): Retrieving template haproxy/haproxy_listen_block.erb",
> "Debug: Scope(Haproxy::Listen[heat_cfn]): Retrieving template haproxy/fragments/_bind.erb",
> "Debug: Scope(Haproxy::Listen[heat_cfn]): Retrieving template haproxy/fragments/_mode.erb",
> "Debug: Scope(Haproxy::Listen[heat_cfn]): Retrieving template haproxy/fragments/_options.erb",
> "Debug: Scope(Haproxy::Balancermember[heat_cfn]): Retrieving template haproxy/haproxy_balancermember.erb",
> "Debug: hiera(): Looking up pacemaker::resource::ip::deep_compare in JSON backend",
> "Debug: hiera(): Looking up pacemaker::resource::ip::update_settle_secs in JSON backend",
> "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_constraint[order-ip-192.168.24.7-haproxy-bundle] with 'before'",
> "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_constraint[colo-ip-192.168.24.7-haproxy-bundle] with 'before'",
> "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_constraint[order-ip-10.0.0.111-haproxy-bundle] with 'before'",
> "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_constraint[colo-ip-10.0.0.111-haproxy-bundle] with 'before'",
> "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_constraint[order-ip-172.17.1.10-haproxy-bundle] with 'before'",
> "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_constraint[colo-ip-172.17.1.10-haproxy-bundle] with 'before'",
> "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_constraint[order-ip-172.17.1.15-haproxy-bundle] with 'before'",
> "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_constraint[colo-ip-172.17.1.15-haproxy-bundle] with 'before'",
> "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_constraint[order-ip-172.17.3.21-haproxy-bundle] with 'before'",
> "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_constraint[colo-ip-172.17.3.21-haproxy-bundle] with 'before'",
> "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_constraint[order-ip-172.17.4.13-haproxy-bundle] with 'before'",
> "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_constraint[colo-ip-172.17.4.13-haproxy-bundle] with 'before'",
> "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_resource[ip-192.168.24.7] with 'before'",
> "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_resource[ip-10.0.0.111] with 'before'",
> "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_resource[ip-172.17.1.10] with 'before'",
> "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_resource[ip-172.17.1.15] with 'before'",
> "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_resource[ip-172.17.3.21] with 'before'",
> "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_resource[ip-172.17.4.13] with 'before'",
> "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_property[property-controller-0-haproxy-role] with 'before'",
> "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_bundle[haproxy-bundle] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 mysql_haproxy ipv4] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 mysql_haproxy ipv6] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 redis_haproxy ipv4] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 redis_haproxy ipv6] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 keystone_admin_haproxy ipv4] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 keystone_admin_haproxy ipv6] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 keystone_public_haproxy ipv4] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 keystone_public_haproxy ipv6] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 keystone_public_haproxy_ssl ipv4] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 keystone_public_haproxy_ssl ipv6] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 neutron_haproxy ipv4] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 neutron_haproxy ipv6] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 neutron_haproxy_ssl ipv4] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 neutron_haproxy_ssl ipv6] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 cinder_haproxy ipv4] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 cinder_haproxy ipv6] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 cinder_haproxy_ssl ipv4] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 cinder_haproxy_ssl ipv6] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 sahara_haproxy ipv4] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 sahara_haproxy ipv6] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 sahara_haproxy_ssl ipv4] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 sahara_haproxy_ssl ipv6] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 glance_api_haproxy ipv4] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 glance_api_haproxy ipv6] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 glance_api_haproxy_ssl ipv4] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 glance_api_haproxy_ssl ipv6] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 nova_osapi_haproxy ipv4] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 nova_osapi_haproxy ipv6] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 nova_osapi_haproxy_ssl ipv4] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 nova_osapi_haproxy_ssl ipv6] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 nova_placement_haproxy ipv4] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 nova_placement_haproxy ipv6] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 nova_placement_haproxy_ssl ipv4] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 nova_placement_haproxy_ssl ipv6] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 nova_metadata_haproxy ipv4] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 nova_metadata_haproxy ipv6] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 nova_novncproxy_haproxy ipv4] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 nova_novncproxy_haproxy ipv6] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 nova_novncproxy_haproxy_ssl ipv4] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 nova_novncproxy_haproxy_ssl ipv6] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 aodh_haproxy ipv4] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 aodh_haproxy ipv6] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 aodh_haproxy_ssl ipv4] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 aodh_haproxy_ssl ipv6] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 panko_haproxy ipv4] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 panko_haproxy ipv6] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 panko_haproxy_ssl ipv4] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 panko_haproxy_ssl ipv6] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 gnocchi_haproxy ipv4] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 gnocchi_haproxy ipv6] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 gnocchi_haproxy_ssl ipv4] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 gnocchi_haproxy_ssl ipv6] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 swift_proxy_server_haproxy ipv4] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 swift_proxy_server_haproxy ipv6] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 swift_proxy_server_haproxy_ssl ipv4] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 swift_proxy_server_haproxy_ssl ipv6] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 heat_api_haproxy ipv4] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 heat_api_haproxy ipv6] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 heat_api_haproxy_ssl ipv4] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 heat_api_haproxy_ssl ipv6] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 heat_cfn_haproxy ipv4] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 heat_cfn_haproxy ipv6] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 heat_cfn_haproxy_ssl ipv4] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[100 heat_cfn_haproxy_ssl ipv6] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[128 aodh-api ipv4] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[128 aodh-api ipv6] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[113 ceph_mgr ipv4] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[113 ceph_mgr ipv6] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[110 ceph_mon ipv4] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[110 ceph_mon ipv6] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[119 cinder ipv4] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[119 cinder ipv6] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[120 iscsi initiator ipv4] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[120 iscsi initiator ipv6] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[112 glance_api ipv4] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[112 glance_api ipv6] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[129 gnocchi-api ipv4] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[129 gnocchi-api ipv6] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[140 gnocchi-statsd ipv4] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[140 gnocchi-statsd ipv6] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[107 haproxy stats ipv4] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[107 haproxy stats ipv6] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[125 heat_api ipv4] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[125 heat_api ipv6] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[125 heat_cfn ipv4] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[125 heat_cfn ipv6] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[127 horizon ipv4] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[127 horizon ipv6] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[111 keystone ipv4] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[111 keystone ipv6] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[121 memcached ipv4] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[104 mysql galera-bundle ipv4] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[104 mysql galera-bundle ipv6] with 'before'",
> "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[114 neutron api ipv4] with 'before'",
> "Debug: Adding relationship from
Class[Tripleo::Firewall::Pre] to Firewall[114 neutron api ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[115 neutron dhcp input ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[115 neutron dhcp input ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[116 neutron dhcp output ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[116 neutron dhcp output ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[106 neutron_l3 vrrp ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[106 neutron_l3 vrrp ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[118 neutron vxlan networks ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[118 neutron vxlan networks ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[136 neutron gre networks ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[136 neutron gre networks ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[113 nova_api ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[113 nova_api ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[138 nova_placement ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[138 nova_placement ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[137 nova_vnc_proxy ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[137 nova_vnc_proxy ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[105 ntp ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[105 ntp ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[130 pacemaker tcp ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[130 pacemaker tcp ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[131 pacemaker udp ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[131 pacemaker udp ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[140 panko-api ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[140 panko-api ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[109 rabbitmq-bundle ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[109 rabbitmq-bundle ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[108 redis-bundle ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[108 redis-bundle ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[132 sahara ipv4] with 'before'", > "Debug: Adding relationship from 
Class[Tripleo::Firewall::Pre] to Firewall[132 sahara ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[122 swift proxy ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[122 swift proxy ipv6] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[123 swift storage ipv4] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Firewall[123 swift storage ipv6] with 'before'", > "Debug: Adding relationship from Firewall[100 mysql_haproxy ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 mysql_haproxy ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 redis_haproxy ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 redis_haproxy ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_admin_haproxy ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_admin_haproxy ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_public_haproxy ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_public_haproxy ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_public_haproxy_ssl ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_public_haproxy_ssl ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 neutron_haproxy ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 neutron_haproxy ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 neutron_haproxy_ssl ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 neutron_haproxy_ssl ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 cinder_haproxy ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 cinder_haproxy ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 cinder_haproxy_ssl ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 cinder_haproxy_ssl ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 sahara_haproxy ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 sahara_haproxy ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 sahara_haproxy_ssl ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 sahara_haproxy_ssl ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 glance_api_haproxy ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 glance_api_haproxy ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 
glance_api_haproxy_ssl ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 glance_api_haproxy_ssl ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy_ssl ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy_ssl ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_placement_haproxy ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_placement_haproxy ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_placement_haproxy_ssl ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_placement_haproxy_ssl ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_metadata_haproxy ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_metadata_haproxy ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy_ssl ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy_ssl ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 aodh_haproxy ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 aodh_haproxy ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 aodh_haproxy_ssl ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 aodh_haproxy_ssl ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 panko_haproxy ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 panko_haproxy ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 panko_haproxy_ssl ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 panko_haproxy_ssl ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 gnocchi_haproxy ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 gnocchi_haproxy ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 gnocchi_haproxy_ssl ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 gnocchi_haproxy_ssl ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy ipv4] to Class[Tripleo::Firewall::Post] 
with 'before'", > "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy_ssl ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy_ssl ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_api_haproxy ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_api_haproxy ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_api_haproxy_ssl ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_api_haproxy_ssl ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy_ssl ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy_ssl ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[128 aodh-api ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[128 aodh-api ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[113 ceph_mgr ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[113 ceph_mgr ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[110 ceph_mon ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[110 ceph_mon ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[119 cinder ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[119 cinder ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[120 iscsi initiator ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[120 iscsi initiator ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[112 glance_api ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[112 glance_api ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[129 gnocchi-api ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[129 gnocchi-api ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[140 gnocchi-statsd ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[140 gnocchi-statsd ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[107 haproxy stats ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[107 haproxy stats ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_api ipv4] to 
Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_api ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_cfn ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_cfn ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[127 horizon ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[127 horizon ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[111 keystone ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[111 keystone ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[121 memcached ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[104 mysql galera-bundle ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[104 mysql galera-bundle ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[114 neutron api ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[114 neutron api ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[115 neutron dhcp input ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[115 neutron dhcp input ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[116 neutron dhcp output ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[116 neutron dhcp output ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[106 neutron_l3 vrrp ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[106 neutron_l3 vrrp ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[118 neutron vxlan networks ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[118 neutron vxlan networks ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[136 neutron gre networks ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[136 neutron gre networks ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[113 nova_api ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[113 nova_api ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[138 nova_placement ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[138 nova_placement ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[137 nova_vnc_proxy ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[137 nova_vnc_proxy ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[105 ntp ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[105 ntp ipv6] to 
Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[130 pacemaker tcp ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[130 pacemaker tcp ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[131 pacemaker udp ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[131 pacemaker udp ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[140 panko-api ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[140 panko-api ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[109 rabbitmq-bundle ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[109 rabbitmq-bundle ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[108 redis-bundle ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[108 redis-bundle ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[132 sahara ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[132 sahara ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[122 swift proxy ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[122 swift proxy ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[123 swift storage ipv4] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[123 swift storage ipv6] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Service[pcsd] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Service[corosync] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Service[pacemaker] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Service[firewalld] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Service[iptables] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Service[ip6tables] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[000 accept related established rules ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[000 accept related established rules ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[001 accept all icmp ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[001 accept all icmp ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[002 accept all to lo interface ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[002 accept all to lo interface ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[003 accept ssh ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[003 accept ssh ipv6] to 
Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[004 accept ipv6 dhcpv6 ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[998 log all ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[998 log all ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[999 drop all ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[999 drop all ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 mysql_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 mysql_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 redis_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 redis_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_admin_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_admin_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_public_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_public_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_public_haproxy_ssl ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_public_haproxy_ssl ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 neutron_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 neutron_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 neutron_haproxy_ssl ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 neutron_haproxy_ssl ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 cinder_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 cinder_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 cinder_haproxy_ssl ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 cinder_haproxy_ssl ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 sahara_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 sahara_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 sahara_haproxy_ssl ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 sahara_haproxy_ssl ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 
glance_api_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 glance_api_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 glance_api_haproxy_ssl ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 glance_api_haproxy_ssl ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy_ssl ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy_ssl ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_placement_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_placement_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_placement_haproxy_ssl ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_placement_haproxy_ssl ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_metadata_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_metadata_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy_ssl ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy_ssl ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 aodh_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 aodh_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 aodh_haproxy_ssl ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 aodh_haproxy_ssl ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 panko_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 panko_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 panko_haproxy_ssl ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 panko_haproxy_ssl ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 gnocchi_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 gnocchi_haproxy ipv6] to 
Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 gnocchi_haproxy_ssl ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 gnocchi_haproxy_ssl ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy_ssl ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy_ssl ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_api_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_api_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_api_haproxy_ssl ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_api_haproxy_ssl ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy_ssl ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy_ssl ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[128 aodh-api ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[128 aodh-api ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 ceph_mgr ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 ceph_mgr ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[110 ceph_mon ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[110 ceph_mon ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[119 cinder ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[119 cinder ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[120 iscsi initiator ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[120 iscsi initiator ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[112 glance_api ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[112 glance_api ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[129 gnocchi-api ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[129 gnocchi-api ipv6] to 
Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 gnocchi-statsd ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 gnocchi-statsd ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[107 haproxy stats ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[107 haproxy stats ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_api ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_api ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_cfn ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_cfn ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[127 horizon ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[127 horizon ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[111 keystone ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[111 keystone ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[121 memcached ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[104 mysql galera-bundle ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[104 mysql galera-bundle ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[114 neutron api ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[114 neutron api ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[115 neutron dhcp input ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[115 neutron dhcp input ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[116 neutron dhcp output ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[116 neutron dhcp output ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[106 neutron_l3 vrrp ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[106 neutron_l3 vrrp ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[118 neutron vxlan networks ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[118 neutron vxlan networks ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[136 neutron gre networks ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[136 neutron gre networks ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 nova_api ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: 
Adding relationship from Firewall[113 nova_api ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[138 nova_placement ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[138 nova_placement ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[137 nova_vnc_proxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[137 nova_vnc_proxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[105 ntp ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[105 ntp ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[130 pacemaker tcp ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[130 pacemaker tcp ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[131 pacemaker udp ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[131 pacemaker udp ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 panko-api ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 panko-api ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[109 rabbitmq-bundle ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[109 rabbitmq-bundle ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[108 redis-bundle ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[108 redis-bundle ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[132 sahara ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[132 sahara ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[122 swift proxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[122 swift proxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[123 swift storage ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[123 swift storage ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[000 accept related established rules ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[000 accept related established rules ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[001 accept all icmp ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[001 accept all icmp ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[002 accept all to lo interface ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[002 accept all to lo interface ipv6] 
to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[003 accept ssh ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[003 accept ssh ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[004 accept ipv6 dhcpv6 ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[998 log all ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[998 log all ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[999 drop all ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[999 drop all ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 mysql_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 mysql_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 redis_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 redis_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_admin_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_admin_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_public_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_public_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_public_haproxy_ssl ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_public_haproxy_ssl ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 neutron_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 neutron_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 neutron_haproxy_ssl ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 neutron_haproxy_ssl ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 cinder_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 cinder_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 cinder_haproxy_ssl ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 cinder_haproxy_ssl ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 sahara_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 sahara_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 sahara_haproxy_ssl ipv4] to 
Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 sahara_haproxy_ssl ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 glance_api_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 glance_api_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 glance_api_haproxy_ssl ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 glance_api_haproxy_ssl ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy_ssl ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy_ssl ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_placement_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_placement_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_placement_haproxy_ssl ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_placement_haproxy_ssl ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_metadata_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_metadata_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy_ssl ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy_ssl ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 aodh_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 aodh_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 aodh_haproxy_ssl ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 aodh_haproxy_ssl ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 panko_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 panko_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 panko_haproxy_ssl ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 panko_haproxy_ssl ipv6] to 
Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 gnocchi_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 gnocchi_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 gnocchi_haproxy_ssl ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 gnocchi_haproxy_ssl ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy_ssl ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy_ssl ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 heat_api_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 heat_api_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 heat_api_haproxy_ssl ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 heat_api_haproxy_ssl ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy_ssl ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy_ssl ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[128 aodh-api ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[128 aodh-api ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[113 ceph_mgr ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[113 ceph_mgr ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[110 ceph_mon ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[110 ceph_mon ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[119 cinder ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[119 cinder ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[120 iscsi initiator ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[120 iscsi initiator ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[112 glance_api ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[112 glance_api ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[129 gnocchi-api ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[129 gnocchi-api ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[140 gnocchi-statsd ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[140 gnocchi-statsd ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[107 haproxy stats ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[107 haproxy stats ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[125 heat_api ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[125 heat_api ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[125 heat_cfn ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[125 heat_cfn ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[127 horizon ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[127 horizon ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[111 keystone ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[111 keystone ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[121 memcached ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[104 mysql galera-bundle ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[104 mysql galera-bundle ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[114 neutron api ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[114 neutron api ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[115 neutron dhcp input ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[115 neutron dhcp input ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[116 neutron dhcp output ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[116 neutron dhcp output ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[106 neutron_l3 vrrp ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[106 neutron_l3 vrrp ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[118 neutron vxlan networks ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[118 neutron vxlan networks ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[136 neutron gre networks ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[136 neutron gre networks ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[113 nova_api ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[113 nova_api ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[138 nova_placement ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[138 nova_placement ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[137 nova_vnc_proxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[137 nova_vnc_proxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[105 ntp ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[105 ntp ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[130 pacemaker tcp ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[130 pacemaker tcp ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[131 pacemaker udp ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[131 pacemaker udp ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[140 panko-api ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[140 panko-api ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[109 rabbitmq-bundle ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[109 rabbitmq-bundle ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[108 redis-bundle ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[108 redis-bundle ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[132 sahara ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[132 sahara ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[122 swift proxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[122 swift proxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[123 swift storage ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[123 swift storage ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[000 accept related established rules ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[000 accept related established rules ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[001 accept all icmp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[001 accept all icmp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[002 accept all to lo interface ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[002 accept all to lo interface ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[003 accept ssh ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[003 accept ssh ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[004 accept ipv6 dhcpv6 ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[998 log all ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[998 log all ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[999 drop all ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[999 drop all ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 mysql_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 mysql_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 redis_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 redis_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 keystone_admin_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 keystone_admin_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 keystone_public_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 keystone_public_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 keystone_public_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 keystone_public_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 neutron_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 neutron_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 neutron_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 neutron_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 cinder_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 cinder_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 cinder_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 cinder_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 sahara_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 sahara_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 sahara_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 sahara_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 glance_api_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 glance_api_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 glance_api_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 glance_api_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 nova_placement_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 nova_placement_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 nova_placement_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 nova_placement_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 nova_metadata_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 nova_metadata_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 aodh_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 aodh_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 aodh_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 aodh_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 panko_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 panko_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 panko_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 panko_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 gnocchi_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 gnocchi_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 gnocchi_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 gnocchi_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 heat_api_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 heat_api_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 heat_api_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 heat_api_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[128 aodh-api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[128 aodh-api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[113 ceph_mgr ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[113 ceph_mgr ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[110 ceph_mon ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[110 ceph_mon ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[119 cinder ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[119 cinder ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[120 iscsi initiator ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[120 iscsi initiator ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[112 glance_api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[112 glance_api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[129 gnocchi-api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[129 gnocchi-api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[140 gnocchi-statsd ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[140 gnocchi-statsd ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[107 haproxy stats ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[107 haproxy stats ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[125 heat_api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[125 heat_api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[125 heat_cfn ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[125 heat_cfn ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[127 horizon ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[127 horizon ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[111 keystone ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[111 keystone ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[121 memcached ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[104 mysql galera-bundle ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[104 mysql galera-bundle ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[114 neutron api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[114 neutron api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[115 neutron dhcp input ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[115 neutron dhcp input ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[116 neutron dhcp output ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[116 neutron dhcp output ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[106 neutron_l3 vrrp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[106 neutron_l3 vrrp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[118 neutron vxlan networks ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[118 neutron vxlan networks ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[136 neutron gre networks ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[136 neutron gre networks ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[113 nova_api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[113 nova_api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[138 nova_placement ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[138 nova_placement ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[137 nova_vnc_proxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[137 nova_vnc_proxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[105 ntp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[105 ntp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[130 pacemaker tcp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[130 pacemaker tcp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[131 pacemaker udp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[131 pacemaker udp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[140 panko-api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[140 panko-api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[109 rabbitmq-bundle ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[109 rabbitmq-bundle ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[108 redis-bundle ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[108 redis-bundle ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[132 sahara ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[132 sahara ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[122 swift proxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[122 swift proxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[123 swift storage ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[123 swift storage ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[000 accept related established rules ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[000 accept related established rules ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[001 accept all icmp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[001 accept all icmp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[002 accept all to lo interface ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[002 accept all to lo interface ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[003 accept ssh ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[003 accept ssh ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[004 accept ipv6 dhcpv6 ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[998 log all ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[998 log all ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[999 drop all ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[999 drop all ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 mysql_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 mysql_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 redis_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 redis_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 keystone_admin_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 keystone_admin_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 keystone_public_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 keystone_public_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 keystone_public_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 keystone_public_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 neutron_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 neutron_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 neutron_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 neutron_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 cinder_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 cinder_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 cinder_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 cinder_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 sahara_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 sahara_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 sahara_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 sahara_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 glance_api_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 glance_api_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 glance_api_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 glance_api_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 nova_placement_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 nova_placement_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 nova_placement_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 nova_placement_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 nova_metadata_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 nova_metadata_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 aodh_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 aodh_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 aodh_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 aodh_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 panko_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 panko_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 panko_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 panko_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 gnocchi_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 gnocchi_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 gnocchi_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 gnocchi_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 heat_api_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 heat_api_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 heat_api_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 heat_api_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[128 aodh-api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[128 aodh-api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[113 ceph_mgr ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[113 ceph_mgr ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[110 ceph_mon ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[110 ceph_mon ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[119 cinder ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[119 cinder ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[120 iscsi initiator ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[120 iscsi initiator ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[112 glance_api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[112 glance_api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[129 gnocchi-api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[129 gnocchi-api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[140 gnocchi-statsd ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[140 gnocchi-statsd ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[107 haproxy stats ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[107 haproxy stats ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[125 heat_api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[125 heat_api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[125 heat_cfn ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[125 heat_cfn ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[127 horizon ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[127 horizon ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[111 keystone ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[111 keystone ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[121 memcached ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[104 mysql galera-bundle ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[104 mysql galera-bundle ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[114 neutron api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[114 neutron api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[115 neutron dhcp input ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[115 neutron dhcp input ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[116 neutron dhcp output ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[116 neutron dhcp output ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[106 neutron_l3 vrrp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[106 neutron_l3 vrrp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[118 neutron vxlan networks ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[118 neutron vxlan networks ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[136 neutron gre networks ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[136 neutron gre networks ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[113 nova_api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[113 nova_api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[138 nova_placement ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[138 nova_placement ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[137 nova_vnc_proxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[137 nova_vnc_proxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[105 ntp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[105 ntp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[130 pacemaker tcp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[130 pacemaker tcp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[131 pacemaker udp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[131 pacemaker udp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[140 panko-api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[140 panko-api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[109 rabbitmq-bundle ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[109 rabbitmq-bundle ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[108 redis-bundle ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[108 redis-bundle ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[132 sahara ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[132 sahara ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[122 swift proxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[122 swift proxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[123 swift storage ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Firewall[123 swift storage ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'",
> "Debug: Adding relationship from Haproxy::Listen[haproxy.stats] to Exec[haproxy-reload] with 'notify'",
> "Debug: Adding relationship from Haproxy::Listen[horizon] to Exec[haproxy-reload] with 'notify'",
> "Debug: Adding relationship from Haproxy::Listen[mysql] to Exec[haproxy-reload] with 'notify'",
> "Debug: Adding relationship from Haproxy::Listen[redis] to Exec[haproxy-reload] with 'notify'",
> "Debug: Adding relationship from Haproxy::Listen[keystone_admin] to Exec[haproxy-reload] with 'notify'",
> "Debug: Adding relationship from Haproxy::Listen[keystone_public] to Exec[haproxy-reload] with 'notify'",
> "Debug: Adding relationship from Haproxy::Listen[neutron] to Exec[haproxy-reload] with 'notify'",
> "Debug: Adding relationship from Haproxy::Listen[cinder] to Exec[haproxy-reload] with 'notify'",
> "Debug: Adding relationship from Haproxy::Listen[sahara] to Exec[haproxy-reload] with 'notify'",
> "Debug: Adding relationship from Haproxy::Listen[glance_api] to Exec[haproxy-reload] with 'notify'",
> "Debug: Adding relationship from Haproxy::Listen[nova_osapi] to Exec[haproxy-reload] with 'notify'",
> "Debug: Adding relationship from Haproxy::Listen[nova_placement] to Exec[haproxy-reload] with 'notify'",
> "Debug: Adding relationship from Haproxy::Listen[nova_metadata] to Exec[haproxy-reload] with 'notify'",
> "Debug: Adding relationship from Haproxy::Listen[nova_novncproxy] to Exec[haproxy-reload] with 'notify'",
> "Debug: Adding relationship from Haproxy::Listen[aodh] to Exec[haproxy-reload] with 'notify'",
> "Debug: Adding relationship from Haproxy::Listen[panko] to Exec[haproxy-reload] with 'notify'",
> "Debug: Adding relationship from Haproxy::Listen[gnocchi] to Exec[haproxy-reload] with 'notify'",
> "Debug: Adding relationship from Haproxy::Listen[swift_proxy_server] to Exec[haproxy-reload] with 'notify'",
> "Debug: Adding relationship from Haproxy::Listen[heat_api] to Exec[haproxy-reload] with 'notify'",
> "Debug: Adding relationship from Haproxy::Listen[heat_cfn] to Exec[haproxy-reload] with 'notify'",
> "Debug: Adding relationship from Haproxy::Balancermember[horizon_172.17.1.17_controller-0.internalapi.localdomain] to Exec[haproxy-reload] with 'notify'",
> "Debug: Adding relationship from Haproxy::Balancermember[mysql-backup] to Exec[haproxy-reload] with 'notify'",
> "Debug: Adding relationship from Haproxy::Balancermember[redis] to Exec[haproxy-reload] with 'notify'",
> "Debug: Adding relationship from Haproxy::Balancermember[keystone_admin] to Exec[haproxy-reload] with 'notify'",
> "Debug: Adding relationship from Haproxy::Balancermember[keystone_public] to Exec[haproxy-reload] with 'notify'",
> "Debug: Adding relationship from Haproxy::Balancermember[neutron] to Exec[haproxy-reload] with 'notify'",
> "Debug: Adding relationship from Haproxy::Balancermember[cinder] to Exec[haproxy-reload] with 'notify'",
> "Debug: Adding relationship from Haproxy::Balancermember[sahara] to Exec[haproxy-reload] with 'notify'",
> "Debug: Adding relationship from Haproxy::Balancermember[glance_api] to Exec[haproxy-reload] with 'notify'",
> "Debug: Adding relationship from Haproxy::Balancermember[nova_osapi] to Exec[haproxy-reload] with 'notify'",
> "Debug: Adding relationship from Haproxy::Balancermember[nova_placement] to Exec[haproxy-reload] with 'notify'",
> "Debug: Adding relationship from Haproxy::Balancermember[nova_metadata] to Exec[haproxy-reload] with 'notify'",
> "Debug: Adding relationship from Haproxy::Balancermember[nova_novncproxy] to Exec[haproxy-reload] with 'notify'",
> "Debug: Adding relationship from Haproxy::Balancermember[aodh] to Exec[haproxy-reload] with 'notify'",
> "Debug: Adding relationship from Haproxy::Balancermember[panko] to Exec[haproxy-reload] with 'notify'",
> "Debug: Adding relationship from Haproxy::Balancermember[gnocchi] to Exec[haproxy-reload] with 'notify'",
> "Debug: Adding relationship from Haproxy::Balancermember[swift_proxy_server] to Exec[haproxy-reload] with 'notify'",
> "Debug: Adding relationship from Haproxy::Balancermember[heat_api] to Exec[haproxy-reload] with 'notify'",
> "Debug: Adding relationship from Haproxy::Balancermember[heat_cfn] to Exec[haproxy-reload] with 'notify'",
> "Debug: Adding relationship from Anchor[haproxy::haproxy::begin] to Haproxy::Install[haproxy] with 'before'",
> "Debug: Adding relationship from Haproxy::Install[haproxy] to Haproxy::Config[haproxy] with 'before'",
> "Debug: Adding relationship from Haproxy::Config[haproxy] to Haproxy::Service[haproxy] with 'notify'",
> "Debug: Adding relationship from Haproxy::Service[haproxy] to Anchor[haproxy::haproxy::end] with 'before'",
> "Debug: Adding relationship from Pacemaker::Resource::Ip[control_vip] to Pacemaker::Resource::Bundle[haproxy-bundle] with 'before'",
> "Debug: Adding relationship from Pacemaker::Resource::Bundle[haproxy-bundle] to Pacemaker::Constraint::Order[control_vip-then-haproxy] with 'before'",
> "Debug: Adding relationship from Pacemaker::Constraint::Order[control_vip-then-haproxy] to Pacemaker::Constraint::Colocation[control_vip-with-haproxy] with 'before'",
> "Debug: Adding relationship from Pacemaker::Resource::Ip[public_vip] to Pacemaker::Resource::Bundle[haproxy-bundle] with 'before'",
> "Debug: Adding relationship from Pacemaker::Resource::Bundle[haproxy-bundle] to Pacemaker::Constraint::Order[public_vip-then-haproxy] with 'before'",
> "Debug: Adding relationship from Pacemaker::Constraint::Order[public_vip-then-haproxy] to Pacemaker::Constraint::Colocation[public_vip-with-haproxy] with 'before'",
> "Debug: Adding relationship from Pacemaker::Resource::Ip[redis_vip] to Pacemaker::Resource::Bundle[haproxy-bundle] with 'before'",
> "Debug: Adding relationship from Pacemaker::Resource::Bundle[haproxy-bundle] to Pacemaker::Constraint::Order[redis_vip-then-haproxy] with 'before'",
> "Debug: Adding relationship from Pacemaker::Constraint::Order[redis_vip-then-haproxy] to Pacemaker::Constraint::Colocation[redis_vip-with-haproxy] with 'before'",
> "Debug: Adding relationship from Pacemaker::Resource::Ip[internal_api_vip] to Pacemaker::Resource::Bundle[haproxy-bundle] with 'before'",
> "Debug: Adding relationship from Pacemaker::Resource::Bundle[haproxy-bundle] to Pacemaker::Constraint::Order[internal_api_vip-then-haproxy] with 'before'",
> "Debug: Adding relationship from Pacemaker::Constraint::Order[internal_api_vip-then-haproxy] to Pacemaker::Constraint::Colocation[internal_api_vip-with-haproxy] with 'before'",
> "Debug: Adding relationship from Pacemaker::Resource::Ip[storage_vip] to Pacemaker::Resource::Bundle[haproxy-bundle] with 'before'",
> "Debug: Adding relationship from Pacemaker::Resource::Bundle[haproxy-bundle] to Pacemaker::Constraint::Order[storage_vip-then-haproxy] with 'before'",
> "Debug: Adding relationship from Pacemaker::Constraint::Order[storage_vip-then-haproxy] to Pacemaker::Constraint::Colocation[storage_vip-with-haproxy] with 'before'",
> "Debug: Adding relationship from Pacemaker::Resource::Ip[storage_mgmt_vip] to Pacemaker::Resource::Bundle[haproxy-bundle] with 'before'",
> "Debug: Adding relationship from Pacemaker::Resource::Bundle[haproxy-bundle] to Pacemaker::Constraint::Order[storage_mgmt_vip-then-haproxy] with 'before'",
> "Debug: Adding relationship from Pacemaker::Constraint::Order[storage_mgmt_vip-then-haproxy] to Pacemaker::Constraint::Colocation[storage_mgmt_vip-with-haproxy] with 'before'",
> "Notice: Compiled catalog for controller-0.localdomain in environment production in 4.95 seconds",
> "Debug: /Firewall[000 accept related established rules ipv4]: [validate]",
> "Debug: /Firewall[000 accept related established rules ipv6]: [validate]",
> "Debug: /Firewall[001 accept all icmp ipv4]: [validate]",
> "Debug: /Firewall[001 accept all icmp ipv6]: [validate]",
> "Debug: /Firewall[002 accept all to lo interface ipv4]: [validate]",
> "Debug: /Firewall[002 accept all to lo interface ipv6]: [validate]",
> "Debug: /Firewall[003 accept ssh ipv4]: [validate]",
> "Debug: /Firewall[003 accept ssh ipv6]: [validate]",
> "Debug: /Firewall[004 accept ipv6 dhcpv6 ipv6]: [validate]",
> "Debug: /Firewall[998 log all ipv4]: [validate]",
> "Debug: /Firewall[998 log all ipv6]: [validate]",
> "Debug: /Firewall[999 drop all ipv4]: [validate]",
> "Debug: /Firewall[999 drop all ipv6]: [validate]",
> "Debug: /Firewall[100 mysql_haproxy ipv4]: [validate]",
> "Debug: /Firewall[100 mysql_haproxy ipv6]: [validate]",
> "Debug: /Firewall[100 redis_haproxy ipv4]: [validate]",
> "Debug: /Firewall[100 redis_haproxy ipv6]: [validate]",
> "Debug: /Firewall[100 keystone_admin_haproxy ipv4]: [validate]",
> "Debug: /Firewall[100 keystone_admin_haproxy ipv6]: [validate]",
> "Debug: /Firewall[100 keystone_public_haproxy ipv4]: [validate]",
> "Debug: /Firewall[100 keystone_public_haproxy ipv6]: [validate]",
> "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv4]: [validate]",
> "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv6]: [validate]",
> "Debug: /Firewall[100 neutron_haproxy ipv4]: [validate]",
> "Debug: /Firewall[100 neutron_haproxy ipv6]: [validate]",
> "Debug: /Firewall[100 neutron_haproxy_ssl ipv4]: [validate]",
> "Debug: /Firewall[100 neutron_haproxy_ssl ipv6]: [validate]",
> "Debug: /Firewall[100 cinder_haproxy ipv4]: [validate]",
> "Debug: /Firewall[100 cinder_haproxy ipv6]: [validate]",
> "Debug: /Firewall[100 cinder_haproxy_ssl ipv4]: [validate]",
> "Debug: /Firewall[100 cinder_haproxy_ssl ipv6]: [validate]",
> "Debug: /Firewall[100 sahara_haproxy ipv4]: [validate]",
> "Debug: /Firewall[100 sahara_haproxy ipv6]: [validate]",
> "Debug: /Firewall[100 sahara_haproxy_ssl ipv4]: [validate]",
> "Debug: /Firewall[100 sahara_haproxy_ssl ipv6]: [validate]",
> "Debug: /Firewall[100 glance_api_haproxy ipv4]: [validate]",
> "Debug: /Firewall[100 glance_api_haproxy ipv6]: [validate]",
> "Debug: /Firewall[100 glance_api_haproxy_ssl ipv4]: [validate]",
> "Debug: /Firewall[100 glance_api_haproxy_ssl ipv6]: [validate]",
> "Debug: /Firewall[100 nova_osapi_haproxy ipv4]: [validate]",
> "Debug: /Firewall[100 nova_osapi_haproxy ipv6]: [validate]",
> "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv4]: [validate]",
> "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv6]: [validate]",
> "Debug: /Firewall[100 nova_placement_haproxy ipv4]: [validate]",
> "Debug: /Firewall[100 nova_placement_haproxy ipv6]: [validate]",
> "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv4]: [validate]",
> "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv6]: [validate]",
> "Debug: /Firewall[100 nova_metadata_haproxy ipv4]: [validate]",
> "Debug: /Firewall[100 nova_metadata_haproxy ipv6]: [validate]",
> "Debug: /Firewall[100 nova_novncproxy_haproxy ipv4]: [validate]",
> "Debug: /Firewall[100 nova_novncproxy_haproxy ipv6]: [validate]",
> "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv4]: [validate]",
> "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv6]: [validate]",
> "Debug: /Firewall[100 aodh_haproxy ipv4]: [validate]",
> "Debug: /Firewall[100 aodh_haproxy ipv6]: [validate]",
> "Debug: /Firewall[100 aodh_haproxy_ssl ipv4]: [validate]",
> "Debug: /Firewall[100 aodh_haproxy_ssl ipv6]: [validate]",
> "Debug: /Firewall[100 panko_haproxy ipv4]: [validate]",
> "Debug: /Firewall[100 panko_haproxy ipv6]: [validate]",
> "Debug: /Firewall[100 panko_haproxy_ssl ipv4]: [validate]",
> "Debug: /Firewall[100 panko_haproxy_ssl ipv6]: [validate]",
> "Debug: /Firewall[100 gnocchi_haproxy ipv4]: [validate]",
> "Debug: /Firewall[100 gnocchi_haproxy ipv6]: [validate]",
> "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv4]: [validate]",
> "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv6]: [validate]",
> "Debug: /Firewall[100 swift_proxy_server_haproxy ipv4]: [validate]",
> "Debug: /Firewall[100 swift_proxy_server_haproxy ipv6]: [validate]",
> "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv4]: [validate]",
> "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv6]: [validate]",
> "Debug: /Firewall[100 heat_api_haproxy ipv4]: [validate]",
> "Debug: /Firewall[100 heat_api_haproxy ipv6]: [validate]",
> "Debug: /Firewall[100 heat_api_haproxy_ssl ipv4]: [validate]",
> "Debug: /Firewall[100 heat_api_haproxy_ssl ipv6]: [validate]",
> "Debug: /Firewall[100 heat_cfn_haproxy ipv4]: [validate]",
> "Debug: /Firewall[100 heat_cfn_haproxy ipv6]: [validate]",
> "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv4]: [validate]",
> "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv6]: [validate]",
> "Debug: /Firewall[128 aodh-api ipv4]: [validate]",
> "Debug: /Firewall[128 aodh-api ipv6]: [validate]",
> "Debug: /Firewall[113 ceph_mgr ipv4]: [validate]",
> "Debug: /Firewall[113 ceph_mgr ipv6]: [validate]",
> "Debug: /Firewall[110 ceph_mon ipv4]: [validate]",
> "Debug: /Firewall[110 ceph_mon ipv6]: [validate]",
> "Debug: /Firewall[119 cinder ipv4]: [validate]",
> "Debug: /Firewall[119 cinder ipv6]: [validate]",
> "Debug: /Firewall[120 iscsi initiator ipv4]: [validate]",
> "Debug: /Firewall[120 iscsi initiator ipv6]: [validate]",
> "Debug: /Firewall[112 glance_api ipv4]: [validate]",
> "Debug: /Firewall[112 glance_api ipv6]: [validate]",
> "Debug: /Firewall[129 gnocchi-api ipv4]: [validate]",
> "Debug: /Firewall[129 gnocchi-api ipv6]: [validate]",
> "Debug: /Firewall[140 gnocchi-statsd ipv4]: [validate]",
> "Debug: /Firewall[140 gnocchi-statsd ipv6]: [validate]",
> "Debug: /Firewall[107 haproxy stats ipv4]: [validate]",
> "Debug: /Firewall[107 haproxy stats ipv6]: [validate]",
> "Debug: /Firewall[125 heat_api ipv4]: [validate]",
> "Debug: /Firewall[125 heat_api ipv6]: [validate]",
> "Debug: /Firewall[125 heat_cfn ipv4]: [validate]",
> "Debug: /Firewall[125 heat_cfn ipv6]: [validate]",
> "Debug: /Firewall[127 horizon ipv4]: [validate]",
> "Debug: /Firewall[127 horizon ipv6]: [validate]",
> "Debug: /Firewall[111 keystone ipv4]: [validate]",
> "Debug: /Firewall[111 keystone ipv6]: [validate]",
> "Debug: /Firewall[121 memcached ipv4]: [validate]",
> "Debug: /Firewall[104 mysql galera-bundle ipv4]: [validate]",
> "Debug: /Firewall[104 mysql galera-bundle ipv6]: [validate]",
> "Debug: /Firewall[114 neutron api ipv4]: [validate]",
> "Debug: /Firewall[114 neutron api ipv6]: [validate]",
> "Debug: /Firewall[115 neutron dhcp input ipv4]: [validate]",
> "Debug: /Firewall[115 neutron dhcp input ipv6]: [validate]",
> "Debug: /Firewall[116 neutron dhcp output ipv4]: [validate]",
> "Debug: /Firewall[116 neutron dhcp output ipv6]: [validate]",
> "Debug: /Firewall[106 neutron_l3 vrrp ipv4]: [validate]",
> "Debug: /Firewall[106 neutron_l3 vrrp ipv6]: [validate]",
> "Debug: /Firewall[118 neutron vxlan networks ipv4]: [validate]",
> "Debug: /Firewall[118 neutron vxlan networks ipv6]: [validate]",
> "Debug: /Firewall[136 neutron gre networks ipv4]: [validate]",
> "Debug: /Firewall[136 neutron gre networks ipv6]: [validate]",
> "Debug: /Firewall[113 nova_api ipv4]: [validate]",
> "Debug: /Firewall[113 nova_api ipv6]: [validate]",
> "Debug: /Firewall[138 nova_placement ipv4]: [validate]",
> "Debug: /Firewall[138 nova_placement ipv6]: [validate]",
> "Debug: /Firewall[137 nova_vnc_proxy ipv4]: [validate]",
> "Debug: /Firewall[137 nova_vnc_proxy ipv6]: [validate]",
> "Debug: /Firewall[105 ntp ipv4]: [validate]",
> "Debug: /Firewall[105 ntp ipv6]: [validate]",
> "Debug: /Firewall[130 pacemaker tcp ipv4]: [validate]",
> "Debug: /Firewall[130 pacemaker tcp ipv6]: [validate]",
> "Debug: /Firewall[131 pacemaker udp ipv4]: [validate]",
> "Debug: /Firewall[131 pacemaker udp ipv6]: [validate]",
> "Debug: /Firewall[140 panko-api ipv4]: [validate]",
> "Debug: /Firewall[140 panko-api ipv6]: [validate]",
> "Debug: /Firewall[109 rabbitmq-bundle ipv4]: [validate]",
> "Debug: /Firewall[109 rabbitmq-bundle ipv6]: [validate]",
> "Debug: /Firewall[108 redis-bundle ipv4]: [validate]",
> "Debug: /Firewall[108 redis-bundle ipv6]: [validate]",
> "Debug: /Firewall[132 sahara ipv4]: [validate]",
> "Debug: /Firewall[132 sahara ipv6]: [validate]",
> "Debug: /Firewall[122 swift proxy ipv4]: [validate]",
> "Debug: /Firewall[122 swift proxy ipv6]: [validate]",
> "Debug: /Firewall[123 swift storage ipv4]: [validate]",
> "Debug: /Firewall[123 swift storage ipv6]: [validate]",
> "Info: Applying configuration version '1537533308'",
> "Debug: /Stage[main]/Pacemaker::Service/Service[pcsd]/before: subscribes to Class[Tripleo::Firewall::Post]",
> "Debug: /Stage[main]/Pacemaker::Service/Service[corosync]/before: subscribes to Class[Tripleo::Firewall::Post]",
> "Debug: /Stage[main]/Pacemaker::Service/Service[pacemaker]/before: subscribes to Class[Tripleo::Firewall::Post]",
> "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_constraint[order-ip-192.168.24.7-haproxy-bundle]",
> "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_constraint[colo-ip-192.168.24.7-haproxy-bundle]",
> "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_constraint[order-ip-10.0.0.111-haproxy-bundle]",
> "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_constraint[colo-ip-10.0.0.111-haproxy-bundle]",
> "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_constraint[order-ip-172.17.1.10-haproxy-bundle]",
> "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_constraint[colo-ip-172.17.1.10-haproxy-bundle]",
> "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_constraint[order-ip-172.17.1.15-haproxy-bundle]",
> "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_constraint[colo-ip-172.17.1.15-haproxy-bundle]",
> "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_constraint[order-ip-172.17.3.21-haproxy-bundle]",
> "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_constraint[colo-ip-172.17.3.21-haproxy-bundle]",
> "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_constraint[order-ip-172.17.4.13-haproxy-bundle]",
> "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_constraint[colo-ip-172.17.4.13-haproxy-bundle]",
> "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_resource[ip-192.168.24.7]",
> "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_resource[ip-10.0.0.111]",
> "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_resource[ip-172.17.1.10]",
> "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_resource[ip-172.17.1.15]",
> "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_resource[ip-172.17.3.21]",
> "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_resource[ip-172.17.4.13]",
> "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_property[property-controller-0-haproxy-role]",
> "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_bundle[haproxy-bundle]",
> "Debug: /Stage[main]/Tripleo::Haproxy::Stats/Haproxy::Listen[haproxy.stats]/notify: subscribes to Exec[haproxy-reload]",
> "Debug: /Stage[main]/Tripleo::Haproxy::Horizon_endpoint/Haproxy::Listen[horizon]/notify: subscribes to Exec[haproxy-reload]",
> "Debug: /Stage[main]/Tripleo::Haproxy::Horizon_endpoint/Haproxy::Balancermember[horizon_172.17.1.17_controller-0.internalapi.localdomain]/notify: subscribes to Exec[haproxy-reload]",
> "Debug: /Stage[main]/Tripleo::Haproxy/Haproxy::Listen[mysql]/notify: subscribes to Exec[haproxy-reload]",
> "Debug: /Stage[main]/Tripleo::Haproxy/Haproxy::Balancermember[mysql-backup]/notify: subscribes to Exec[haproxy-reload]",
> "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 mysql_haproxy ipv4]",
> "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 mysql_haproxy ipv6]",
> "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 redis_haproxy ipv4]",
> "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 redis_haproxy ipv6]",
> "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100
keystone_admin_haproxy ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 keystone_admin_haproxy ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 keystone_public_haproxy ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 keystone_public_haproxy ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 keystone_public_haproxy_ssl ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 keystone_public_haproxy_ssl ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 neutron_haproxy ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 neutron_haproxy ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 neutron_haproxy_ssl ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 neutron_haproxy_ssl ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 cinder_haproxy ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 cinder_haproxy ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 cinder_haproxy_ssl ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 cinder_haproxy_ssl ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 sahara_haproxy ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 sahara_haproxy ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 sahara_haproxy_ssl ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 sahara_haproxy_ssl ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 glance_api_haproxy ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 glance_api_haproxy ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 glance_api_haproxy_ssl ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 glance_api_haproxy_ssl ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 nova_osapi_haproxy ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 nova_osapi_haproxy ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 nova_osapi_haproxy_ssl ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 nova_osapi_haproxy_ssl ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 nova_placement_haproxy ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 nova_placement_haproxy ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 nova_placement_haproxy_ssl ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 nova_placement_haproxy_ssl ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 nova_metadata_haproxy ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 nova_metadata_haproxy ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 nova_novncproxy_haproxy ipv4]", > 
"Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 nova_novncproxy_haproxy ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 nova_novncproxy_haproxy_ssl ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 nova_novncproxy_haproxy_ssl ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 aodh_haproxy ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 aodh_haproxy ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 aodh_haproxy_ssl ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 aodh_haproxy_ssl ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 panko_haproxy ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 panko_haproxy ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 panko_haproxy_ssl ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 panko_haproxy_ssl ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 gnocchi_haproxy ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 gnocchi_haproxy ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 gnocchi_haproxy_ssl ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 gnocchi_haproxy_ssl ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 swift_proxy_server_haproxy ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 swift_proxy_server_haproxy ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 swift_proxy_server_haproxy_ssl ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 swift_proxy_server_haproxy_ssl ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 heat_api_haproxy ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 heat_api_haproxy ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 heat_api_haproxy_ssl ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 heat_api_haproxy_ssl ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 heat_cfn_haproxy ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 heat_cfn_haproxy ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 heat_cfn_haproxy_ssl ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[100 heat_cfn_haproxy_ssl ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[128 aodh-api ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[128 aodh-api ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[113 ceph_mgr ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[113 ceph_mgr ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[110 ceph_mon ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[110 ceph_mon ipv6]", > "Debug: 
/Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[119 cinder ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[119 cinder ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[120 iscsi initiator ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[120 iscsi initiator ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[112 glance_api ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[112 glance_api ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[129 gnocchi-api ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[129 gnocchi-api ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[140 gnocchi-statsd ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[140 gnocchi-statsd ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[107 haproxy stats ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[107 haproxy stats ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[125 heat_api ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[125 heat_api ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[125 heat_cfn ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[125 heat_cfn ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[127 horizon ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[127 horizon ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[111 keystone ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[111 keystone ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[121 memcached ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[104 mysql galera-bundle ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[104 mysql galera-bundle ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[114 neutron api ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[114 neutron api ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[115 neutron dhcp input ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[115 neutron dhcp input ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[116 neutron dhcp output ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[116 neutron dhcp output ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[106 neutron_l3 vrrp ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[106 neutron_l3 vrrp ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[118 neutron vxlan networks ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[118 neutron vxlan networks ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[136 neutron gre networks ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to 
Firewall[136 neutron gre networks ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[113 nova_api ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[113 nova_api ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[138 nova_placement ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[138 nova_placement ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[137 nova_vnc_proxy ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[137 nova_vnc_proxy ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[105 ntp ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[105 ntp ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[130 pacemaker tcp ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[130 pacemaker tcp ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[131 pacemaker udp ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[131 pacemaker udp ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[140 panko-api ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[140 panko-api ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[109 rabbitmq-bundle ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[109 rabbitmq-bundle ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[108 redis-bundle ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[108 redis-bundle ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[132 sahara ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[132 sahara ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[122 swift proxy ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[122 swift proxy ipv6]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[123 swift storage ipv4]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Firewall[123 swift storage ipv6]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/require: subscribes to Package[iptables]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Service[firewalld]/before: subscribes to Package[iptables-services]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Service[firewalld]/before: subscribes to Service[iptables]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Service[firewalld]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Package[iptables-services]/before: subscribes to Service[iptables]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Exec[/usr/bin/systemctl daemon-reload]/require: subscribes to Package[iptables-services]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Exec[/usr/bin/systemctl daemon-reload]/subscribe: subscribes to Package[iptables-services]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Exec[/usr/bin/systemctl daemon-reload]/before: subscribes to Service[iptables]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Exec[/usr/bin/systemctl daemon-reload]/before: subscribes to Service[ip6tables]", > 
"Debug: /Stage[main]/Firewall::Linux::Redhat/Service[iptables]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Service[ip6tables]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Haproxy::Listen[redis]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Haproxy::Balancermember[redis]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Haproxy/Exec[haproxy-reload]/subscribe: subscribes to Class[Haproxy]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Pacemaker::Property[haproxy-role-controller-0]/before: subscribes to Pacemaker::Resource::Bundle[haproxy-bundle]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Pacemaker::Resource::Bundle[haproxy-bundle]/before: subscribes to Pacemaker::Constraint::Order[control_vip-then-haproxy]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Pacemaker::Resource::Bundle[haproxy-bundle]/before: subscribes to Pacemaker::Constraint::Order[public_vip-then-haproxy]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Pacemaker::Resource::Bundle[haproxy-bundle]/before: subscribes to Pacemaker::Constraint::Order[redis_vip-then-haproxy]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Pacemaker::Resource::Bundle[haproxy-bundle]/before: subscribes to Pacemaker::Constraint::Order[internal_api_vip-then-haproxy]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Pacemaker::Resource::Bundle[haproxy-bundle]/before: subscribes to Pacemaker::Constraint::Order[storage_vip-then-haproxy]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Pacemaker::Resource::Bundle[haproxy-bundle]/before: subscribes to Pacemaker::Constraint::Order[storage_mgmt_vip-then-haproxy]", > "Debug: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Config[haproxy]/notify: subscribes to Haproxy::Service[haproxy]", > "Debug: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Install[haproxy]/before: subscribes to Haproxy::Config[haproxy]", > "Debug: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Service[haproxy]/before: subscribes to Anchor[haproxy::haproxy::end]", > "Debug: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Anchor[haproxy::haproxy::begin]/before: subscribes to Haproxy::Install[haproxy]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_admin]/Haproxy::Listen[keystone_admin]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_admin]/Haproxy::Balancermember[keystone_admin]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Haproxy::Listen[keystone_public]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Haproxy::Balancermember[keystone_public]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Haproxy::Listen[neutron]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Haproxy::Balancermember[neutron]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Haproxy::Listen[cinder]/notify: subscribes to 
Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Haproxy::Balancermember[cinder]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Haproxy::Listen[sahara]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Haproxy::Balancermember[sahara]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Haproxy::Listen[glance_api]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Haproxy::Balancermember[glance_api]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Haproxy::Listen[nova_osapi]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Haproxy::Balancermember[nova_osapi]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Haproxy::Listen[nova_placement]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Haproxy::Balancermember[nova_placement]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_metadata]/Haproxy::Listen[nova_metadata]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_metadata]/Haproxy::Balancermember[nova_metadata]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Haproxy::Listen[nova_novncproxy]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Haproxy::Balancermember[nova_novncproxy]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Haproxy::Listen[aodh]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Haproxy::Balancermember[aodh]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Haproxy::Listen[panko]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Haproxy::Balancermember[panko]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Haproxy::Listen[gnocchi]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Haproxy::Balancermember[gnocchi]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Haproxy::Listen[swift_proxy_server]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Haproxy::Balancermember[swift_proxy_server]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Haproxy::Listen[heat_api]/notify: subscribes to Exec[haproxy-reload]", > "Debug: 
/Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Haproxy::Balancermember[heat_api]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Haproxy::Listen[heat_cfn]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Haproxy::Balancermember[heat_cfn]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]/Firewall[004 accept ipv6 dhcpv6 ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]/Firewall[004 accept ipv6 dhcpv6 ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]/Firewall[004 accept ipv6 dhcpv6 ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]/Firewall[004 accept ipv6 dhcpv6 ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all 
ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 mysql_haproxy]/Firewall[100 mysql_haproxy ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 mysql_haproxy]/Firewall[100 mysql_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 mysql_haproxy]/Firewall[100 mysql_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 mysql_haproxy]/Firewall[100 mysql_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 mysql_haproxy]/Firewall[100 mysql_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 mysql_haproxy]/Firewall[100 mysql_haproxy ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 mysql_haproxy]/Firewall[100 mysql_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 mysql_haproxy]/Firewall[100 mysql_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 mysql_haproxy]/Firewall[100 mysql_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 mysql_haproxy]/Firewall[100 mysql_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 redis_haproxy]/Firewall[100 redis_haproxy ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 redis_haproxy]/Firewall[100 redis_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 redis_haproxy]/Firewall[100 redis_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 redis_haproxy]/Firewall[100 redis_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 redis_haproxy]/Firewall[100 redis_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 redis_haproxy]/Firewall[100 redis_haproxy ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 redis_haproxy]/Firewall[100 
redis_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 redis_haproxy]/Firewall[100 redis_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 redis_haproxy]/Firewall[100 redis_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 redis_haproxy]/Firewall[100 redis_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_control_vip]/Pacemaker::Resource::Ip[control_vip]/before: subscribes to Pacemaker::Resource::Bundle[haproxy-bundle]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_control_vip]/Pacemaker::Constraint::Order[control_vip-then-haproxy]/before: subscribes to Pacemaker::Constraint::Colocation[control_vip-with-haproxy]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_public_vip]/Pacemaker::Resource::Ip[public_vip]/before: subscribes to Pacemaker::Resource::Bundle[haproxy-bundle]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_public_vip]/Pacemaker::Constraint::Order[public_vip-then-haproxy]/before: subscribes to Pacemaker::Constraint::Colocation[public_vip-with-haproxy]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_redis_vip]/Pacemaker::Resource::Ip[redis_vip]/before: subscribes to Pacemaker::Resource::Bundle[haproxy-bundle]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_redis_vip]/Pacemaker::Constraint::Order[redis_vip-then-haproxy]/before: subscribes to Pacemaker::Constraint::Colocation[redis_vip-with-haproxy]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_internal_api_vip]/Pacemaker::Resource::Ip[internal_api_vip]/before: subscribes to Pacemaker::Resource::Bundle[haproxy-bundle]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_internal_api_vip]/Pacemaker::Constraint::Order[internal_api_vip-then-haproxy]/before: subscribes to Pacemaker::Constraint::Colocation[internal_api_vip-with-haproxy]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_vip]/Pacemaker::Resource::Ip[storage_vip]/before: subscribes to Pacemaker::Resource::Bundle[haproxy-bundle]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_vip]/Pacemaker::Constraint::Order[storage_vip-then-haproxy]/before: subscribes to Pacemaker::Constraint::Colocation[storage_vip-with-haproxy]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_mgmt_vip]/Pacemaker::Resource::Ip[storage_mgmt_vip]/before: subscribes to Pacemaker::Resource::Bundle[haproxy-bundle]", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_mgmt_vip]/Pacemaker::Constraint::Order[storage_mgmt_vip-then-haproxy]/before: subscribes to Pacemaker::Constraint::Colocation[storage_mgmt_vip-with-haproxy]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_admin]/Tripleo::Firewall::Rule[100 keystone_admin_haproxy]/Firewall[100 keystone_admin_haproxy ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_admin]/Tripleo::Firewall::Rule[100 keystone_admin_haproxy]/Firewall[100 keystone_admin_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_admin]/Tripleo::Firewall::Rule[100 keystone_admin_haproxy]/Firewall[100 keystone_admin_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_admin]/Tripleo::Firewall::Rule[100 keystone_admin_haproxy]/Firewall[100 keystone_admin_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_admin]/Tripleo::Firewall::Rule[100 keystone_admin_haproxy]/Firewall[100 keystone_admin_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_admin]/Tripleo::Firewall::Rule[100 keystone_admin_haproxy]/Firewall[100 keystone_admin_haproxy ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_admin]/Tripleo::Firewall::Rule[100 keystone_admin_haproxy]/Firewall[100 keystone_admin_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_admin]/Tripleo::Firewall::Rule[100 keystone_admin_haproxy]/Firewall[100 keystone_admin_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_admin]/Tripleo::Firewall::Rule[100 keystone_admin_haproxy]/Firewall[100 keystone_admin_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_admin]/Tripleo::Firewall::Rule[100 keystone_admin_haproxy]/Firewall[100 keystone_admin_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy]/Firewall[100 keystone_public_haproxy ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy]/Firewall[100 keystone_public_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy]/Firewall[100 keystone_public_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy]/Firewall[100 keystone_public_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy]/Firewall[100 keystone_public_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy]/Firewall[100 keystone_public_haproxy ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy]/Firewall[100 keystone_public_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy]/Firewall[100 keystone_public_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy]/Firewall[100 keystone_public_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy]/Firewall[100 keystone_public_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl]/Firewall[100 keystone_public_haproxy_ssl ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl]/Firewall[100 keystone_public_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl]/Firewall[100 keystone_public_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl]/Firewall[100 keystone_public_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl]/Firewall[100 keystone_public_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl]/Firewall[100 keystone_public_haproxy_ssl ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl]/Firewall[100 keystone_public_haproxy_ssl ipv6]/before: subscribes to 
Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl]/Firewall[100 keystone_public_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl]/Firewall[100 keystone_public_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl]/Firewall[100 keystone_public_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy]/Firewall[100 neutron_haproxy ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy]/Firewall[100 neutron_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy]/Firewall[100 neutron_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy]/Firewall[100 neutron_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy]/Firewall[100 neutron_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy]/Firewall[100 neutron_haproxy ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy]/Firewall[100 neutron_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy]/Firewall[100 neutron_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy]/Firewall[100 neutron_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy]/Firewall[100 neutron_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy_ssl]/Firewall[100 neutron_haproxy_ssl ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy_ssl]/Firewall[100 neutron_haproxy_ssl ipv4]/before: subscribes to 
Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy_ssl]/Firewall[100 neutron_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy_ssl]/Firewall[100 neutron_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy_ssl]/Firewall[100 neutron_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy_ssl]/Firewall[100 neutron_haproxy_ssl ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy_ssl]/Firewall[100 neutron_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy_ssl]/Firewall[100 neutron_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy_ssl]/Firewall[100 neutron_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy_ssl]/Firewall[100 neutron_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy]/Firewall[100 cinder_haproxy ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy]/Firewall[100 cinder_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy]/Firewall[100 cinder_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy]/Firewall[100 cinder_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy]/Firewall[100 cinder_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy]/Firewall[100 cinder_haproxy ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy]/Firewall[100 cinder_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy]/Firewall[100 cinder_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy]/Firewall[100 cinder_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy]/Firewall[100 cinder_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy_ssl]/Firewall[100 cinder_haproxy_ssl ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy_ssl]/Firewall[100 cinder_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy_ssl]/Firewall[100 cinder_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy_ssl]/Firewall[100 cinder_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy_ssl]/Firewall[100 cinder_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy_ssl]/Firewall[100 cinder_haproxy_ssl ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy_ssl]/Firewall[100 cinder_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy_ssl]/Firewall[100 cinder_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy_ssl]/Firewall[100 cinder_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy_ssl]/Firewall[100 cinder_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy]/Firewall[100 sahara_haproxy ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy]/Firewall[100 sahara_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 
sahara_haproxy]/Firewall[100 sahara_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy]/Firewall[100 sahara_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy]/Firewall[100 sahara_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy]/Firewall[100 sahara_haproxy ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy]/Firewall[100 sahara_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy]/Firewall[100 sahara_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy]/Firewall[100 sahara_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy]/Firewall[100 sahara_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy_ssl]/Firewall[100 sahara_haproxy_ssl ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy_ssl]/Firewall[100 sahara_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy_ssl]/Firewall[100 sahara_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy_ssl]/Firewall[100 sahara_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy_ssl]/Firewall[100 sahara_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy_ssl]/Firewall[100 sahara_haproxy_ssl ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy_ssl]/Firewall[100 sahara_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy_ssl]/Firewall[100 sahara_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy_ssl]/Firewall[100 sahara_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy_ssl]/Firewall[100 sahara_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy]/Firewall[100 glance_api_haproxy ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy]/Firewall[100 glance_api_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy]/Firewall[100 glance_api_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy]/Firewall[100 glance_api_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy]/Firewall[100 glance_api_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy]/Firewall[100 glance_api_haproxy ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy]/Firewall[100 glance_api_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy]/Firewall[100 glance_api_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy]/Firewall[100 glance_api_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy]/Firewall[100 glance_api_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl]/Firewall[100 glance_api_haproxy_ssl ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl]/Firewall[100 glance_api_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl]/Firewall[100 glance_api_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > 
"Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl]/Firewall[100 glance_api_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl]/Firewall[100 glance_api_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl]/Firewall[100 glance_api_haproxy_ssl ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl]/Firewall[100 glance_api_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl]/Firewall[100 glance_api_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl]/Firewall[100 glance_api_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl]/Firewall[100 glance_api_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy]/Firewall[100 nova_osapi_haproxy ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy]/Firewall[100 nova_osapi_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy]/Firewall[100 nova_osapi_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy]/Firewall[100 nova_osapi_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy]/Firewall[100 nova_osapi_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy]/Firewall[100 nova_osapi_haproxy ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy]/Firewall[100 nova_osapi_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy]/Firewall[100 nova_osapi_haproxy ipv6]/before: subscribes to 
Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy]/Firewall[100 nova_osapi_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy]/Firewall[100 nova_osapi_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl]/Firewall[100 nova_osapi_haproxy_ssl ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl]/Firewall[100 nova_osapi_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl]/Firewall[100 nova_osapi_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl]/Firewall[100 nova_osapi_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl]/Firewall[100 nova_osapi_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl]/Firewall[100 nova_osapi_haproxy_ssl ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl]/Firewall[100 nova_osapi_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl]/Firewall[100 nova_osapi_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl]/Firewall[100 nova_osapi_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl]/Firewall[100 nova_osapi_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy]/Firewall[100 nova_placement_haproxy ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy]/Firewall[100 nova_placement_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy]/Firewall[100 nova_placement_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy]/Firewall[100 nova_placement_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy]/Firewall[100 nova_placement_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy]/Firewall[100 nova_placement_haproxy ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy]/Firewall[100 nova_placement_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy]/Firewall[100 nova_placement_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy]/Firewall[100 nova_placement_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy]/Firewall[100 nova_placement_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl]/Firewall[100 nova_placement_haproxy_ssl ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl]/Firewall[100 nova_placement_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl]/Firewall[100 nova_placement_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl]/Firewall[100 nova_placement_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl]/Firewall[100 nova_placement_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl]/Firewall[100 nova_placement_haproxy_ssl ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: 
/Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl]/Firewall[100 nova_placement_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl]/Firewall[100 nova_placement_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl]/Firewall[100 nova_placement_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl]/Firewall[100 nova_placement_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_metadata]/Tripleo::Firewall::Rule[100 nova_metadata_haproxy]/Firewall[100 nova_metadata_haproxy ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_metadata]/Tripleo::Firewall::Rule[100 nova_metadata_haproxy]/Firewall[100 nova_metadata_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_metadata]/Tripleo::Firewall::Rule[100 nova_metadata_haproxy]/Firewall[100 nova_metadata_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_metadata]/Tripleo::Firewall::Rule[100 nova_metadata_haproxy]/Firewall[100 nova_metadata_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_metadata]/Tripleo::Firewall::Rule[100 nova_metadata_haproxy]/Firewall[100 nova_metadata_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_metadata]/Tripleo::Firewall::Rule[100 nova_metadata_haproxy]/Firewall[100 nova_metadata_haproxy ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_metadata]/Tripleo::Firewall::Rule[100 nova_metadata_haproxy]/Firewall[100 nova_metadata_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_metadata]/Tripleo::Firewall::Rule[100 nova_metadata_haproxy]/Firewall[100 nova_metadata_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_metadata]/Tripleo::Firewall::Rule[100 nova_metadata_haproxy]/Firewall[100 nova_metadata_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_metadata]/Tripleo::Firewall::Rule[100 nova_metadata_haproxy]/Firewall[100 nova_metadata_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy]/Firewall[100 nova_novncproxy_haproxy ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy]/Firewall[100 nova_novncproxy_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy]/Firewall[100 nova_novncproxy_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy]/Firewall[100 nova_novncproxy_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy]/Firewall[100 nova_novncproxy_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy]/Firewall[100 nova_novncproxy_haproxy ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy]/Firewall[100 nova_novncproxy_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy]/Firewall[100 nova_novncproxy_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy]/Firewall[100 nova_novncproxy_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy]/Firewall[100 nova_novncproxy_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl]/Firewall[100 nova_novncproxy_haproxy_ssl ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl]/Firewall[100 nova_novncproxy_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl]/Firewall[100 nova_novncproxy_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl]/Firewall[100 nova_novncproxy_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl]/Firewall[100 nova_novncproxy_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl]/Firewall[100 nova_novncproxy_haproxy_ssl ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl]/Firewall[100 nova_novncproxy_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl]/Firewall[100 nova_novncproxy_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl]/Firewall[100 nova_novncproxy_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl]/Firewall[100 nova_novncproxy_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy]/Firewall[100 aodh_haproxy ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy]/Firewall[100 aodh_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy]/Firewall[100 aodh_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy]/Firewall[100 aodh_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy]/Firewall[100 aodh_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy]/Firewall[100 aodh_haproxy ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy]/Firewall[100 aodh_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy]/Firewall[100 aodh_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy]/Firewall[100 aodh_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy]/Firewall[100 aodh_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy_ssl]/Firewall[100 aodh_haproxy_ssl ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy_ssl]/Firewall[100 aodh_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy_ssl]/Firewall[100 aodh_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy_ssl]/Firewall[100 aodh_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy_ssl]/Firewall[100 aodh_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy_ssl]/Firewall[100 aodh_haproxy_ssl ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy_ssl]/Firewall[100 aodh_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy_ssl]/Firewall[100 aodh_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy_ssl]/Firewall[100 aodh_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy_ssl]/Firewall[100 aodh_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy]/Firewall[100 panko_haproxy ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy]/Firewall[100 panko_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy]/Firewall[100 panko_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy]/Firewall[100 panko_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy]/Firewall[100 panko_haproxy ipv4]/before: subscribes to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy]/Firewall[100 panko_haproxy ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy]/Firewall[100 panko_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy]/Firewall[100 panko_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy]/Firewall[100 panko_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy]/Firewall[100 panko_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy_ssl]/Firewall[100 panko_haproxy_ssl ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy_ssl]/Firewall[100 panko_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy_ssl]/Firewall[100 panko_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy_ssl]/Firewall[100 panko_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy_ssl]/Firewall[100 panko_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy_ssl]/Firewall[100 panko_haproxy_ssl ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy_ssl]/Firewall[100 panko_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy_ssl]/Firewall[100 panko_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy_ssl]/Firewall[100 panko_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy_ssl]/Firewall[100 panko_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy]/Firewall[100 gnocchi_haproxy ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy]/Firewall[100 gnocchi_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy]/Firewall[100 gnocchi_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy]/Firewall[100 gnocchi_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy]/Firewall[100 gnocchi_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy]/Firewall[100 gnocchi_haproxy ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy]/Firewall[100 gnocchi_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy]/Firewall[100 gnocchi_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy]/Firewall[100 gnocchi_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy]/Firewall[100 gnocchi_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl]/Firewall[100 gnocchi_haproxy_ssl ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl]/Firewall[100 gnocchi_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl]/Firewall[100 gnocchi_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl]/Firewall[100 gnocchi_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl]/Firewall[100 gnocchi_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 
gnocchi_haproxy_ssl]/Firewall[100 gnocchi_haproxy_ssl ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl]/Firewall[100 gnocchi_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl]/Firewall[100 gnocchi_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl]/Firewall[100 gnocchi_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl]/Firewall[100 gnocchi_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy]/Firewall[100 swift_proxy_server_haproxy ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy]/Firewall[100 swift_proxy_server_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy]/Firewall[100 swift_proxy_server_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy]/Firewall[100 swift_proxy_server_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy]/Firewall[100 swift_proxy_server_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy]/Firewall[100 swift_proxy_server_haproxy ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy]/Firewall[100 swift_proxy_server_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy]/Firewall[100 swift_proxy_server_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy]/Firewall[100 swift_proxy_server_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy]/Firewall[100 
swift_proxy_server_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl]/Firewall[100 swift_proxy_server_haproxy_ssl ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl]/Firewall[100 swift_proxy_server_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl]/Firewall[100 swift_proxy_server_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl]/Firewall[100 swift_proxy_server_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl]/Firewall[100 swift_proxy_server_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl]/Firewall[100 swift_proxy_server_haproxy_ssl ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl]/Firewall[100 swift_proxy_server_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl]/Firewall[100 swift_proxy_server_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl]/Firewall[100 swift_proxy_server_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl]/Firewall[100 swift_proxy_server_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy]/Firewall[100 heat_api_haproxy ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy]/Firewall[100 heat_api_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy]/Firewall[100 heat_api_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy]/Firewall[100 heat_api_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy]/Firewall[100 heat_api_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy]/Firewall[100 heat_api_haproxy ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy]/Firewall[100 heat_api_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy]/Firewall[100 heat_api_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy]/Firewall[100 heat_api_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy]/Firewall[100 heat_api_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl]/Firewall[100 heat_api_haproxy_ssl ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl]/Firewall[100 heat_api_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl]/Firewall[100 heat_api_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl]/Firewall[100 heat_api_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl]/Firewall[100 heat_api_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl]/Firewall[100 heat_api_haproxy_ssl ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl]/Firewall[100 heat_api_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl]/Firewall[100 heat_api_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl]/Firewall[100 heat_api_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl]/Firewall[100 heat_api_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy]/Firewall[100 heat_cfn_haproxy ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy]/Firewall[100 heat_cfn_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy]/Firewall[100 heat_cfn_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy]/Firewall[100 heat_cfn_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy]/Firewall[100 heat_cfn_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy]/Firewall[100 heat_cfn_haproxy ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy]/Firewall[100 heat_cfn_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy]/Firewall[100 heat_cfn_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy]/Firewall[100 heat_cfn_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy]/Firewall[100 heat_cfn_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl]/Firewall[100 heat_cfn_haproxy_ssl ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl]/Firewall[100 heat_cfn_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl]/Firewall[100 heat_cfn_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl]/Firewall[100 heat_cfn_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl]/Firewall[100 heat_cfn_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl]/Firewall[100 heat_cfn_haproxy_ssl ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl]/Firewall[100 heat_cfn_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl]/Firewall[100 heat_cfn_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl]/Firewall[100 heat_cfn_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl]/Firewall[100 heat_cfn_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 
aodh-api]/Firewall[128 aodh-api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 
ceph_mon]/Firewall[110 ceph_mon ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi 
initiator]/Firewall[120 iscsi initiator ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 
glance_api ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv4]/before: subscribes to 
Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv6]/before: subscribes to 
Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[memcached]/Tripleo::Firewall::Rule[121 memcached]/Firewall[121 memcached ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[memcached]/Tripleo::Firewall::Rule[121 memcached]/Firewall[121 memcached ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[memcached]/Tripleo::Firewall::Rule[121 memcached]/Firewall[121 memcached ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[memcached]/Tripleo::Firewall::Rule[121 memcached]/Firewall[121 memcached ipv4]/before: subscribes to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[memcached]/Tripleo::Firewall::Rule[121 memcached]/Firewall[121 memcached ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv4]/before: subscribes to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp 
input ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 
nova_placement]/Firewall[138 nova_placement ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 
ntp]/Firewall[105 ntp ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv6]/before: subscribes to 
Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[oslo_messaging_rpc]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[oslo_messaging_rpc]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[oslo_messaging_rpc]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[oslo_messaging_rpc]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[oslo_messaging_rpc]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[oslo_messaging_rpc]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[oslo_messaging_rpc]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[oslo_messaging_rpc]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[oslo_messaging_rpc]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[oslo_messaging_rpc]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[sahara_api]/Tripleo::Firewall::Rule[132 sahara]/Firewall[132 sahara ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[sahara_api]/Tripleo::Firewall::Rule[132 sahara]/Firewall[132 sahara ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[sahara_api]/Tripleo::Firewall::Rule[132 sahara]/Firewall[132 sahara ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[sahara_api]/Tripleo::Firewall::Rule[132 sahara]/Firewall[132 sahara ipv4]/before: 
subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[sahara_api]/Tripleo::Firewall::Rule[132 sahara]/Firewall[132 sahara ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[sahara_api]/Tripleo::Firewall::Rule[132 sahara]/Firewall[132 sahara ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[sahara_api]/Tripleo::Firewall::Rule[132 sahara]/Firewall[132 sahara ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[sahara_api]/Tripleo::Firewall::Rule[132 sahara]/Firewall[132 sahara ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[sahara_api]/Tripleo::Firewall::Rule[132 sahara]/Firewall[132 sahara ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[sahara_api]/Tripleo::Firewall::Rule[132 sahara]/Firewall[132 sahara ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv4]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv6]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Config[haproxy]/Concat[/etc/haproxy/haproxy.cfg]/Concat_file[/etc/haproxy/haproxy.cfg]/before: subscribes to File[/etc/haproxy/haproxy.cfg]", > "Debug: /Firewall[000 accept related established rules ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[000 accept related established rules ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[000 accept related established rules ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[000 accept related established rules ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[000 accept related established rules ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[000 accept related established rules ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", 
> "Debug: /Firewall[000 accept related established rules ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[000 accept related established rules ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[000 accept related established rules ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[000 accept related established rules ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[000 accept related established rules ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[000 accept related established rules ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[000 accept related established rules ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[000 accept related established rules ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[001 accept all icmp ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[001 accept all icmp ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[001 accept all icmp ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[001 accept all icmp ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[001 accept all icmp ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[001 accept all icmp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[001 accept all icmp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[001 accept all icmp ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[001 accept all icmp ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[001 accept all icmp ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[001 accept all icmp ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[001 accept all icmp ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[001 accept all icmp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[001 accept all icmp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[002 accept all to lo interface ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[002 accept all to lo interface ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[002 accept all to lo interface ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[002 accept all to lo interface ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[002 accept all to lo interface ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[002 accept all to lo interface ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[002 accept all to lo interface ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[002 accept all to lo interface ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[002 accept all to lo interface ipv6]: 
Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[002 accept all to lo interface ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[002 accept all to lo interface ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[002 accept all to lo interface ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[002 accept all to lo interface ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[002 accept all to lo interface ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[003 accept ssh ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[003 accept ssh ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[003 accept ssh ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[003 accept ssh ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[003 accept ssh ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[003 accept ssh ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[003 accept ssh ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[003 accept ssh ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[003 accept ssh ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[003 accept ssh ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[003 accept ssh ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[003 accept ssh ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[003 accept ssh ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[003 accept ssh ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[004 accept ipv6 dhcpv6 ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[004 accept ipv6 dhcpv6 ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[004 accept ipv6 dhcpv6 ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[004 accept ipv6 dhcpv6 ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[004 accept ipv6 dhcpv6 ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[004 accept ipv6 dhcpv6 ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[004 accept ipv6 dhcpv6 ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[998 log all ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[998 log all ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[998 log all ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[998 log all ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[998 log all ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[998 log all ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[998 log all ipv4]: Adding 
autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[998 log all ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[998 log all ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[998 log all ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[998 log all ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[998 log all ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[998 log all ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[998 log all ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[999 drop all ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[999 drop all ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[999 drop all ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[999 drop all ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[999 drop all ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[999 drop all ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[999 drop all ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[999 drop all ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[999 drop all ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[999 drop all ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[999 drop all ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[999 drop all ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[999 drop all ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[999 drop all ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 mysql_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 mysql_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 mysql_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 mysql_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 mysql_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 mysql_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 mysql_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 mysql_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 mysql_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 mysql_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 mysql_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 mysql_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 mysql_haproxy ipv6]: Adding autobefore relationship with 
File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 mysql_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 redis_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 redis_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 redis_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 redis_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 redis_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 redis_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 redis_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 redis_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 redis_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 redis_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 redis_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 redis_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 redis_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 redis_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 keystone_public_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 keystone_public_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 keystone_public_haproxy ipv4]: 
Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 keystone_public_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 keystone_public_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 keystone_public_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 keystone_public_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 keystone_public_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 keystone_public_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 keystone_public_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 keystone_public_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 keystone_public_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 keystone_public_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 keystone_public_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 neutron_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 neutron_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 neutron_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 neutron_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 neutron_haproxy ipv4]: Adding autorequire relationship with 
Service[ip6tables]", > "Debug: /Firewall[100 neutron_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 neutron_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 neutron_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 neutron_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 neutron_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 neutron_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 neutron_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 neutron_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 neutron_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 cinder_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 cinder_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 cinder_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 cinder_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 cinder_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 cinder_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 cinder_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 cinder_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 cinder_haproxy ipv6]: Adding autorequire relationship with 
Package[iptables-services]", > "Debug: /Firewall[100 cinder_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 cinder_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 cinder_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 cinder_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 cinder_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 cinder_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 cinder_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 cinder_haproxy_ssl ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 cinder_haproxy_ssl ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 cinder_haproxy_ssl ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 cinder_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 cinder_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 cinder_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 cinder_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 cinder_haproxy_ssl ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 cinder_haproxy_ssl ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 cinder_haproxy_ssl ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 cinder_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 cinder_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 sahara_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 sahara_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 sahara_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 sahara_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 sahara_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 sahara_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 sahara_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 sahara_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 sahara_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 sahara_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 sahara_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 sahara_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 sahara_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 
sahara_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 sahara_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 sahara_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 sahara_haproxy_ssl ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 sahara_haproxy_ssl ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 sahara_haproxy_ssl ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 sahara_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 sahara_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 sahara_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 sahara_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 sahara_haproxy_ssl ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 sahara_haproxy_ssl ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 sahara_haproxy_ssl ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 sahara_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 sahara_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 glance_api_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 glance_api_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 glance_api_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 glance_api_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 glance_api_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 glance_api_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 glance_api_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 glance_api_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 glance_api_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 glance_api_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 glance_api_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 glance_api_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 glance_api_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 glance_api_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 glance_api_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 glance_api_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 glance_api_haproxy_ssl ipv4]: Adding autorequire relationship with 
Service[firewalld]", > "Debug: /Firewall[100 glance_api_haproxy_ssl ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 glance_api_haproxy_ssl ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 glance_api_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 glance_api_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 glance_api_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 glance_api_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 glance_api_haproxy_ssl ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 glance_api_haproxy_ssl ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 glance_api_haproxy_ssl ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 glance_api_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 glance_api_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", 
> "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 nova_placement_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 nova_placement_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 nova_placement_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 nova_placement_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 nova_placement_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 nova_placement_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 nova_placement_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 nova_placement_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 nova_placement_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 nova_placement_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 nova_placement_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 nova_placement_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 nova_placement_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 nova_placement_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 
nova_placement_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv6]: Adding autorequire 
relationship with Service[iptables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 aodh_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 aodh_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 aodh_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 aodh_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 aodh_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 aodh_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 aodh_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 aodh_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 aodh_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 aodh_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 aodh_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 aodh_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 aodh_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 aodh_haproxy ipv6]: Adding autobefore relationship with 
File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 panko_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 panko_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 panko_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 panko_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 panko_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 panko_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 panko_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 panko_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 panko_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 panko_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 panko_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 panko_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 panko_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 panko_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv4]: 
Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 gnocchi_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 gnocchi_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 gnocchi_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 gnocchi_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 gnocchi_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 gnocchi_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 gnocchi_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 gnocchi_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 gnocchi_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 gnocchi_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 gnocchi_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 gnocchi_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 gnocchi_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 gnocchi_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv6]: Adding 
autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: 
/Firewall[100 swift_proxy_server_haproxy_ssl ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 heat_api_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 heat_api_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 heat_api_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 heat_api_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 heat_api_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 heat_api_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 heat_api_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 heat_api_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 heat_api_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 heat_api_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 heat_api_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 heat_api_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 heat_api_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 heat_api_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv6]: 
Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[128 aodh-api ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[128 aodh-api ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[128 aodh-api ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[128 aodh-api ipv4]: Adding 
autorequire relationship with Service[iptables]", > "Debug: /Firewall[128 aodh-api ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[128 aodh-api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[128 aodh-api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[128 aodh-api ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[128 aodh-api ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[128 aodh-api ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[128 aodh-api ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[128 aodh-api ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[128 aodh-api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[128 aodh-api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[113 ceph_mgr ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[113 ceph_mgr ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[113 ceph_mgr ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[113 ceph_mgr ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[113 ceph_mgr ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[113 ceph_mgr ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[113 ceph_mgr ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[113 ceph_mgr ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[113 ceph_mgr ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[113 ceph_mgr ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[113 ceph_mgr ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[113 ceph_mgr ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[113 ceph_mgr ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[113 ceph_mgr ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[110 ceph_mon ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[110 ceph_mon ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[110 ceph_mon ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[110 ceph_mon ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[110 ceph_mon ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[110 ceph_mon ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[110 ceph_mon ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[110 ceph_mon ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[110 ceph_mon ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[110 ceph_mon ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[110 ceph_mon 
ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[110 ceph_mon ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[110 ceph_mon ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[110 ceph_mon ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[119 cinder ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[119 cinder ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[119 cinder ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[119 cinder ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[119 cinder ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[119 cinder ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[119 cinder ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[119 cinder ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[119 cinder ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[119 cinder ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[119 cinder ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[119 cinder ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[119 cinder ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[119 cinder ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[120 iscsi initiator ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[120 iscsi initiator ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[120 iscsi initiator ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[120 iscsi initiator ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[120 iscsi initiator ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[120 iscsi initiator ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[120 iscsi initiator ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[120 iscsi initiator ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[120 iscsi initiator ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[120 iscsi initiator ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[120 iscsi initiator ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[120 iscsi initiator ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[120 iscsi initiator ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[120 iscsi initiator ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[112 glance_api ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[112 glance_api ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[112 glance_api ipv4]: 
Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[112 glance_api ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[112 glance_api ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[112 glance_api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[112 glance_api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[112 glance_api ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[112 glance_api ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[112 glance_api ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[112 glance_api ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[112 glance_api ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[112 glance_api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[112 glance_api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[129 gnocchi-api ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[129 gnocchi-api ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[129 gnocchi-api ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[129 gnocchi-api ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[129 gnocchi-api ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[129 gnocchi-api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[129 gnocchi-api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[129 gnocchi-api ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[129 gnocchi-api ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[129 gnocchi-api ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[129 gnocchi-api ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[129 gnocchi-api ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[129 gnocchi-api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[129 gnocchi-api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[140 gnocchi-statsd ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[140 gnocchi-statsd ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[140 gnocchi-statsd ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[140 gnocchi-statsd ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[140 gnocchi-statsd ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[140 gnocchi-statsd ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[140 gnocchi-statsd ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[140 gnocchi-statsd ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[140 
gnocchi-statsd ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[140 gnocchi-statsd ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[140 gnocchi-statsd ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[140 gnocchi-statsd ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[140 gnocchi-statsd ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[140 gnocchi-statsd ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[107 haproxy stats ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[107 haproxy stats ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[107 haproxy stats ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[107 haproxy stats ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[107 haproxy stats ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[107 haproxy stats ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[107 haproxy stats ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[107 haproxy stats ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[107 haproxy stats ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[107 haproxy stats ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[107 haproxy stats ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[107 haproxy stats ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[107 haproxy stats ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[107 haproxy stats ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[125 heat_api ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[125 heat_api ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[125 heat_api ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[125 heat_api ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[125 heat_api ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[125 heat_api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[125 heat_api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[125 heat_api ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[125 heat_api ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[125 heat_api ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[125 heat_api ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[125 heat_api ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[125 heat_api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[125 heat_api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > 
"Debug: /Firewall[125 heat_cfn ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[125 heat_cfn ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[125 heat_cfn ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[125 heat_cfn ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[125 heat_cfn ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[125 heat_cfn ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[125 heat_cfn ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[125 heat_cfn ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[125 heat_cfn ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[125 heat_cfn ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[125 heat_cfn ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[125 heat_cfn ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[125 heat_cfn ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[125 heat_cfn ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[127 horizon ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[127 horizon ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[127 horizon ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[127 horizon ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[127 horizon ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[127 horizon ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[127 horizon ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[127 horizon ipv6]: Adding autorequire relatio >nship with Package[iptables]", > "Debug: /Firewall[127 horizon ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[127 horizon ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[127 horizon ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[127 horizon ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[127 horizon ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[127 horizon ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[111 keystone ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[111 keystone ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[111 keystone ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[111 keystone ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[111 keystone ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[111 keystone ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[111 keystone ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > 
"Debug: /Firewall[111 keystone ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[111 keystone ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[111 keystone ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[111 keystone ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[111 keystone ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[111 keystone ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[111 keystone ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[121 memcached ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[121 memcached ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[121 memcached ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[121 memcached ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[121 memcached ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[121 memcached ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[121 memcached ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[104 mysql galera-bundle ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[104 mysql galera-bundle ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[104 mysql galera-bundle ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[104 mysql galera-bundle ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[104 mysql galera-bundle ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[104 mysql galera-bundle ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[104 mysql galera-bundle ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[104 mysql galera-bundle ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[104 mysql galera-bundle ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[104 mysql galera-bundle ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[104 mysql galera-bundle ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[104 mysql galera-bundle ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[104 mysql galera-bundle ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[104 mysql galera-bundle ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[114 neutron api ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[114 neutron api ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[114 neutron api ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[114 neutron api ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[114 neutron api ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[114 neutron 
api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[114 neutron api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[114 neutron api ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[114 neutron api ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[114 neutron api ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[114 neutron api ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[114 neutron api ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[114 neutron api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[114 neutron api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[115 neutron dhcp input ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[115 neutron dhcp input ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[115 neutron dhcp input ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[115 neutron dhcp input ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[115 neutron dhcp input ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[115 neutron dhcp input ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[115 neutron dhcp input ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[115 neutron dhcp input ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[115 neutron dhcp input ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[115 neutron dhcp input ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[115 neutron dhcp input ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[115 neutron dhcp input ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[115 neutron dhcp input ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[115 neutron dhcp input ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[116 neutron dhcp output ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[116 neutron dhcp output ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[116 neutron dhcp output ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[116 neutron dhcp output ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[116 neutron dhcp output ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[116 neutron dhcp output ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[116 neutron dhcp output ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[116 neutron dhcp output ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[116 neutron dhcp output ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[116 neutron dhcp output 
ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[116 neutron dhcp output ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[116 neutron dhcp output ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[116 neutron dhcp output ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[116 neutron dhcp output ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[118 neutron vxlan networks ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[118 neutron vxlan networks ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[118 neutron vxlan networks ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[118 neutron vxlan networks ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[118 neutron vxlan networks ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[118 neutron vxlan networks ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[118 neutron vxlan networks ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[118 neutron vxlan networks ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[118 neutron vxlan networks ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[118 neutron vxlan networks ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[118 neutron vxlan networks ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[118 neutron vxlan networks ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[118 neutron vxlan networks ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: 
/Firewall[118 neutron vxlan networks ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[136 neutron gre networks ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[136 neutron gre networks ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[136 neutron gre networks ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[136 neutron gre networks ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[136 neutron gre networks ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[136 neutron gre networks ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[136 neutron gre networks ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[136 neutron gre networks ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[136 neutron gre networks ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[136 neutron gre networks ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[136 neutron gre networks ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[136 neutron gre networks ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[136 neutron gre networks ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[136 neutron gre networks ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[113 nova_api ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[113 nova_api ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[113 nova_api ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[113 nova_api ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[113 nova_api ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[113 nova_api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[113 nova_api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[113 nova_api ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[113 nova_api ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[113 nova_api ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[113 nova_api ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[113 nova_api ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[113 nova_api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[113 nova_api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[138 nova_placement ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[138 nova_placement ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[138 nova_placement ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[138 nova_placement ipv4]: Adding autorequire relationship with Service[iptables]", 
> "Debug: /Firewall[138 nova_placement ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[138 nova_placement ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[138 nova_placement ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[138 nova_placement ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[138 nova_placement ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[138 nova_placement ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[138 nova_placement ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[138 nova_placement ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[138 nova_placement ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[138 nova_placement ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[137 nova_vnc_proxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[137 nova_vnc_proxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[137 nova_vnc_proxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[137 nova_vnc_proxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[137 nova_vnc_proxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[137 nova_vnc_proxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[137 nova_vnc_proxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[137 nova_vnc_proxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[137 nova_vnc_proxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[137 nova_vnc_proxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[137 nova_vnc_proxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[137 nova_vnc_proxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[137 nova_vnc_proxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[137 nova_vnc_proxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[105 ntp ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[105 ntp ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[105 ntp ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[105 ntp ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[105 ntp ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[105 ntp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[105 ntp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[105 ntp ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[105 ntp ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[105 ntp ipv6]: Adding autorequire relationship with 
Service[firewalld]", > "Debug: /Firewall[105 ntp ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[105 ntp ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[105 ntp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[105 ntp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[130 pacemaker tcp ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[130 pacemaker tcp ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[130 pacemaker tcp ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[130 pacemaker tcp ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[130 pacemaker tcp ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[130 pacemaker tcp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[130 pacemaker tcp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[130 pacemaker tcp ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[130 pacemaker tcp ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[130 pacemaker tcp ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[130 pacemaker tcp ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[130 pacemaker tcp ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[130 pacemaker tcp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[130 pacemaker tcp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[131 pacemaker udp ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[131 pacemaker udp ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[131 pacemaker udp ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[131 pacemaker udp ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[131 pacemaker udp ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[131 pacemaker udp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[131 pacemaker udp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[131 pacemaker udp ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[131 pacemaker udp ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[131 pacemaker udp ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[131 pacemaker udp ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[131 pacemaker udp ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[131 pacemaker udp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[131 pacemaker udp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[140 panko-api ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[140 panko-api ipv4]: Adding 
autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[140 panko-api ipv4]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[140 panko-api ipv4]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[140 panko-api ipv4]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[140 panko-api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[140 panko-api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[140 panko-api ipv6]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[140 panko-api ipv6]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[140 panko-api ipv6]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[140 panko-api ipv6]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[140 panko-api ipv6]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[140 panko-api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[140 panko-api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[109 rabbitmq-bundle ipv4]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[109 rabbitmq-bundle ipv4]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[109 rabbitmq-bundle ipv4]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[109 rabbitmq-bundle ipv4]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[109 rabbitmq-bundle ipv4]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[109 rabbitmq-bundle ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[109 rabbitmq-bundle ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[109 rabbitmq-bundle ipv6]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[109 rabbitmq-bundle ipv6]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[109 rabbitmq-bundle ipv6]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[109 rabbitmq-bundle ipv6]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[109 rabbitmq-bundle ipv6]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[109 rabbitmq-bundle ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[109 rabbitmq-bundle ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[108 redis-bundle ipv4]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[108 redis-bundle ipv4]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[108 redis-bundle ipv4]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[108 redis-bundle ipv4]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[108 redis-bundle ipv4]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[108 redis-bundle ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[108 redis-bundle ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[108 redis-bundle ipv6]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[108 redis-bundle ipv6]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[108 redis-bundle ipv6]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[108 redis-bundle ipv6]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[108 redis-bundle ipv6]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[108 redis-bundle ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[108 redis-bundle ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[132 sahara ipv4]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[132 sahara ipv4]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[132 sahara ipv4]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[132 sahara ipv4]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[132 sahara ipv4]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[132 sahara ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[132 sahara ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[132 sahara ipv6]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[132 sahara ipv6]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[132 sahara ipv6]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[132 sahara ipv6]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[132 sahara ipv6]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[132 sahara ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[132 sahara ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[122 swift proxy ipv4]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[122 swift proxy ipv4]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[122 swift proxy ipv4]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[122 swift proxy ipv4]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[122 swift proxy ipv4]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[122 swift proxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[122 swift proxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[122 swift proxy ipv6]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[122 swift proxy ipv6]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[122 swift proxy ipv6]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[122 swift proxy ipv6]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[122 swift proxy ipv6]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[122 swift proxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[122 swift proxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[123 swift storage ipv4]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[123 swift storage ipv4]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[123 swift storage ipv4]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[123 swift storage ipv4]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[123 swift storage ipv4]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[123 swift storage ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[123 swift storage ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Firewall[123 swift storage ipv6]: Adding autorequire relationship with Package[iptables]",
> "Debug: /Firewall[123 swift storage ipv6]: Adding autorequire relationship with Package[iptables-services]",
> "Debug: /Firewall[123 swift storage ipv6]: Adding autorequire relationship with Service[firewalld]",
> "Debug: /Firewall[123 swift storage ipv6]: Adding autorequire relationship with Service[iptables]",
> "Debug: /Firewall[123 swift storage ipv6]: Adding autorequire relationship with Service[ip6tables]",
> "Debug: /Firewall[123 swift storage ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]",
> "Debug: /Firewall[123 swift storage ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]",
> "Debug: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Config[haproxy]/Concat[/etc/haproxy/haproxy.cfg]/Concat_file[/etc/haproxy/haproxy.cfg]: Skipping automatic relationship with File[/etc/haproxy/haproxy.cfg]",
> "Debug: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Config[haproxy]/Concat[/etc/haproxy/haproxy.cfg]/File[/etc/haproxy/haproxy.cfg]: Adding autorequire relationship with File[/etc/haproxy]",
> "Debug: Stage[main]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Class[Settings]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Class[Main]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Class[Tripleo::Profile::Base::Pacemaker]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Class[Pacemaker::Params]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Class[Pacemaker::Install]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: /Stage[main]/Pacemaker::Install/Package[pacemaker]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: /Stage[main]/Pacemaker::Install/Package[pcs]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: /Stage[main]/Pacemaker::Install/Package[fence-agents-all]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: /Stage[main]/Pacemaker::Install/Package[pacemaker-libs]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Class[Pacemaker::Service]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Systemd::Unit_file[docker.service]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Class[Pacemaker::Stonith]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Pacemaker::Property[Disable STONITH]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Class[Pacemaker::Resource_defaults]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: /Stage[main]/Pacemaker::Resource_defaults/Pcmk_resource_default[resource-stickiness]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Class[Tripleo::Profile::Pacemaker::Haproxy_bundle]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Class[Tripleo::Profile::Pacemaker::Haproxy_bundle]: Resource is being skipped, unscheduling all events",
> "Debug: Class[Tripleo::Profile::Base::Haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Class[Tripleo::Profile::Base::Haproxy]: Resource is being skipped, unscheduling all events",
> "Debug: Class[Tripleo::Haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Class[Tripleo::Haproxy]: Resource is being skipped, unscheduling all events",
> "Debug: Class[Haproxy::Params]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Class[Haproxy::Params]: Resource is being skipped, unscheduling all events",
> "Debug: Class[Haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Class[Haproxy]: Resource is being skipped, unscheduling all events",
> "Debug: Haproxy::Instance[haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Haproxy::Instance[haproxy]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[aodh_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[aodh_api]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[aodh_evaluator]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[aodh_evaluator]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[aodh_listener]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[aodh_listener]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[aodh_notifier]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[aodh_notifier]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[ca_certs]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[ca_certs]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[ceilometer_agent_central]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[ceilometer_agent_central]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[ceilometer_agent_notification]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[ceilometer_agent_notification]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[ceph_mgr]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[ceph_mgr]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[ceph_mon]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[ceph_mon]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[certmonger_user]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[certmonger_user]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[cinder_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[cinder_api]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[cinder_backup]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[cinder_backup]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[cinder_scheduler]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[cinder_scheduler]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[cinder_volume]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[cinder_volume]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[clustercheck]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[clustercheck]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[container_image_prepare]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[container_image_prepare]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[docker]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[docker]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[glance_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[glance_api]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[glance_registry_disabled]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[glance_registry_disabled]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[gnocchi_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[gnocchi_api]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[gnocchi_metricd]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[gnocchi_metricd]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[gnocchi_statsd]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[gnocchi_statsd]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[haproxy]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[heat_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[heat_api]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[heat_api_cloudwatch_disabled]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[heat_api_cloudwatch_disabled]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[heat_api_cfn]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[heat_api_cfn]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[heat_engine]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[heat_engine]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[horizon]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[horizon]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[iscsid]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[iscsid]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[kernel]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[kernel]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[keystone]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[keystone]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[memcached]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[memcached]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[mongodb_disabled]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[mongodb_disabled]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[mysql]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[mysql]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[mysql_client]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[mysql_client]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[neutron_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[neutron_api]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[neutron_plugin_ml2]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[neutron_plugin_ml2]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[neutron_dhcp]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[neutron_dhcp]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[neutron_l3]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[neutron_l3]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[neutron_metadata]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[neutron_metadata]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[neutron_ovs_agent]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[neutron_ovs_agent]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[nova_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[nova_api]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[nova_conductor]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[nova_conductor]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[nova_consoleauth]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[nova_consoleauth]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[nova_metadata]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[nova_metadata]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[nova_placement]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[nova_placement]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[nova_scheduler]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[nova_scheduler]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[nova_vnc_proxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[nova_vnc_proxy]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[ntp]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[ntp]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[logrotate_crond]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[logrotate_crond]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[pacemaker]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[pacemaker]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[panko_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[panko_api]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[oslo_messaging_rpc]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[oslo_messaging_rpc]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[oslo_messaging_notify]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[oslo_messaging_notify]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[redis]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[redis]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[sahara_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[sahara_api]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[sahara_engine]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[sahara_engine]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[snmp]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[snmp]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[sshd]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[sshd]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[swift_proxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[swift_proxy]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[swift_ringbuilder]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[swift_ringbuilder]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[swift_storage]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[swift_storage]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[timezone]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[timezone]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[tripleo_firewall]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[tripleo_firewall]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[tripleo_packages]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[tripleo_packages]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[tuned]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[tuned]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[xinetd]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[xinetd]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[ceph_client]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[ceph_client]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[ceilometer_agent_compute]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[ceilometer_agent_compute]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[nova_compute]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[nova_compute]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[nova_libvirt]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[nova_libvirt]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[nova_libvirt_guests]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[nova_libvirt_guests]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[nova_migration_target]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[nova_migration_target]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Service_endpoints[ceph_osd]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Service_endpoints[ceph_osd]: Resource is being skipped, unscheduling all events",
> "Debug: Class[Tripleo::Haproxy::Stats]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Class[Tripleo::Haproxy::Stats]: Resource is being skipped, unscheduling all events",
> "Debug: Haproxy::Listen[haproxy.stats]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Haproxy::Listen[haproxy.stats]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Endpoint[keystone_admin]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Endpoint[keystone_admin]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Endpoint[keystone_public]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Endpoint[keystone_public]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Endpoint[neutron]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Endpoint[neutron]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Endpoint[cinder]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Endpoint[cinder]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Endpoint[sahara]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Endpoint[sahara]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Endpoint[glance_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Endpoint[glance_api]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Endpoint[nova_osapi]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Endpoint[nova_osapi]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Endpoint[nova_placement]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Endpoint[nova_placement]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Endpoint[nova_metadata]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Endpoint[nova_metadata]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Endpoint[nova_novncproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Endpoint[nova_novncproxy]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Endpoint[aodh]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Endpoint[aodh]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Endpoint[panko]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Endpoint[panko]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Endpoint[gnocchi]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Endpoint[gnocchi]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Endpoint[swift_proxy_server]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Endpoint[swift_proxy_server]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Endpoint[heat_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Endpoint[heat_api]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Haproxy::Endpoint[heat_cfn]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Haproxy::Endpoint[heat_cfn]: Resource is being skipped, unscheduling all events",
> "Debug: Class[Tripleo::Haproxy::Horizon_endpoint]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Class[Tripleo::Haproxy::Horizon_endpoint]: Resource is being skipped, unscheduling all events",
> "Debug: Haproxy::Listen[horizon]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Haproxy::Listen[horizon]: Resource is being skipped, unscheduling all events",
> "Debug: Haproxy::Balancermember[horizon_172.17.1.17_controller-0.internalapi.localdomain]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Haproxy::Balancermember[horizon_172.17.1.17_controller-0.internalapi.localdomain]: Resource is being skipped, unscheduling all events",
> "Debug: Haproxy::Listen[mysql]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Haproxy::Listen[mysql]: Resource is being skipped, unscheduling all events",
> "Debug: Haproxy::Balancermember[mysql-backup]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Haproxy::Balancermember[mysql-backup]: Resource is being skipped, unscheduling all events",
> "Debug: Class[Tripleo::Firewall]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Class[Tripleo::Firewall]: Resource is being skipped, unscheduling all events",
> "Debug: Class[Tripleo::Firewall::Pre]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Class[Tripleo::Firewall::Pre]: Resource is being skipped, unscheduling all events",
> "Debug: Class[Firewall::Params]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Class[Firewall::Params]: Resource is being skipped, unscheduling all events",
> "Debug: Class[Firewall]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Class[Firewall]: Resource is being skipped, unscheduling all events",
> "Debug: Class[Firewall::Linux]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Class[Firewall::Linux]: Resource is being skipped, unscheduling all events",
> "Debug: /Stage[main]/Firewall::Linux/Package[iptables]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: /Stage[main]/Firewall::Linux/Package[iptables]: Resource is being skipped, unscheduling all events",
> "Debug: Class[Firewall::Linux::Redhat]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Class[Firewall::Linux::Redhat]: Resource is being skipped, unscheduling all events",
> "Debug: /Stage[main]/Firewall::Linux::Redhat/Service[firewalld]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: /Stage[main]/Firewall::Linux::Redhat/Service[firewalld]: Resource is being skipped, unscheduling all events",
> "Debug: /Stage[main]/Firewall::Linux::Redhat/Package[iptables-services]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: /Stage[main]/Firewall::Linux::Redhat/Package[iptables-services]: Resource is being skipped, unscheduling all events",
> "Debug: /Stage[main]/Firewall::Linux::Redhat/Exec[/usr/bin/systemctl daemon-reload]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: /Stage[main]/Firewall::Linux::Redhat/Exec[/usr/bin/systemctl daemon-reload]: Resource is being skipped, unscheduling all events",
> "Debug: /Stage[main]/Firewall::Linux::Redhat/Service[iptables]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: /Stage[main]/Firewall::Linux::Redhat/Service[iptables]: Resource is being skipped, unscheduling all events",
> "Debug: /Stage[main]/Firewall::Linux::Redhat/Service[ip6tables]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: /Stage[main]/Firewall::Linux::Redhat/Service[ip6tables]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Rule[000 accept related established rules]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Rule[000 accept related established rules]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Rule[001 accept all icmp]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Rule[001 accept all icmp]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Rule[002 accept all to lo interface]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Rule[002 accept all to lo interface]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Rule[003 accept ssh]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Rule[003 accept ssh]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Service_rules[aodh_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Service_rules[aodh_api]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Service_rules[aodh_evaluator]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Service_rules[aodh_evaluator]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Service_rules[aodh_listener]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Service_rules[aodh_listener]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Service_rules[aodh_notifier]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Service_rules[aodh_notifier]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Service_rules[ca_certs]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Service_rules[ca_certs]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Service_rules[ceilometer_agent_central]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Service_rules[ceilometer_agent_central]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Service_rules[ceilometer_agent_notification]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Service_rules[ceilometer_agent_notification]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Service_rules[ceph_mgr]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Service_rules[ceph_mgr]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Service_rules[ceph_mon]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Service_rules[ceph_mon]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Service_rules[certmonger_user]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Service_rules[certmonger_user]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Service_rules[cinder_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Service_rules[cinder_api]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Service_rules[cinder_backup]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Service_rules[cinder_backup]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Service_rules[cinder_scheduler]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Service_rules[cinder_scheduler]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Service_rules[cinder_volume]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Service_rules[cinder_volume]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Service_rules[clustercheck]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Service_rules[clustercheck]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Service_rules[container_image_prepare]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Service_rules[container_image_prepare]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Service_rules[docker]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Service_rules[docker]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Service_rules[glance_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Service_rules[glance_api]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Service_rules[glance_registry_disabled]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Service_rules[glance_registry_disabled]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Service_rules[gnocchi_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Service_rules[gnocchi_api]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Service_rules[gnocchi_metricd]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Service_rules[gnocchi_metricd]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Service_rules[gnocchi_statsd]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Service_rules[gnocchi_statsd]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Service_rules[haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Service_rules[haproxy]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Service_rules[heat_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Service_rules[heat_api]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Service_rules[heat_api_cloudwatch_disabled]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Service_rules[heat_api_cloudwatch_disabled]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Service_rules[heat_api_cfn]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Service_rules[heat_api_cfn]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Service_rules[heat_engine]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Service_rules[heat_engine]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Service_rules[horizon]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Service_rules[horizon]: Resource is being skipped,
unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[iscsid]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[iscsid]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[kernel]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[kernel]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[keystone]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[keystone]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[memcached]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[memcached]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[mongodb_disabled]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[mongodb_disabled]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[mysql]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[mysql]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[mysql_client]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[mysql_client]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[neutron_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[neutron_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[neutron_plugin_ml2]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, 
pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[neutron_plugin_ml2]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[neutron_dhcp]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[neutron_dhcp]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[neutron_l3]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[neutron_l3]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[neutron_metadata]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[neutron_metadata]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[neutron_ovs_agent]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[neutron_ovs_agent]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[nova_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[nova_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[nova_conductor]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[nova_conductor]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[nova_consoleauth]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[nova_consoleauth]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[nova_metadata]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[nova_metadata]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[nova_placement]: Not tagged with file, file_line, concat, 
augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[nova_placement]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[nova_scheduler]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[nova_scheduler]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[nova_vnc_proxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[nova_vnc_proxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[ntp]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[ntp]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[logrotate_crond]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[logrotate_crond]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[pacemaker]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[pacemaker]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[panko_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[panko_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[oslo_messaging_rpc]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[oslo_messaging_rpc]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[oslo_messaging_notify]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[oslo_messaging_notify]: 
Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[redis]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[redis]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[sahara_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[sahara_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[sahara_engine]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[sahara_engine]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[snmp]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[snmp]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[sshd]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[sshd]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[swift_proxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[swift_proxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[swift_ringbuilder]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[swift_ringbuilder]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[swift_storage]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[swift_storage]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[timezone]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, 
pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[timezone]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[tripleo_firewall]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[tripleo_firewall]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[tripleo_packages]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[tripleo_packages]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[tuned]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[tuned]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[xinetd]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[xinetd]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 mysql_haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 mysql_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[redis]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[redis]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[redis]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[redis]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 redis_haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 redis_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Pacemaker::Property[haproxy-role-controller-0]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, 
pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Property[haproxy-role-controller-0]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_control_vip]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_control_vip]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_public_vip]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_public_vip]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_redis_vip]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_redis_vip]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_internal_api_vip]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_internal_api_vip]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_vip]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_vip]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_mgmt_vip]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_mgmt_vip]: Resource is being skipped, unscheduling all events", > "Debug: Class[Systemd]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Pacemaker]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Pacemaker::Corosync]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, 
pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Service/Service[pcsd]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Corosync/User[hacluster]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[reauthenticate-across-all-nodes]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[auth-successful-across-all-nodes]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Create Cluster tripleo_cluster]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster tripleo_cluster]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Service/Service[corosync]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Service/Service[pacemaker]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Systemd::Systemctl::Daemon_reload]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Systemd::Systemctl::Daemon_reload/Exec[systemctl-daemon-reload]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: backup_cib: /usr/sbin/pcs 
cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1tngndk returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1tngndk property show | grep stonith-enabled | grep false > /dev/null 2>&1", > "Debug: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Anchor[haproxy::haproxy::begin]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Anchor[haproxy::haproxy::begin]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Install[haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Install[haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Class[Haproxy::Globals]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Haproxy::Globals]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-haproxy.stats_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-haproxy.stats_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[keystone_admin]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[keystone_admin]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[keystone_admin]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[keystone_admin]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 keystone_admin_haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 keystone_admin_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[keystone_public]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[keystone_public]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[keystone_public]: Not tagged with file, file_line, concat, augeas, 
tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[keystone_public]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 keystone_public_haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 keystone_public_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[neutron]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[neutron]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[neutron]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[neutron]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 neutron_haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 neutron_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 neutron_haproxy_ssl]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 neutron_haproxy_ssl]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[cinder]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[cinder]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[cinder]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[cinder]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 cinder_haproxy]: Not tagged with file, file_line, 
concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 cinder_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 cinder_haproxy_ssl]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 cinder_haproxy_ssl]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[sahara]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[sahara]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[sahara]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[sahara]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 sahara_haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 sahara_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 sahara_haproxy_ssl]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 sahara_haproxy_ssl]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[glance_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[glance_api]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[glance_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[glance_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 glance_api_haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 glance_api_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl]: Not tagged with file, 
file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[nova_osapi]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[nova_osapi]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[nova_osapi]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[nova_osapi]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 nova_osapi_haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 nova_osapi_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[nova_placement]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[nova_placement]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[nova_placement]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[nova_placement]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 nova_placement_haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 nova_placement_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl]: Resource is being skipped, unscheduling all events", > 
"Debug: Haproxy::Listen[nova_metadata]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[nova_metadata]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[nova_metadata]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[nova_metadata]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 nova_metadata_haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 nova_metadata_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[nova_novncproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[nova_novncproxy]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[nova_novncproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[nova_novncproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[aodh]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[aodh]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[aodh]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[aodh]: Resource is being skipped, unscheduling all 
events", > "Debug: Tripleo::Firewall::Rule[100 aodh_haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 aodh_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 aodh_haproxy_ssl]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 aodh_haproxy_ssl]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[panko]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[panko]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[panko]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[panko]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 panko_haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 panko_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 panko_haproxy_ssl]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 panko_haproxy_ssl]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[gnocchi]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[gnocchi]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[gnocchi]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[gnocchi]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 gnocchi_haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 gnocchi_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: 
Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl]: Resource is being skipped, unscheduling all events",
> "Debug: Haproxy::Listen[swift_proxy_server]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Haproxy::Listen[swift_proxy_server]: Resource is being skipped, unscheduling all events",
> "Debug: Haproxy::Balancermember[swift_proxy_server]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Haproxy::Balancermember[swift_proxy_server]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl]: Resource is being skipped, unscheduling all events",
> "Debug: Haproxy::Listen[heat_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Haproxy::Listen[heat_api]: Resource is being skipped, unscheduling all events",
> "Debug: Haproxy::Balancermember[heat_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Haproxy::Balancermember[heat_api]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Rule[100 heat_api_haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Rule[100 heat_api_haproxy]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl]: Resource is being skipped, unscheduling all events",
> "Debug: Haproxy::Listen[heat_cfn]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Haproxy::Listen[heat_cfn]: Resource is being skipped, unscheduling all events",
> "Debug: Haproxy::Balancermember[heat_cfn]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Haproxy::Balancermember[heat_cfn]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Rule[100 heat_cfn_haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Rule[100 heat_cfn_haproxy]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl]: Resource is being skipped, unscheduling all events",
> "Debug: Concat::Fragment[haproxy-horizon_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Concat::Fragment[haproxy-horizon_listen_block]: Resource is being skipped, unscheduling all events",
> "Debug: Concat::Fragment[haproxy-horizon_balancermember_horizon_172.17.1.17_controller-0.internalapi.localdomain]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Concat::Fragment[haproxy-horizon_balancermember_horizon_172.17.1.17_controller-0.internalapi.localdomain]: Resource is being skipped, unscheduling all events",
> "Debug: Concat::Fragment[haproxy-mysql_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Concat::Fragment[haproxy-mysql_listen_block]: Resource is being skipped, unscheduling all events",
> "Debug: Concat::Fragment[haproxy-mysql_balancermember_mysql-backup]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Concat::Fragment[haproxy-mysql_balancermember_mysql-backup]: Resource is being skipped, unscheduling all events",
> "Debug: Prefetching iptables resources for firewall",
> "Debug: Puppet::Type::Firewall::ProviderIptables: [prefetch(resources)]",
> "Debug: Puppet::Type::Firewall::ProviderIptables: [instances]",
> "Debug: Executing: '/usr/sbin/iptables-save'",
> "Debug: Prefetching ip6tables resources for firewall",
> "Debug: Puppet::Type::Firewall::ProviderIp6tables: [prefetch(resources)]",
> "Debug: Puppet::Type::Firewall::ProviderIp6tables: [instances]",
> "Debug: Executing: '/usr/sbin/ip6tables-save'",
> "Debug: Tripleo::Firewall::Rule[128 aodh-api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Rule[128 aodh-api]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Rule[113 ceph_mgr]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Rule[113 ceph_mgr]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Rule[110 ceph_mon]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Rule[110 ceph_mon]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Rule[119 cinder]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Rule[119 cinder]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Rule[120 iscsi initiator]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Rule[120 iscsi initiator]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Rule[112 glance_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Rule[112 glance_api]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Rule[129 gnocchi-api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Rule[129 gnocchi-api]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Rule[140 gnocchi-statsd]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Rule[140 gnocchi-statsd]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Rule[107 haproxy stats]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Rule[107 haproxy stats]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Rule[125 heat_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Rule[125 heat_api]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Rule[125 heat_cfn]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Rule[125 heat_cfn]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Rule[127 horizon]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Rule[127 horizon]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Rule[111 keystone]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Rule[111 keystone]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Rule[121 memcached]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Rule[121 memcached]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Rule[104 mysql galera-bundle]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Rule[104 mysql galera-bundle]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Rule[114 neutron api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Rule[114 neutron api]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Rule[115 neutron dhcp input]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Rule[115 neutron dhcp input]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Rule[116 neutron dhcp output]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Rule[116 neutron dhcp output]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Rule[106 neutron_l3 vrrp]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Rule[106 neutron_l3 vrrp]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Rule[118 neutron vxlan networks]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Rule[118 neutron vxlan networks]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Rule[136 neutron gre networks]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Rule[136 neutron gre networks]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Rule[113 nova_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Rule[113 nova_api]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Rule[138 nova_placement]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Rule[138 nova_placement]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Rule[137 nova_vnc_proxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Rule[137 nova_vnc_proxy]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Rule[105 ntp]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Rule[105 ntp]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Rule[130 pacemaker tcp]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Rule[130 pacemaker tcp]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Rule[131 pacemaker udp]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Rule[131 pacemaker udp]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Rule[140 panko-api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Rule[140 panko-api]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Rule[109 rabbitmq-bundle]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Rule[109 rabbitmq-bundle]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Rule[108 redis-bundle]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Rule[108 redis-bundle]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Rule[132 sahara]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Rule[132 sahara]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Rule[124 snmp]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Rule[124 snmp]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Rule[122 swift proxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Rule[122 swift proxy]: Resource is being skipped, unscheduling all events",
> "Debug: Tripleo::Firewall::Rule[123 swift storage]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Tripleo::Firewall::Rule[123 swift storage]: Resource is being skipped, unscheduling all events",
> "Debug: Firewall[100 mysql_haproxy ipv4](provider=iptables): Inserting rule 100 mysql_haproxy ipv4",
> "Debug: Firewall[100 mysql_haproxy ipv4](provider=iptables): [insert_order]",
> "Debug: Firewall[100 mysql_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall",
> "Debug: Executing: '/usr/sbin/iptables -I INPUT 5 --wait -t filter -p tcp -m multiport --dports 3306 -m state --state NEW -j ACCEPT -m comment --comment 100 mysql_haproxy ipv4'",
> "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 mysql_haproxy]/Firewall[100 mysql_haproxy ipv4]/ensure: created",
> "Debug: Firewall[100 mysql_haproxy ipv4](provider=iptables): [flush]",
> "Debug: Firewall[100 mysql_haproxy ipv4](provider=iptables): [persist_iptables]",
> "Debug: Executing: '/usr/libexec/iptables/iptables.init save'",
> "Debug: /Firewall[100 mysql_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 mysql_haproxy] will propagate my refresh event",
> "Debug: Firewall[100 mysql_haproxy ipv6](provider=ip6tables): Inserting rule 100 mysql_haproxy ipv6",
> "Debug: Firewall[100 mysql_haproxy ipv6](provider=ip6tables): [insert_order]",
> "Debug: Firewall[100 mysql_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall",
> "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 6 --wait -t filter -p tcp -m multiport --dports 3306 -m state --state NEW -j ACCEPT -m comment --comment 100 mysql_haproxy ipv6'",
> "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 mysql_haproxy]/Firewall[100 mysql_haproxy ipv6]/ensure: created",
> "Debug: Firewall[100 mysql_haproxy ipv6](provider=ip6tables): [flush]",
> "Debug: Firewall[100 mysql_haproxy ipv6](provider=ip6tables): [persist_iptables]",
> "Debug: Executing: '/usr/libexec/iptables/ip6tables.init save'",
> "Debug: /Firewall[100 mysql_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 mysql_haproxy] will propagate my refresh event",
> "Info: Tripleo::Firewall::Rule[100 mysql_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 mysql_haproxy]",
> "Debug: Concat::Fragment[haproxy-redis_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Concat::Fragment[haproxy-redis_listen_block]: Resource is being skipped, unscheduling all events",
> "Debug: Concat::Fragment[haproxy-redis_balancermember_redis]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Concat::Fragment[haproxy-redis_balancermember_redis]: Resource is being skipped, unscheduling all events",
> "Debug: Firewall[100 redis_haproxy ipv4](provider=iptables): Inserting rule 100 redis_haproxy ipv4",
> "Debug: Firewall[100 redis_haproxy ipv4](provider=iptables): [insert_order]",
> "Debug: Firewall[100 redis_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall",
> "Debug: Executing: '/usr/sbin/iptables -I INPUT 6 --wait -t filter -p tcp -m multiport --dports 6379 -m state --state NEW -j ACCEPT -m comment --comment 100 redis_haproxy ipv4'",
> "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 redis_haproxy]/Firewall[100 redis_haproxy ipv4]/ensure: created",
> "Debug: Firewall[100 redis_haproxy ipv4](provider=iptables): [flush]",
> "Debug: Firewall[100 redis_haproxy ipv4](provider=iptables): [persist_iptables]",
> "Debug: /Firewall[100 redis_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 redis_haproxy] will propagate my refresh event",
> "Debug: Firewall[100 redis_haproxy ipv6](provider=ip6tables): Inserting rule 100 redis_haproxy ipv6",
> "Debug: Firewall[100 redis_haproxy ipv6](provider=ip6tables): [insert_order]",
> "Debug: Firewall[100 redis_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall",
> "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 7 --wait -t filter -p tcp -m multiport --dports 6379 -m state --state NEW -j ACCEPT -m comment --comment 100 redis_haproxy ipv6'",
> "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 redis_haproxy]/Firewall[100 redis_haproxy ipv6]/ensure: created",
> "Debug: Firewall[100 redis_haproxy ipv6](provider=ip6tables): [flush]",
> "Debug: Firewall[100 redis_haproxy ipv6](provider=ip6tables): [persist_iptables]",
> "Debug: /Firewall[100 redis_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 redis_haproxy] will propagate my refresh event",
> "Info: Tripleo::Firewall::Rule[100 redis_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 redis_haproxy]",
> "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-d909ca returned ",
> "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-d909ca property show | grep haproxy-role | grep controller-0 | grep true > /dev/null 2>&1",
> "Debug: property exists: property show | grep haproxy-role | grep controller-0 | grep true > /dev/null 2>&1 -> false",
> "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1sfl6bw returned ",
> "Debug: try 1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1sfl6bw property set --node controller-0 haproxy-role=true",
> "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1sfl6bw diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1sfl6bw.orig returned 0 -> CIB updated",
> "Debug: property create: property set --node controller-0 haproxy-role=true -> ",
> "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Pacemaker::Property[haproxy-role-controller-0]/Pcmk_property[property-controller-0-haproxy-role]/ensure: created",
> "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Pacemaker::Property[haproxy-role-controller-0]/Pcmk_property[property-controller-0-haproxy-role]: The container Pacemaker::Property[haproxy-role-controller-0] will propagate my refresh event",
> "Info: Pacemaker::Property[haproxy-role-controller-0]: Unscheduling all events on Pacemaker::Property[haproxy-role-controller-0]",
> "Debug: Pacemaker::Resource::Ip[control_vip]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Pacemaker::Resource::Ip[control_vip]: Resource is being skipped, unscheduling all events",
> "Debug: Pacemaker::Resource::Ip[public_vip]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Pacemaker::Resource::Ip[public_vip]: Resource is being skipped, unscheduling all events",
> "Debug: Pacemaker::Resource::Ip[redis_vip]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Pacemaker::Resource::Ip[redis_vip]: Resource is being skipped, unscheduling all events",
> "Debug: Pacemaker::Resource::Ip[internal_api_vip]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Pacemaker::Resource::Ip[internal_api_vip]: Resource is being skipped, unscheduling all events",
> "Debug: Pacemaker::Resource::Ip[storage_vip]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Pacemaker::Resource::Ip[storage_vip]: Resource is being skipped, unscheduling all events",
> "Debug: Pacemaker::Resource::Ip[storage_mgmt_vip]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Pacemaker::Resource::Ip[storage_mgmt_vip]: Resource is being skipped, unscheduling all events",
> "Debug: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Install[haproxy]/Package[haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Install[haproxy]/Package[haproxy]: Resource is being skipped, unscheduling all events",
> "Debug: Haproxy::Config[haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Haproxy::Config[haproxy]: Resource is being skipped, unscheduling all events",
> "Debug: Concat[/etc/haproxy/haproxy.cfg]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Concat[/etc/haproxy/haproxy.cfg]: Resource is being skipped, unscheduling all events",
> "Debug: Concat::Fragment[haproxy-00-header]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Concat::Fragment[haproxy-00-header]: Resource is being skipped, unscheduling all events",
> "Debug: Concat::Fragment[haproxy-haproxy-base]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Concat::Fragment[haproxy-haproxy-base]: Resource is being skipped, unscheduling all events",
> "Debug: Concat::Fragment[haproxy-keystone_admin_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Concat::Fragment[haproxy-keystone_admin_listen_block]: Resource is being skipped, unscheduling all events",
> "Debug: Concat::Fragment[haproxy-keystone_admin_balancermember_keystone_admin]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Concat::Fragment[haproxy-keystone_admin_balancermember_keystone_admin]: Resource is being skipped, unscheduling all events",
> "Debug: Firewall[100 keystone_admin_haproxy ipv4](provider=iptables): Inserting rule 100 keystone_admin_haproxy ipv4",
> "Debug: Firewall[100 keystone_admin_haproxy ipv4](provider=iptables): [insert_order]",
> "Debug: Firewall[100 keystone_admin_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall",
> "Debug: Executing: '/usr/sbin/iptables -I INPUT 5 --wait -t filter -p tcp -m multiport --dports 35357 -m state --state NEW -j ACCEPT -m comment --comment 100 keystone_admin_haproxy ipv4'",
> "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_admin]/Tripleo::Firewall::Rule[100 keystone_admin_haproxy]/Firewall[100 keystone_admin_haproxy ipv4]/ensure: created",
> "Debug: Firewall[100 keystone_admin_haproxy ipv4](provider=iptables): [flush]",
> "Debug: Firewall[100 keystone_admin_haproxy ipv4](provider=iptables): [persist_iptables]",
> "Debug: /Firewall[100 keystone_admin_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 keystone_admin_haproxy] will propagate my refresh event",
> "Debug: Firewall[100 keystone_admin_haproxy ipv6](provider=ip6tables): Inserting rule 100 keystone_admin_haproxy ipv6",
> "Debug: Firewall[100 keystone_admin_haproxy ipv6](provider=ip6tables): [insert_order]",
> "Debug: Firewall[100 keystone_admin_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall",
> "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 6 --wait -t filter -p tcp -m multiport --dports 35357 -m state --state NEW -j ACCEPT -m comment --comment 100 keystone_admin_haproxy ipv6'",
> "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_admin]/Tripleo::Firewall::Rule[100 keystone_admin_haproxy]/Firewall[100 keystone_admin_haproxy ipv6]/ensure: created",
> "Debug: Firewall[100 keystone_admin_haproxy ipv6](provider=ip6tables): [flush]",
> "Debug: Firewall[100 keystone_admin_haproxy ipv6](provider=ip6tables): [persist_iptables]",
> "Debug: /Firewall[100 keystone_admin_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 keystone_admin_haproxy] will propagate my refresh event",
> "Info: Tripleo::Firewall::Rule[100 keystone_admin_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 keystone_admin_haproxy]",
> "Debug: Concat::Fragment[haproxy-keystone_public_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Concat::Fragment[haproxy-keystone_public_listen_block]: Resource is being skipped, unscheduling all events",
> "Debug: Concat::Fragment[haproxy-keystone_public_balancermember_keystone_public]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Concat::Fragment[haproxy-keystone_public_balancermember_keystone_public]: Resource is being skipped, unscheduling all events",
> "Debug: Firewall[100 keystone_public_haproxy ipv4](provider=iptables): Inserting rule 100 keystone_public_haproxy ipv4",
> "Debug: Firewall[100 keystone_public_haproxy ipv4](provider=iptables): [insert_order]",
> "Debug: Firewall[100 keystone_public_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall",
> "Debug: Executing: '/usr/sbin/iptables -I INPUT 6 --wait -t filter -p tcp -m multiport --dports 5000 -m state --state NEW -j ACCEPT -m comment --comment 100 keystone_public_haproxy ipv4'",
> "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy]/Firewall[100 keystone_public_haproxy ipv4]/ensure: created",
> "Debug: Firewall[100 keystone_public_haproxy ipv4](provider=iptables): [flush]",
> "Debug: Firewall[100 keystone_public_haproxy ipv4](provider=iptables): [persist_iptables]",
> "Debug: /Firewall[100 keystone_public_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 keystone_public_haproxy] will propagate my refresh event",
> "Debug: Firewall[100 keystone_public_haproxy ipv6](provider=ip6tables): Inserting rule 100 keystone_public_haproxy ipv6",
> "Debug: Firewall[100 keystone_public_haproxy ipv6](provider=ip6tables): [insert_order]",
> "Debug: Firewall[100 keystone_public_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall",
> "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 7 --wait -t filter -p tcp -m multiport --dports 5000 -m state --state NEW -j ACCEPT -m comment --comment 100 keystone_public_haproxy ipv6'",
> "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy]/Firewall[100 keystone_public_haproxy ipv6]/ensure: created",
> "Debug: Firewall[100 keystone_public_haproxy ipv6](provider=ip6tables): [flush]",
> "Debug: Firewall[100 keystone_public_haproxy ipv6](provider=ip6tables): [persist_iptables]",
> "Debug: /Firewall[100 keystone_public_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 keystone_public_haproxy] will propagate my refresh event",
> "Info: Tripleo::Firewall::Rule[100 keystone_public_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 keystone_public_haproxy]",
> "Debug: Firewall[100 keystone_public_haproxy_ssl ipv4](provider=iptables): Inserting rule 100 keystone_public_haproxy_ssl ipv4",
> "Debug: Firewall[100 keystone_public_haproxy_ssl ipv4](provider=iptables): [insert_order]",
> "Debug: Firewall[100 keystone_public_haproxy_ssl ipv4](provider=iptables): Current resource: Puppet::Type::Firewall",
> "Debug: Executing: '/usr/sbin/iptables -I INPUT 7 --wait -t filter -p tcp -m multiport --dports 13000 -m state --state NEW -j ACCEPT -m comment --comment 100 keystone_public_haproxy_ssl ipv4'",
> "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl]/Firewall[100 keystone_public_haproxy_ssl ipv4]/ensure: created",
> "Debug: Firewall[100 keystone_public_haproxy_ssl ipv4](provider=iptables): [flush]",
> "Debug: Firewall[100 keystone_public_haproxy_ssl ipv4](provider=iptables): [persist_iptables]",
> "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv4]: The container Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl] will propagate my refresh event",
> "Debug: Firewall[100 keystone_public_haproxy_ssl ipv6](provider=ip6tables): Inserting rule 100 keystone_public_haproxy_ssl ipv6",
> "Debug: Firewall[100 keystone_public_haproxy_ssl ipv6](provider=ip6tables): [insert_order]",
> "Debug: Firewall[100 keystone_public_haproxy_ssl ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall",
> "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 8 --wait -t filter -p tcp -m multiport --dports 13000 -m state --state NEW -j ACCEPT -m comment --comment 100 keystone_public_haproxy_ssl ipv6'",
> "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl]/Firewall[100 keystone_public_haproxy_ssl ipv6]/ensure: created",
> "Debug: Firewall[100 keystone_public_haproxy_ssl ipv6](provider=ip6tables): [flush]",
> "Debug: Firewall[100 keystone_public_haproxy_ssl ipv6](provider=ip6tables): [persist_iptables]",
> "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv6]: The container Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl] will propagate my refresh event",
> "Info: Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl]: Unscheduling all events on Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl]",
> "Debug: Concat::Fragment[haproxy-neutron_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Concat::Fragment[haproxy-neutron_listen_block]: Resource is being skipped, unscheduling all events",
> "Debug: Concat::Fragment[haproxy-neutron_balancermember_neutron]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Concat::Fragment[haproxy-neutron_balancermember_neutron]: Resource is being skipped, unscheduling all events",
> "Debug: Firewall[100 neutron_haproxy ipv4](provider=iptables): Inserting rule 100 neutron_haproxy ipv4",
> "Debug: Firewall[100 neutron_haproxy ipv4](provider=iptables): [insert_order]",
> "Debug: Firewall[100 neutron_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall",
> "Debug: Executing: '/usr/sbin/iptables -I INPUT 9 --wait -t filter -p tcp -m multiport --dports 9696 -m state --state NEW -j ACCEPT -m comment --comment 100 neutron_haproxy ipv4'",
> "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy]/Firewall[100 neutron_haproxy ipv4]/ensure: created",
> "Debug: Firewall[100 neutron_haproxy ipv4](provider=iptables): [flush]",
> "Debug: Firewall[100 neutron_haproxy ipv4](provider=iptables): [persist_iptables]",
> "Debug: /Firewall[100 neutron_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 neutron_haproxy] will propagate my refresh event",
> "Debug: Firewall[100 neutron_haproxy ipv6](provider=ip6tables): Inserting rule 100 neutron_haproxy ipv6",
> "Debug: Firewall[100 neutron_haproxy ipv6](provider=ip6tables): [insert_order]",
> "Debug: Firewall[100 neutron_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall",
> "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 10 --wait -t filter -p tcp -m multiport --dports 9696 -m state --state NEW -j ACCEPT -m comment --comment 100 neutron_haproxy ipv6'",
> "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy]/Firewall[100 neutron_haproxy ipv6]/ensure: created",
> "Debug: Firewall[100 neutron_haproxy ipv6](provider=ip6tables): [flush]",
> "Debug: Firewall[100 neutron_haproxy ipv6](provider=ip6tables): [persist_iptables]",
> "Debug: /Firewall[100 neutron_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 neutron_haproxy] will propagate my refresh event",
> "Info: Tripleo::Firewall::Rule[100 neutron_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 neutron_haproxy]",
> "Debug: Firewall[100 neutron_haproxy_ssl ipv4](provider=iptables): Inserting rule 100 neutron_haproxy_ssl ipv4",
> "Debug: Firewall[100 neutron_haproxy_ssl ipv4](provider=iptables): [insert_order]",
> "Debug: Firewall[100 neutron_haproxy_ssl ipv4](provider=iptables): Current resource: Puppet::Type::Firewall",
> "Debug: Executing: '/usr/sbin/iptables -I INPUT 10 --wait -t filter -p tcp -m multiport --dports 13696 -m state --state NEW -j ACCEPT -m comment --comment 100 neutron_haproxy_ssl ipv4'",
> "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy_ssl]/Firewall[100 neutron_haproxy_ssl ipv4]/ensure: created",
> "Debug: Firewall[100 neutron_haproxy_ssl ipv4](provider=iptables): [flush]",
> "Debug: Firewall[100 neutron_haproxy_ssl ipv4](provider=iptables): [persist_iptables]",
> "Debug: /Firewall[100 neutron_haproxy_ssl ipv4]: The container Tripleo::Firewall::Rule[100 neutron_haproxy_ssl] will propagate my refresh event",
> "Debug: Firewall[100 neutron_haproxy_ssl ipv6](provider=ip6tables): Inserting rule 100 neutron_haproxy_ssl ipv6",
> "Debug: Firewall[100 neutron_haproxy_ssl ipv6](provider=ip6tables): [insert_order]",
> "Debug: Firewall[100 neutron_haproxy_ssl ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall",
> "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 11 --wait -t filter -p tcp -m multiport --dports 13696 -m state --state NEW -j ACCEPT -m comment --comment 100 neutron_haproxy_ssl ipv6'",
> "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy_ssl]/Firewall[100 neutron_haproxy_ssl ipv6]/ensure: created",
> "Debug: Firewall[100 neutron_haproxy_ssl ipv6](provider=ip6tables): [flush]",
> "Debug: Firewall[100 neutron_haproxy_ssl ipv6](provider=ip6tables): [persist_iptables]",
> "Debug: /Firewall[100 neutron_haproxy_ssl ipv6]: The container Tripleo::Firewall::Rule[100 neutron_haproxy_ssl] will propagate my refresh event",
> "Info: Tripleo::Firewall::Rule[100 neutron_haproxy_ssl]: Unscheduling all events on Tripleo::Firewall::Rule[100 neutron_haproxy_ssl]",
> "Debug: Concat::Fragment[haproxy-cinder_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Concat::Fragment[haproxy-cinder_listen_block]: Resource is being skipped, unscheduling all events",
> "Debug: Concat::Fragment[haproxy-cinder_balancermember_cinder]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Concat::Fragment[haproxy-cinder_balancermember_cinder]: Resource is being skipped, unscheduling all events",
> "Debug: Firewall[100 cinder_haproxy ipv4](provider=iptables): Inserting rule 100 cinder_haproxy ipv4",
> "Debug: Firewall[100 cinder_haproxy ipv4](provider=iptables): [insert_order]",
> "Debug: Firewall[100 cinder_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall",
> "Debug: Executing: '/usr/sbin/iptables -I INPUT 5 --wait -t filter -p tcp -m multiport --dports 8776 -m state --state NEW -j ACCEPT -m comment --comment 100 cinder_haproxy ipv4'",
> "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy]/Firewall[100 cinder_haproxy ipv4]/ensure: created",
> "Debug: Firewall[100 cinder_haproxy ipv4](provider=iptables): [flush]",
> "Debug: Firewall[100 cinder_haproxy ipv4](provider=iptables): [persist_iptables]",
> "Debug: /Firewall[100 cinder_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 cinder_haproxy] will propagate my refresh event",
> "Debug: Firewall[100 cinder_haproxy ipv6](provider=ip6tables): Inserting rule 100 cinder_haproxy ipv6",
> "Debug: Firewall[100 cinder_haproxy ipv6](provider=ip6tables): [insert_order]",
> "Debug: Firewall[100 cinder_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall",
> "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 6 --wait -t filter -p tcp -m multiport --dports 8776 -m state --state NEW -j ACCEPT -m comment --comment 100 cinder_haproxy ipv6'",
> "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy]/Firewall[100 cinder_haproxy ipv6]/ensure: created",
> "Debug: Firewall[100 cinder_haproxy ipv6](provider=ip6tables): [flush]",
> "Debug: Firewall[100 cinder_haproxy ipv6](provider=ip6tables): [persist_iptables]",
> "Debug: /Firewall[100 cinder_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 cinder_haproxy] will propagate my refresh event",
> "Info: Tripleo::Firewall::Rule[100 cinder_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 cinder_haproxy]",
> "Debug: Firewall[100 cinder_haproxy_ssl ipv4](provider=iptables): Inserting rule 100 cinder_haproxy_ssl ipv4",
> "Debug: Firewall[100 cinder_haproxy_ssl ipv4](provider=iptables): [insert_order]",
> "Debug: Firewall[100 cinder_haproxy_ssl ipv4](provider=iptables): Current resource: Puppet::Type::Firewall",
> "Debug: Executing: '/usr/sbin/iptables -I INPUT 6 --wait -t filter -p tcp -m multiport --dports 13776 -m state --state NEW -j ACCEPT -m comment --comment 100 cinder_haproxy_ssl ipv4'",
> "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy_ssl]/Firewall[100 cinder_haproxy_ssl ipv4]/ensure: created",
> "Debug: Firewall[100 cinder_haproxy_ssl ipv4](provider=iptables): [flush]",
> "Debug: Firewall[100 cinder_haproxy_ssl ipv4](provider=iptables): [persist_iptables]",
> "Debug: /Firewall[100 cinder_haproxy_ssl ipv4]: The container Tripleo::Firewall::Rule[100 cinder_haproxy_ssl] will propagate my refresh event",
> "Debug: Firewall[100 cinder_haproxy_ssl ipv6](provider=ip6tables): Inserting rule 100 cinder_haproxy_ssl ipv6",
> "Debug: Firewall[100 cinder_haproxy_ssl ipv6](provider=ip6tables): [insert_order]",
> "Debug: Firewall[100 cinder_haproxy_ssl ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall",
> "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 7 --wait -t filter -p tcp -m multiport --dports 13776 -m state --state NEW -j ACCEPT -m comment --comment 100 cinder_haproxy_ssl ipv6'",
> "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy_ssl]/Firewall[100 cinder_haproxy_ssl ipv6]/ensure: created",
> "Debug: Firewall[100 cinder_haproxy_ssl ipv6](provider=ip6tables): [flush]",
> "Debug: Firewall[100 cinder_haproxy_ssl ipv6](provider=ip6tables): [persist_iptables]",
> "Debug: /Firewall[100 cinder_haproxy_ssl ipv6]: The container Tripleo::Firewall::Rule[100 cinder_haproxy_ssl] will propagate my refresh event",
> "Info: Tripleo::Firewall::Rule[100 cinder_haproxy_ssl]: Unscheduling all events on Tripleo::Firewall::Rule[100 cinder_haproxy_ssl]",
> "Debug: Concat::Fragment[haproxy-sahara_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Concat::Fragment[haproxy-sahara_listen_block]: Resource is being skipped, unscheduling all events",
> "Debug: Concat::Fragment[haproxy-sahara_balancermember_sahara]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Concat::Fragment[haproxy-sahara_balancermember_sahara]: Resource is being skipped, unscheduling all events",
> "Debug: Firewall[100 sahara_haproxy ipv4](provider=iptables): Inserting rule 100 sahara_haproxy ipv4",
> "Debug: Firewall[100 sahara_haproxy ipv4](provider=iptables): [insert_order]",
> "Debug: Firewall[100 sahara_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall",
> "Debug: Executing: '/usr/sbin/iptables -I INPUT 14 --wait -t filter -p tcp -m multiport --dports 8386 -m state --state NEW -j ACCEPT -m comment --comment 100 sahara_haproxy ipv4'",
> "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy]/Firewall[100 sahara_haproxy ipv4]/ensure: created",
> "Debug: Firewall[100 sahara_haproxy ipv4](provider=iptables): [flush]",
> "Debug: Firewall[100 sahara_haproxy ipv4](provider=iptables): [persist_iptables]",
> "Debug: /Firewall[100 sahara_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 sahara_haproxy] will propagate my refresh event",
> "Debug: Firewall[100 sahara_haproxy ipv6](provider=ip6tables): Inserting rule 100 sahara_haproxy ipv6",
> "Debug: Firewall[100 sahara_haproxy ipv6](provider=ip6tables): [insert_order]",
> "Debug: Firewall[100 sahara_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall",
> "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 15 --wait -t filter -p tcp -m multiport --dports 8386 -m state --state NEW -j ACCEPT -m comment --comment 100 sahara_haproxy ipv6'",
> "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy]/Firewall[100 sahara_haproxy ipv6]/ensure: created",
> "Debug: Firewall[100 sahara_haproxy ipv6](provider=ip6tables): [flush]",
> "Debug: Firewall[100 sahara_haproxy ipv6](provider=ip6tables): [persist_iptables]",
> "Debug: /Firewall[100 sahara_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 sahara_haproxy] will propagate my refresh event",
> "Info: Tripleo::Firewall::Rule[100 sahara_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 sahara_haproxy]",
> "Debug: Firewall[100 sahara_haproxy_ssl ipv4](provider=iptables): Inserting rule 100 sahara_haproxy_ssl ipv4",
> "Debug: Firewall[100 sahara_haproxy_ssl ipv4](provider=iptables): [insert_order]",
> "Debug: Firewall[100 sahara_haproxy_ssl ipv4](provider=iptables): Current resource: Puppet::Type::Firewall",
> "Debug: Executing: '/usr/sbin/iptables -I INPUT 15 --wait -t filter -p tcp -m multiport --dports 13386 -m state --state NEW -j ACCEPT -m comment --comment 100 sahara_haproxy_ssl ipv4'",
> "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy_ssl]/Firewall[100 sahara_haproxy_ssl ipv4]/ensure: created",
> "Debug: Firewall[100 sahara_haproxy_ssl ipv4](provider=iptables): [flush]",
> "Debug: Firewall[100 sahara_haproxy_ssl ipv4](provider=iptables): [persist_iptables]",
> "Debug: /Firewall[100 sahara_haproxy_ssl ipv4]: The container Tripleo::Firewall::Rule[100 sahara_haproxy_ssl] will propagate my refresh event",
> "Debug: Firewall[100 sahara_haproxy_ssl ipv6](provider=ip6tables): Inserting rule 100 sahara_haproxy_ssl ipv6",
> "Debug: Firewall[100 sahara_haproxy_ssl ipv6](provider=ip6tables): [insert_order]",
> "Debug: Firewall[100 sahara_haproxy_ssl ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall",
> "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 16 --wait -t filter -p tcp -m multiport --dports 13386 -m state --state NEW -j ACCEPT -m comment --comment 100 sahara_haproxy_ssl ipv6'",
> "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy_ssl]/Firewall[100 sahara_haproxy_ssl ipv6]/ensure: created",
> "Debug: Firewall[100 sahara_haproxy_ssl ipv6](provider=ip6tables): [flush]",
> "Debug: Firewall[100 sahara_haproxy_ssl ipv6](provider=ip6tables): [persist_iptables]",
> "Debug: /Firewall[100 sahara_haproxy_ssl ipv6]: The container Tripleo::Firewall::Rule[100 sahara_haproxy_ssl] will propagate my refresh event",
> "Info: Tripleo::Firewall::Rule[100 sahara_haproxy_ssl]: Unscheduling all events on Tripleo::Firewall::Rule[100 sahara_haproxy_ssl]",
> "Debug: Concat::Fragment[haproxy-glance_api_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Concat::Fragment[haproxy-glance_api_listen_block]: Resource is being skipped, unscheduling all events",
> "Debug: Concat::Fragment[haproxy-glance_api_balancermember_glance_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Concat::Fragment[haproxy-glance_api_balancermember_glance_api]: Resource is being skipped, unscheduling all events",
> "Debug: Firewall[100 glance_api_haproxy ipv4](provider=iptables): Inserting rule 100 glance_api_haproxy ipv4",
> "Debug: Firewall[100 glance_api_haproxy ipv4](provider=iptables): [insert_order]",
> "Debug: Firewall[100 glance_api_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall",
> "Debug: Executing: '/usr/sbin/iptables -I INPUT 7 --wait -t filter -p tcp -m multiport --dports 9292 -m state --state NEW -j ACCEPT -m comment --comment 100 glance_api_haproxy ipv4'",
> "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy]/Firewall[100 glance_api_haproxy ipv4]/ensure: created",
> "Debug: Firewall[100 glance_api_haproxy ipv4](provider=iptables): [flush]",
> "Debug: Firewall[100 glance_api_haproxy ipv4](provider=iptables): [persist_iptables]",
> "Debug: /Firewall[100 glance_api_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 glance_api_haproxy] will propagate my refresh event",
> "Debug: Firewall[100 glance_api_haproxy ipv6](provider=ip6tables): Inserting rule 100 glance_api_haproxy ipv6",
> "Debug: Firewall[100 glance_api_haproxy ipv6](provider=ip6tables): [insert_order]",
> "Debug: Firewall[100 glance_api_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall",
> "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 8 --wait -t filter -p tcp -m multiport --dports 9292 -m state --state NEW -j ACCEPT -m comment --comment 100 glance_api_haproxy ipv6'",
> "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy]/Firewall[100 glance_api_haproxy ipv6]/ensure: created",
> "Debug: Firewall[100 glance_api_haproxy ipv6](provider=ip6tables): [flush]",
> "Debug: Firewall[100 glance_api_haproxy ipv6](provider=ip6tables): [persist_iptables]",
> "Debug: /Firewall[100 glance_api_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 glance_api_haproxy] will propagate my refresh event",
> "Info: Tripleo::Firewall::Rule[100 glance_api_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 glance_api_haproxy]",
> "Debug: Firewall[100 glance_api_haproxy_ssl ipv4](provider=iptables): Inserting rule 100 glance_api_haproxy_ssl ipv4",
> "Debug: Firewall[100 glance_api_haproxy_ssl ipv4](provider=iptables): [insert_order]",
> "Debug: Firewall[100 glance_api_haproxy_ssl ipv4](provider=iptables): Current resource: Puppet::Type::Firewall",
> "Debug: Executing: '/usr/sbin/iptables -I INPUT 8 --wait -t filter -p tcp -m multiport --dports 13292 -m state --state NEW -j ACCEPT -m comment --comment 100 glance_api_haproxy_ssl ipv4'",
> "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl]/Firewall[100 glance_api_haproxy_ssl ipv4]/ensure: created",
> "Debug: Firewall[100 glance_api_haproxy_ssl ipv4](provider=iptables): [flush]",
> "Debug: Firewall[100 glance_api_haproxy_ssl ipv4](provider=iptables): [persist_iptables]",
> "Debug: /Firewall[100 glance_api_haproxy_ssl ipv4]: The container Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl] will propagate my refresh event",
> "Debug: Firewall[100 glance_api_haproxy_ssl ipv6](provider=ip6tables): Inserting rule 100 glance_api_haproxy_ssl ipv6",
> "Debug: Firewall[100 glance_api_haproxy_ssl ipv6](provider=ip6tables): [insert_order]",
> "Debug: Firewall[100 glance_api_haproxy_ssl ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall",
> "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 9 --wait -t filter -p tcp -m multiport --dports 13292 -m state --state NEW -j ACCEPT -m comment --comment 100 glance_api_haproxy_ssl ipv6'",
> "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl]/Firewall[100 glance_api_haproxy_ssl ipv6]/ensure: created",
> "Debug: Firewall[100 glance_api_haproxy_ssl ipv6](provider=ip6tables): [flush]",
> "Debug: Firewall[100 glance_api_haproxy_ssl ipv6](provider=ip6tables): [persist_iptables]",
> "Debug: /Firewall[100 glance_api_haproxy_ssl ipv6]: The container Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl] will propagate my refresh event",
> "Info: Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl]: Unscheduling all events on Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl]",
> "Debug: Concat::Fragment[haproxy-nova_osapi_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Concat::Fragment[haproxy-nova_osapi_listen_block]: Resource is being skipped, unscheduling all events",
> "Debug: Concat::Fragment[haproxy-nova_osapi_balancermember_nova_osapi]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Concat::Fragment[haproxy-nova_osapi_balancermember_nova_osapi]: Resource is being skipped, unscheduling all events",
> "Debug: Firewall[100 nova_osapi_haproxy ipv4](provider=iptables): Inserting rule 100 nova_osapi_haproxy ipv4",
> "Debug: Firewall[100 nova_osapi_haproxy ipv4](provider=iptables): [insert_order]",
> "Debug: Firewall[100 nova_osapi_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall",
> "Debug: Executing: '/usr/sbin/iptables -I INPUT 15 --wait -t filter -p tcp -m multiport --dports 8774 -m state --state NEW -j ACCEPT -m comment --comment 100 nova_osapi_haproxy ipv4'",
> "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy]/Firewall[100 nova_osapi_haproxy ipv4]/ensure: created",
> "Debug: Firewall[100 nova_osapi_haproxy ipv4](provider=iptables): [flush]",
> "Debug: Firewall[100 nova_osapi_haproxy ipv4](provider=iptables): [persist_iptables]",
> "Debug: /Firewall[100 nova_osapi_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 nova_osapi_haproxy] will propagate my refresh event",
> "Debug: Firewall[100 nova_osapi_haproxy ipv6](provider=ip6tables): Inserting rule 100 nova_osapi_haproxy ipv6",
> "Debug: Firewall[100 nova_osapi_haproxy ipv6](provider=ip6tables): [insert_order]",
> "Debug: Firewall[100 nova_osapi_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall",
> "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 16 --wait -t filter -p tcp -m multiport --dports 8774 -m state --state NEW -j ACCEPT -m comment --comment 100 nova_osapi_haproxy ipv6'",
> "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy]/Firewall[100 nova_osapi_haproxy ipv6]/ensure: created",
> "Debug: Firewall[100 nova_osapi_haproxy ipv6](provider=ip6tables): [flush]",
> "Debug: Firewall[100 nova_osapi_haproxy ipv6](provider=ip6tables): [persist_iptables]",
> "Debug: /Firewall[100 nova_osapi_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 nova_osapi_haproxy] will propagate my refresh event",
> "Info: Tripleo::Firewall::Rule[100 nova_osapi_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 nova_osapi_haproxy]",
> "Debug: Firewall[100 nova_osapi_haproxy_ssl ipv4](provider=iptables): Inserting rule 100 nova_osapi_haproxy_ssl ipv4",
> "Debug: Firewall[100 nova_osapi_haproxy_ssl ipv4](provider=iptables): [insert_order]",
> "Debug: Firewall[100 nova_osapi_haproxy_ssl ipv4](provider=iptables): Current resource: Puppet::Type::Firewall",
> "Debug: Executing: '/usr/sbin/iptables -I INPUT 16 --wait -t filter -p tcp -m multiport --dports 13774 -m state --state NEW -j ACCEPT -m comment --comment 100 nova_osapi_haproxy_ssl ipv4'",
> "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl]/Firewall[100 nova_osapi_haproxy_ssl ipv4]/ensure: created",
> "Debug: Firewall[100 nova_osapi_haproxy_ssl ipv4](provider=iptables): [flush]",
> "Debug: Firewall[100 nova_osapi_haproxy_ssl ipv4](provider=iptables): [persist_iptables]",
> "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv4]: The container Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl] will propagate my refresh event",
> "Debug: Firewall[100 nova_osapi_haproxy_ssl ipv6](provider=ip6tables): Inserting rule 100 nova_osapi_haproxy_ssl ipv6",
> "Debug: Firewall[100 nova_osapi_haproxy_ssl ipv6](provider=ip6tables): [insert_order]",
> "Debug: Firewall[100 nova_osapi_haproxy_ssl ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall",
> "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 17 --wait -t filter -p tcp -m multiport --dports 13774 -m state --state NEW -j ACCEPT -m comment --comment 100 nova_osapi_haproxy_ssl ipv6'",
> "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl]/Firewall[100 nova_osapi_haproxy_ssl ipv6]/ensure: created",
> "Debug: Firewall[100 nova_osapi_haproxy_ssl ipv6](provider=ip6tables): [flush]",
> "Debug: Firewall[100 nova_osapi_haproxy_ssl ipv6](provider=ip6tables): [persist_iptables]",
> "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv6]: The container Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl] will propagate my refresh event",
> "Info: Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl]: Unscheduling all events on Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl]",
> "Debug: Concat::Fragment[haproxy-nova_placement_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Concat::Fragment[haproxy-nova_placement_listen_block]: Resource is being skipped, unscheduling all events",
> "Debug: Concat::Fragment[haproxy-nova_placement_balancermember_nova_placement]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Concat::Fragment[haproxy-nova_placement_balancermember_nova_placement]: Resource is being skipped, unscheduling all events",
> "Debug: Firewall[100 nova_placement_haproxy ipv4](provider=iptables): Inserting rule 100 nova_placement_haproxy ipv4",
> "Debug: Firewall[100 nova_placement_haproxy ipv4](provider=iptables): [insert_order]",
> "Debug: Firewall[100 nova_placement_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall",
> "Debug: Executing: '/usr/sbin/iptables -I INPUT 17 --wait -t filter -p tcp -m multiport --dports 8778 -m state --state NEW -j ACCEPT -m comment --comment 100 nova_placement_haproxy ipv4'",
> "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy]/Firewall[100 nova_placement_haproxy ipv4]/ensure: created",
> "Debug: Firewall[100 nova_placement_haproxy ipv4](provider=iptables): [flush]",
> "Debug: Firewall[100 nova_placement_haproxy ipv4](provider=iptables): [persist_iptables]",
> "Debug: /Firewall[100 nova_placement_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 nova_placement_haproxy] will propagate my refresh event",
> "Debug: Firewall[100 nova_placement_haproxy ipv6](provider=ip6tables): Inserting rule 100 nova_placement_haproxy ipv6",
> "Debug: Firewall[100 nova_placement_haproxy ipv6](provider=ip6tables): [insert_order]",
> "Debug: Firewall[100 nova_placement_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall",
> "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 18 --wait -t filter -p tcp -m multiport --dports 8778 -m state --state NEW -j ACCEPT -m comment --comment 100 nova_placement_haproxy ipv6'",
> "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy]/Firewall[100 nova_placement_haproxy ipv6]/ensure: created",
> "Debug: Firewall[100 nova_placement_haproxy ipv6](provider=ip6tables): [flush]",
> "Debug: Firewall[100 nova_placement_haproxy ipv6](provider=ip6tables): [persist_iptables]",
> "Debug: /Firewall[100 nova_placement_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 nova_placement_haproxy] will propagate my refresh event",
> "Info: Tripleo::Firewall::Rule[100 nova_placement_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 nova_placement_haproxy]",
> "Debug: Firewall[100 nova_placement_haproxy_ssl ipv4](provider=iptables): Inserting rule 100 nova_placement_haproxy_ssl ipv4",
> "Debug: Firewall[100 nova_placement_haproxy_ssl ipv4](provider=iptables): [insert_order]",
> "Debug: Firewall[100 nova_placement_haproxy_ssl ipv4](provider=iptables): Current resource: Puppet::Type::Firewall",
> "Debug: Executing: '/usr/sbin/iptables -I INPUT 18 --wait -t filter -p tcp -m multiport --dports 13778 -m state --state NEW -j ACCEPT -m comment --comment 100 nova_placement_haproxy_ssl ipv4'",
> "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl]/Firewall[100 nova_placement_haproxy_ssl ipv4]/ensure: created",
> "Debug: Firewall[100 nova_placement_haproxy_ssl ipv4](provider=iptables): [flush]",
> "Debug: Firewall[100 nova_placement_haproxy_ssl ipv4](provider=iptables): [persist_iptables]",
> "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv4]: The container Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl] will propagate my refresh event",
> "Debug: Firewall[100 nova_placement_haproxy_ssl ipv6](provider=ip6tables): Inserting rule 100 nova_placement_haproxy_ssl ipv6",
> "Debug: Firewall[100 nova_placement_haproxy_ssl ipv6](provider=ip6tables): [insert_order]",
> "Debug: Firewall[100 nova_placement_haproxy_ssl ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall",
> "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 19 --wait -t filter -p tcp -m multiport --dports 13778 -m state --state NEW -j ACCEPT -m comment --comment 100 nova_placement_haproxy_ssl ipv6'",
> "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl]/Firewall[100 nova_placement_haproxy_ssl ipv6]/ensure: created",
> "Debug: Firewall[100 nova_placement_haproxy_ssl ipv6](provider=ip6tables): [flush]",
> "Debug: Firewall[100 nova_placement_haproxy_ssl ipv6](provider=ip6tables): [persist_iptables]",
> "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv6]: The container Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl] will propagate my refresh event",
> "Info: Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl]: Unscheduling all events on Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl]",
> "Debug: Concat::Fragment[haproxy-nova_metadata_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Concat::Fragment[haproxy-nova_metadata_listen_block]: Resource is being skipped, unscheduling all events",
> "Debug: Concat::Fragment[haproxy-nova_metadata_balancermember_nova_metadata]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation",
> "Debug: Concat::Fragment[haproxy-nova_metadata_balancermember_nova_metadata]: Resource is being skipped, unscheduling all events",
> "Debug: Firewall[100 nova_metadata_haproxy ipv4](provider=iptables): Inserting rule 100 nova_metadata_haproxy ipv4",
> "Debug: Firewall[100 nova_metadata_haproxy ipv4](provider=iptables): [insert_order]",
> "Debug: Firewall[100 nova_metadata_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall",
> "Debug: Executing: '/usr/sbin/iptables -I INPUT 15 --wait -t filter -p tcp -m multiport --dports 8775 -m state --state NEW -j ACCEPT -m comment --comment 100 nova_metadata_haproxy ipv4'",
> "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_metadata]/Tripleo::Firewall::Rule[100 nova_metadata_haproxy]/Firewall[100 nova_metadata_haproxy ipv4]/ensure: created",
> "Debug: Firewall[100 nova_metadata_haproxy ipv4](provider=iptables): [flush]",
> "Debug: Firewall[100 nova_metadata_haproxy ipv4](provider=iptables): [persist_iptables]",
> "Debug: /Firewall[100 nova_metadata_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 nova_metadata_haproxy] will propagate my refresh event",
> "Debug: Firewall[100 nova_metadata_haproxy ipv6](provider=ip6tables): Inserting rule 100 nova_metadata_haproxy ipv6",
> "Debug: Firewall[100 nova_metadata_haproxy ipv6](provider=ip6tables): [insert_order]",
> "Debug: Firewall[100 nova_metadata_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall",
> "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 16 --wait -t filter -p tcp -m multiport --dports 8775 -m state --state NEW -j ACCEPT -m comment --comment 100 nova_metadata_haproxy ipv6'",
> "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_metadata]/Tripleo::Firewall::Rule[100 nova_metadata_haproxy]/Firewall[100 nova_metadata_haproxy ipv6]/ensure: created",
> "Debug: Firewall[100 nova_metadata_haproxy ipv6](provider=ip6tables): [flush]",
> "Debug: Firewall[100 nova_metadata_haproxy ipv6](provider=ip6tables): [persist_iptables]",
> "Debug: /Firewall[100 nova_metadata_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 nova_metadata_haproxy] will propagate my refresh event",
> "Info: Tripleo::Firewall::Rule[100 nova_metadata_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100
nova_metadata_haproxy]", > "Debug: Concat::Fragment[haproxy-nova_novncproxy_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-nova_novncproxy_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-nova_novncproxy_balancermember_nova_novncproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-nova_novncproxy_balancermember_nova_novncproxy]: Resource is being skipped, unscheduling all events", > "Debug: Firewall[100 nova_novncproxy_haproxy ipv4](provider=iptables): Inserting rule 100 nova_novncproxy_haproxy ipv4", > "Debug: Firewall[100 nova_novncproxy_haproxy ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 nova_novncproxy_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 16 --wait -t filter -p tcp -m multiport --dports 6080 -m state --state NEW -j ACCEPT -m comment --comment 100 nova_novncproxy_haproxy ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy]/Firewall[100 nova_novncproxy_haproxy ipv4]/ensure: created", > "Debug: Firewall[100 nova_novncproxy_haproxy ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 nova_novncproxy_haproxy ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy] will propagate my refresh event", > "Debug: Firewall[100 nova_novncproxy_haproxy ipv6](provider=ip6tables): Inserting rule 100 nova_novncproxy_haproxy ipv6", > "Debug: Firewall[100 nova_novncproxy_haproxy ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 nova_novncproxy_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 17 --wait -t filter -p tcp -m multiport --dports 6080 -m state --state NEW -j ACCEPT -m comment --comment 100 nova_novncproxy_haproxy ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy]/Firewall[100 nova_novncproxy_haproxy ipv6]/ensure: created", > "Debug: Firewall[100 nova_novncproxy_haproxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 nova_novncproxy_haproxy ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy]", > "Debug: Firewall[100 nova_novncproxy_haproxy_ssl ipv4](provider=iptables): Inserting rule 100 nova_novncproxy_haproxy_ssl ipv4", > "Debug: Firewall[100 nova_novncproxy_haproxy_ssl ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 nova_novncproxy_haproxy_ssl ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: 
'/usr/sbin/iptables -I INPUT 17 --wait -t filter -p tcp -m multiport --dports 13080 -m state --state NEW -j ACCEPT -m comment --comment 100 nova_novncproxy_haproxy_ssl ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl]/Firewall[100 nova_novncproxy_haproxy_ssl ipv4]/ensure: created", > "Debug: Firewall[100 nova_novncproxy_haproxy_ssl ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 nova_novncproxy_haproxy_ssl ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv4]: The container Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl] will propagate my refresh event", > "Debug: Firewall[100 nova_novncproxy_haproxy_ssl ipv6](provider=ip6tables): Inserting rule 100 nova_novncproxy_haproxy_ssl ipv6", > "Debug: Firewall[100 nova_novncproxy_haproxy_ssl ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 nova_novncproxy_haproxy_ssl ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 18 --wait -t filter -p tcp -m multiport --dports 13080 -m state --state NEW -j ACCEPT -m comment --comment 100 nova_novncproxy_haproxy_ssl ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl]/Firewall[100 nova_novncproxy_haproxy_ssl ipv6]/ensure: created", > "Debug: Firewall[100 nova_novncproxy_haproxy_ssl ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 nova_novncproxy_haproxy_ssl ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv6]: The container Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl]: Unscheduling all events on Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl]", > "Debug: Concat::Fragment[haproxy-aodh_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-aodh_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-aodh_balancermember_aodh]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-aodh_balancermember_aodh]: Resource is being skipped, unscheduling all events", > "Debug: Firewall[100 aodh_haproxy ipv4](provider=iptables): Inserting rule 100 aodh_haproxy ipv4", > "Debug: Firewall[100 aodh_haproxy ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 aodh_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 5 --wait -t filter -p tcp -m multiport --dports 8042 -m state --state NEW -j ACCEPT -m comment --comment 100 aodh_haproxy ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy]/Firewall[100 aodh_haproxy ipv4]/ensure: created", > "Debug: Firewall[100 aodh_haproxy ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 aodh_haproxy 
ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 aodh_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 aodh_haproxy] will propagate my refresh event", > "Debug: Firewall[100 aodh_haproxy ipv6](provider=ip6tables): Inserting rule 100 aodh_haproxy ipv6", > "Debug: Firewall[100 aodh_haproxy ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 aodh_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 6 --wait -t filter -p tcp -m multiport --dports 8042 -m state --state NEW -j ACCEPT -m comment --comment 100 aodh_haproxy ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy]/Firewall[100 aodh_haproxy ipv6]/ensure: created", > "Debug: Firewall[100 aodh_haproxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 aodh_haproxy ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 aodh_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 aodh_haproxy] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 aodh_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 aodh_haproxy]", > "Debug: Firewall[100 aodh_haproxy_ssl ipv4](provider=iptables): Inserting rule 100 aodh_haproxy_ssl ipv4", > "Debug: Firewall[100 aodh_haproxy_ssl ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 aodh_haproxy_ssl ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 6 --wait -t filter -p tcp -m multiport --dports 13042 -m state --state NEW -j ACCEPT -m comment --comment 100 aodh_haproxy_ssl ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy_ssl]/Firewall[100 aodh_haproxy_ssl ipv4]/ensure: created", > "Debug: Firewall[100 aodh_haproxy_ssl ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 aodh_haproxy_ssl ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv4]: The container Tripleo::Firewall::Rule[100 aodh_haproxy_ssl] will propagate my refresh event", > "Debug: Firewall[100 aodh_haproxy_ssl ipv6](provider=ip6tables): Inserting rule 100 aodh_haproxy_ssl ipv6", > "Debug: Firewall[100 aodh_haproxy_ssl ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 aodh_haproxy_ssl ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 7 --wait -t filter -p tcp -m multiport --dports 13042 -m state --state NEW -j ACCEPT -m comment --comment 100 aodh_haproxy_ssl ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy_ssl]/Firewall[100 aodh_haproxy_ssl ipv6]/ensure: created", > "Debug: Firewall[100 aodh_haproxy_ssl ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 aodh_haproxy_ssl ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv6]: The container Tripleo::Firewall::Rule[100 aodh_haproxy_ssl] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 aodh_haproxy_ssl]: Unscheduling all events on Tripleo::Firewall::Rule[100 aodh_haproxy_ssl]", > "Debug: Concat::Fragment[haproxy-panko_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, 
pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-panko_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-panko_balancermember_panko]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-panko_balancermember_panko]: Resource is being skipped, unscheduling all events", > "Debug: Firewall[100 panko_haproxy ipv4](provider=iptables): Inserting rule 100 panko_haproxy ipv4", > "Debug: Firewall[100 panko_haproxy ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 panko_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 24 --wait -t filter -p tcp -m multiport --dports 8977 -m state --state NEW -j ACCEPT -m comment --comment 100 panko_haproxy ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy]/Firewall[100 panko_haproxy ipv4]/ensure: created", > "Debug: Firewall[100 panko_haproxy ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 panko_haproxy ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 panko_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 panko_haproxy] will propagate my refresh event", > "Debug: Firewall[100 panko_haproxy ipv6](provider=ip6tables): Inserting rule 100 panko_haproxy ipv6", > "Debug: Firewall[100 panko_haproxy ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 panko_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 25 --wait -t filter -p tcp -m multiport --dports 8977 -m state --state NEW -j ACCEPT -m comment --comment 100 panko_haproxy ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy]/Firewall[100 panko_haproxy ipv6]/ensure: created", > "Debug: Firewall[100 panko_haproxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 panko_haproxy ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 panko_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 panko_haproxy] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 panko_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 panko_haproxy]", > "Debug: Firewall[100 panko_haproxy_ssl ipv4](provider=iptables): Inserting rule 100 panko_haproxy_ssl ipv4", > "Debug: Firewall[100 panko_haproxy_ssl ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 panko_haproxy_ssl ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 25 --wait -t filter -p tcp -m multiport --dports 13977 -m state --state NEW -j ACCEPT -m comment --comment 100 panko_haproxy_ssl ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy_ssl]/Firewall[100 panko_haproxy_ssl ipv4]/ensure: created", > "Debug: Firewall[100 panko_haproxy_ssl ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 panko_haproxy_ssl ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv4]: The container Tripleo::Firewall::Rule[100 panko_haproxy_ssl] will propagate 
my refresh event", > "Debug: Firewall[100 panko_haproxy_ssl ipv6](provider=ip6tables): Inserting rule 100 panko_haproxy_ssl ipv6", > "Debug: Firewall[100 panko_haproxy_ssl ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 panko_haproxy_ssl ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 26 --wait -t filter -p tcp -m multiport --dports 13977 -m state --state NEW -j ACCEPT -m comment --comment 100 panko_haproxy_ssl ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy_ssl]/Firewall[100 panko_haproxy_ssl ipv6]/ensure: created", > "Debug: Firewall[100 panko_haproxy_ssl ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 panko_haproxy_ssl ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv6]: The container Tripleo::Firewall::Rule[100 panko_haproxy_ssl] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 panko_haproxy_ssl]: Unscheduling all events on Tripleo::Firewall::Rule[100 panko_haproxy_ssl]", > "Debug: Concat::Fragment[haproxy-gnocchi_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-gnocchi_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-gnocchi_balancermember_gnocchi]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-gnocchi_balancermember_gnocchi]: Resource is being skipped, unscheduling all events", > "Debug: Firewall[100 gnocchi_haproxy ipv4](provider=iptables): Inserting rule 100 gnocchi_haproxy ipv4", > "Debug: Firewall[100 gnocchi_haproxy ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 gnocchi_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 11 --wait -t filter -p tcp -m multiport --dports 8041 -m state --state NEW -j ACCEPT -m comment --comment 100 gnocchi_haproxy ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy]/Firewall[100 gnocchi_haproxy ipv4]/ensure: created", > "Debug: Firewall[100 gnocchi_haproxy ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 gnocchi_haproxy ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 gnocchi_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 gnocchi_haproxy] will propagate my refresh event", > "Debug: Firewall[100 gnocchi_haproxy ipv6](provider=ip6tables): Inserting rule 100 gnocchi_haproxy ipv6", > "Debug: Firewall[100 gnocchi_haproxy ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 gnocchi_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 12 --wait -t filter -p tcp -m multiport --dports 8041 -m state --state NEW -j ACCEPT -m comment --comment 100 gnocchi_haproxy ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy]/Firewall[100 
gnocchi_haproxy ipv6]/ensure: created", > "Debug: Firewall[100 gnocchi_haproxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 gnocchi_haproxy ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 gnocchi_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 gnocchi_haproxy] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 gnocchi_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 gnocchi_haproxy]", > "Debug: Firewall[100 gnocchi_haproxy_ssl ipv4](provider=iptables): Inserting rule 100 gnocchi_haproxy_ssl ipv4", > "Debug: Firewall[100 gnocchi_haproxy_ssl ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 gnocchi_haproxy_ssl ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 12 --wait -t filter -p tcp -m multiport --dports 13041 -m state --state NEW -j ACCEPT -m comment --comment 100 gnocchi_haproxy_ssl ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl]/Firewall[100 gnocchi_haproxy_ssl ipv4]/ensure: created", > "Debug: Firewall[100 gnocchi_haproxy_ssl ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 gnocchi_haproxy_ssl ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv4]: The container Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl] will propagate my refresh event", > "Debug: Firewall[100 gnocchi_haproxy_ssl ipv6](provider=ip6tables): Inserting rule 100 gnocchi_haproxy_ssl ipv6", > "Debug: Firewall[100 gnocchi_haproxy_ssl ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 gnocchi_haproxy_ssl ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 13 --wait -t filter -p tcp -m multiport --dports 13041 -m state --state NEW -j ACCEPT -m comment --comment 100 gnocchi_haproxy_ssl ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl]/Firewall[100 gnocchi_haproxy_ssl ipv6]/ensure: created", > "Debug: Firewall[100 gnocchi_haproxy_ssl ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 gnocchi_haproxy_ssl ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv6]: The container Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl]: Unscheduling all events on Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl]", > "Debug: Concat::Fragment[haproxy-swift_proxy_server_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-swift_proxy_server_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-swift_proxy_server_balancermember_swift_proxy_server]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-swift_proxy_server_balancermember_swift_proxy_server]: Resource is being skipped, unscheduling all events", > "Debug: Firewall[100 
swift_proxy_server_haproxy ipv4](provider=iptables): Inserting rule 100 swift_proxy_server_haproxy ipv4", > "Debug: Firewall[100 swift_proxy_server_haproxy ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 swift_proxy_server_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 31 --wait -t filter -p tcp -m multiport --dports 8080 -m state --state NEW -j ACCEPT -m comment --comment 100 swift_proxy_server_haproxy ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy]/Firewall[100 swift_proxy_server_haproxy ipv4]/ensure: created", > "Debug: Firewall[100 swift_proxy_server_haproxy ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 swift_proxy_server_haproxy ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy] will propagate my refresh event", > "Debug: Firewall[100 swift_proxy_server_haproxy ipv6](provider=ip6tables): Inserting rule 100 swift_proxy_server_haproxy ipv6", > "Debug: Firewall[100 swift_proxy_server_haproxy ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 swift_proxy_server_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 32 --wait -t filter -p tcp -m multiport --dports 8080 -m state --state NEW -j ACCEPT -m comment --comment 100 swift_proxy_server_haproxy ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy]/Firewall[100 swift_proxy_server_haproxy ipv6]/ensure: created", > "Debug: Firewall[100 swift_proxy_server_haproxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 swift_proxy_server_haproxy ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy]", > "Debug: Firewall[100 swift_proxy_server_haproxy_ssl ipv4](provider=iptables): Inserting rule 100 swift_proxy_server_haproxy_ssl ipv4", > "Debug: Firewall[100 swift_proxy_server_haproxy_ssl ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 swift_proxy_server_haproxy_ssl ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 32 --wait -t filter -p tcp -m multiport --dports 13808 -m state --state NEW -j ACCEPT -m comment --comment 100 swift_proxy_server_haproxy_ssl ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl]/Firewall[100 swift_proxy_server_haproxy_ssl ipv4]/ensure: created", > "Debug: Firewall[100 swift_proxy_server_haproxy_ssl ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 swift_proxy_server_haproxy_ssl ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv4]: The container Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl] will propagate my refresh event", > "Debug: Firewall[100 swift_proxy_server_haproxy_ssl ipv6](provider=ip6tables): Inserting rule 100 
swift_proxy_server_haproxy_ssl ipv6", > "Debug: Firewall[100 swift_proxy_server_haproxy_ssl ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 swift_proxy_server_haproxy_ssl ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 33 --wait -t filter -p tcp -m multiport --dports 13808 -m state --state NEW -j ACCEPT -m comment --comment 100 swift_proxy_server_haproxy_ssl ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl]/Firewall[100 swift_proxy_server_haproxy_ssl ipv6]/ensure: created", > "Debug: Firewall[100 swift_proxy_server_haproxy_ssl ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 swift_proxy_server_haproxy_ssl ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv6]: The container Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl]: Unscheduling all events on Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl]", > "Debug: Concat::Fragment[haproxy-heat_api_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-heat_api_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-heat_api_balancermember_heat_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-heat_api_balancermember_heat_api]: Resource is being skipped, unscheduling all events", > "Debug: Firewall[100 heat_api_haproxy ipv4](provider=iptables): Inserting rule 100 heat_api_haproxy ipv4", > "Debug: Firewall[100 heat_api_haproxy ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 heat_api_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 13 --wait -t filter -p tcp -m multiport --dports 8004 -m state --state NEW -j ACCEPT -m comment --comment 100 heat_api_haproxy ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy]/Firewall[100 heat_api_haproxy ipv4]/ensure: created", > "Debug: Firewall[100 heat_api_haproxy ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 heat_api_haproxy ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 heat_api_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 heat_api_haproxy] will propagate my refresh event", > "Debug: Firewall[100 heat_api_haproxy ipv6](provider=ip6tables): Inserting rule 100 heat_api_haproxy ipv6", > "Debug: Firewall[100 heat_api_haproxy ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 heat_api_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 14 --wait -t filter -p tcp -m multiport --dports 8004 -m state --state NEW -j ACCEPT -m comment --comment 100 heat_api_haproxy ipv6'", > "Notice: 
/Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy]/Firewall[100 heat_api_haproxy ipv6]/ensure: created", > "Debug: Firewall[100 heat_api_haproxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 heat_api_haproxy ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 heat_api_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 heat_api_haproxy] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 heat_api_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 heat_api_haproxy]", > "Debug: Firewall[100 heat_api_haproxy_ssl ipv4](provider=iptables): Inserting rule 100 heat_api_haproxy_ssl ipv4", > "Debug: Firewall[100 heat_api_haproxy_ssl ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 heat_api_haproxy_ssl ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 14 --wait -t filter -p tcp -m multiport --dports 13004 -m state --state NEW -j ACCEPT -m comment --comment 100 heat_api_haproxy_ssl ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl]/Firewall[100 heat_api_haproxy_ssl ipv4]/ensure: created", > "Debug: Firewall[100 heat_api_haproxy_ssl ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 heat_api_haproxy_ssl ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv4]: The container Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl] will propagate my refresh event", > "Debug: Firewall[100 heat_api_haproxy_ssl ipv6](provider=ip6tables): Inserting rule 100 heat_api_haproxy_ssl ipv6", > "Debug: Firewall[100 heat_api_haproxy_ssl ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 heat_api_haproxy_ssl ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 15 --wait -t filter -p tcp -m multiport --dports 13004 -m state --state NEW -j ACCEPT -m comment --comment 100 heat_api_haproxy_ssl ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl]/Firewall[100 heat_api_haproxy_ssl ipv6]/ensure: created", > "Debug: Firewall[100 heat_api_haproxy_ssl ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 heat_api_haproxy_ssl ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv6]: The container Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl]: Unscheduling all events on Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl]", > "Debug: Concat::Fragment[haproxy-heat_cfn_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-heat_cfn_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-heat_cfn_balancermember_heat_cfn]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: 
Concat::Fragment[haproxy-heat_cfn_balancermember_heat_cfn]: Resource is being skipped, unscheduling all events", > "Debug: Firewall[100 heat_cfn_haproxy ipv4](provider=iptables): Inserting rule 100 heat_cfn_haproxy ipv4", > "Debug: Firewall[100 heat_cfn_haproxy ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 heat_cfn_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 15 --wait -t filter -p tcp -m multiport --dports 8000 -m state --state NEW -j ACCEPT -m comment --comment 100 heat_cfn_haproxy ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy]/Firewall[100 heat_cfn_haproxy ipv4]/ensure: created", > "Debug: Firewall[100 heat_cfn_haproxy ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 heat_cfn_haproxy ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 heat_cfn_haproxy] will propagate my refresh event", > "Debug: Firewall[100 heat_cfn_haproxy ipv6](provider=ip6tables): Inserting rule 100 heat_cfn_haproxy ipv6", > "Debug: Firewall[100 heat_cfn_haproxy ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 heat_cfn_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 16 --wait -t filter -p tcp -m multiport --dports 8000 -m state --state NEW -j ACCEPT -m comment --comment 100 heat_cfn_haproxy ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy]/Firewall[100 heat_cfn_haproxy ipv6]/ensure: created", > "Debug: Firewall[100 heat_cfn_haproxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 heat_cfn_haproxy ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 heat_cfn_haproxy] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 heat_cfn_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 heat_cfn_haproxy]", > "Debug: Firewall[100 heat_cfn_haproxy_ssl ipv4](provider=iptables): Inserting rule 100 heat_cfn_haproxy_ssl ipv4", > "Debug: Firewall[100 heat_cfn_haproxy_ssl ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 heat_cfn_haproxy_ssl ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 16 --wait -t filter -p tcp -m multiport --dports 13005 -m state --state NEW -j ACCEPT -m comment --comment 100 heat_cfn_haproxy_ssl ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl]/Firewall[100 heat_cfn_haproxy_ssl ipv4]/ensure: created", > "Debug: Firewall[100 heat_cfn_haproxy_ssl ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 heat_cfn_haproxy_ssl ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv4]: The container Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl] will propagate my refresh event", > "Debug: Firewall[100 heat_cfn_haproxy_ssl ipv6](provider=ip6tables): Inserting rule 100 heat_cfn_haproxy_ssl ipv6", > "Debug: Firewall[100 heat_cfn_haproxy_ssl ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 heat_cfn_haproxy_ssl ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: 
'/usr/sbin/ip6tables -I INPUT 17 --wait -t filter -p tcp -m multiport --dports 13005 -m state --state NEW -j ACCEPT -m comment --comment 100 heat_cfn_haproxy_ssl ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl]/Firewall[100 heat_cfn_haproxy_ssl ipv6]/ensure: created", > "Debug: Firewall[100 heat_cfn_haproxy_ssl ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 heat_cfn_haproxy_ssl ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv6]: The container Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl]: Unscheduling all events on Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl]", > "Debug: Class[Tripleo::Firewall::Post]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Tripleo::Firewall::Post]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[998 log all]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[998 log all]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[999 drop all]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[999 drop all]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Tripleo::Firewall/Exec[nonpersistent_v4_rules_cleanup]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Tripleo::Firewall/Exec[nonpersistent_v4_rules_cleanup]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Tripleo::Firewall/Exec[nonpersistent_v6_rules_cleanup]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Tripleo::Firewall/Exec[nonpersistent_v6_rules_cleanup]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Tripleo::Firewall/Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Tripleo::Firewall/Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Tripleo::Firewall/Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]: Not tagged with file, file_line, concat, augeas, 
tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Tripleo::Firewall/Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-9ymxwy returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-9ymxwy constraint list | grep location-ip-192.168.24.7 > /dev/null 2>&1", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1wtjs84 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1wtjs84 resource show ip-192.168.24.7 > /dev/null 2>&1", > "Debug: Exists: resource ip-192.168.24.7 exists 1 location exists 1 resource deep_compare: true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1wj1o5j returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1wj1o5j resource create ip-192.168.24.7 IPaddr2 ip=192.168.24.7 cidr_netmask=32 meta resource-stickiness=INFINITY --disabled", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1wj1o5j diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1wj1o5j.orig returned 0 -> CIB updated", > "Debug: build_pcs_location_rule_cmd: constraint location ip-192.168.24.7 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: location_rule_create: constraint location ip-192.168.24.7 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-yqtf5f returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-yqtf5f constraint location ip-192.168.24.7 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-yqtf5f diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180921-8-yqtf5f.orig returned 0 -> CIB updated", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-4sh2ol returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-4sh2ol resource enable ip-192.168.24.7", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-4sh2ol diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180921-8-4sh2ol.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_control_vip]/Pacemaker::Resource::Ip[control_vip]/Pcmk_resource[ip-192.168.24.7]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_control_vip]/Pacemaker::Resource::Ip[control_vip]/Pcmk_resource[ip-192.168.24.7]: The container Pacemaker::Resource::Ip[control_vip] will propagate my refresh event", > "Info: Pacemaker::Resource::Ip[control_vip]: Unscheduling all events on Pacemaker::Resource::Ip[control_vip]", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-157zrs returned ", > "Debug: /usr/sbin/pcs -f 
/var/lib/pacemaker/cib/puppet-cib-backup20180921-8-157zrs constraint list | grep location-ip-10.0.0.111 > /dev/null 2>&1", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1fag2k3 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1fag2k3 resource show ip-10.0.0.111 > /dev/null 2>&1", > "Debug: Exists: resource ip-10.0.0.111 exists 1 location exists 1 resource deep_compare: true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-k6eust returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-k6eust resource create ip-10.0.0.111 IPaddr2 ip=10.0.0.111 cidr_netmask=32 meta resource-stickiness=INFINITY --disabled", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-k6eust diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180921-8-k6eust.orig returned 0 -> CIB updated", > "Debug: build_pcs_location_rule_cmd: constraint location ip-10.0.0.111 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: location_rule_create: constraint location ip-10.0.0.111 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-dxs5f returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-dxs5f constraint location ip-10.0.0.111 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-dxs5f diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180921-8-dxs5f.orig returned 0 -> CIB updated", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-y9wubf returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-y9wubf resource enable ip-10.0.0.111", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-y9wubf diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180921-8-y9wubf.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_public_vip]/Pacemaker::Resource::Ip[public_vip]/Pcmk_resource[ip-10.0.0.111]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_public_vip]/Pacemaker::Resource::Ip[public_vip]/Pcmk_resource[ip-10.0.0.111]: The container Pacemaker::Resource::Ip[public_vip] will propagate my refresh event", > "Info: Pacemaker::Resource::Ip[public_vip]: Unscheduling all events on Pacemaker::Resource::Ip[public_vip]", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-azo6a1 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-azo6a1 constraint list | grep location-ip-172.17.1.10 > /dev/null 2>&1", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-u0h8w returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-u0h8w resource show ip-172.17.1.10 > /dev/null 2>&1", > "Debug: Exists: resource ip-172.17.1.10 exists 1 location exists 1 resource deep_compare: true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib 
/var/lib/pacemaker/cib/puppet-cib-backup20180921-8-3p1g0t returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-3p1g0t resource create ip-172.17.1.10 IPaddr2 ip=172.17.1.10 cidr_netmask=32 meta resource-stickiness=INFINITY --disabled", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-3p1g0t diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180921-8-3p1g0t.orig returned 0 -> CIB updated", > "Debug: build_pcs_location_rule_cmd: constraint location ip-172.17.1.10 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: location_rule_create: constraint location ip-172.17.1.10 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-migirr returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-migirr constraint location ip-172.17.1.10 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-migirr diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180921-8-migirr.orig returned 0 -> CIB updated", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-pm5zkc returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-pm5zkc resource enable ip-172.17.1.10", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-pm5zkc diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180921-8-pm5zkc.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_redis_vip]/Pacemaker::Resource::Ip[redis_vip]/Pcmk_resource[ip-172.17.1.10]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_redis_vip]/Pacemaker::Resource::Ip[redis_vip]/Pcmk_resource[ip-172.17.1.10]: The container Pacemaker::Resource::Ip[redis_vip] will propagate my refresh event", > "Info: Pacemaker::Resource::Ip[redis_vip]: Unscheduling all events on Pacemaker::Resource::Ip[redis_vip]", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-lbbwv5 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-lbbwv5 constraint list | grep location-ip-172.17.1.15 > /dev/null 2>&1", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-hyjrkk returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-hyjrkk resource show ip-172.17.1.15 > /dev/null 2>&1", > "Debug: Exists: resource ip-172.17.1.15 exists 1 location exists 1 resource deep_compare: true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1ahzaj5 returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1ahzaj5 resource create ip-172.17.1.15 IPaddr2 ip=172.17.1.15 cidr_netmask=32 meta resource-stickiness=INFINITY --disabled", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1ahzaj5 diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1ahzaj5.orig returned 0 -> CIB updated", > "Debug: build_pcs_location_rule_cmd: 
constraint location ip-172.17.1.15 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: location_rule_create: constraint location ip-172.17.1.15 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-bdnsv returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-bdnsv constraint location ip-172.17.1.15 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-bdnsv diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180921-8-bdnsv.orig returned 0 -> CIB updated", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-ocm1oa returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-ocm1oa resource enable ip-172.17.1.15", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-ocm1oa diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180921-8-ocm1oa.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_internal_api_vip]/Pacemaker::Resource::Ip[internal_api_vip]/Pcmk_resource[ip-172.17.1.15]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_internal_api_vip]/Pacemaker::Resource::Ip[internal_api_vip]/Pcmk_resource[ip-172.17.1.15]: The container Pacemaker::Resource::Ip[internal_api_vip] will propagate my refresh event", > "Info: Pacemaker::Resource::Ip[internal_api_vip]: Unscheduling all events on Pacemaker::Resource::Ip[internal_api_vip]", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1tfhq7h returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1tfhq7h constraint list | grep location-ip-172.17.3.21 > /dev/null 2>&1", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-91k4d4 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-91k4d4 resource show ip-172.17.3.21 > /dev/null 2>&1", > "Debug: Exists: resource ip-172.17.3.21 exists 1 location exists 1 resource deep_compare: true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-lpr723 returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-lpr723 resource create ip-172.17.3.21 IPaddr2 ip=172.17.3.21 cidr_netmask=32 meta resource-stickiness=INFINITY --disabled", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-lpr723 diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180921-8-lpr723.orig returned 0 -> CIB updated", > "Debug: build_pcs_location_rule_cmd: constraint location ip-172.17.3.21 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: location_rule_create: constraint location ip-172.17.3.21 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-souxb4 returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-souxb4 constraint location ip-172.17.3.21 rule 
resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-souxb4 diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180921-8-souxb4.orig returned 0 -> CIB updated", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-fokwys returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-fokwys resource enable ip-172.17.3.21", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-fokwys diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180921-8-fokwys.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_vip]/Pacemaker::Resource::Ip[storage_vip]/Pcmk_resource[ip-172.17.3.21]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_vip]/Pacemaker::Resource::Ip[storage_vip]/Pcmk_resource[ip-172.17.3.21]: The container Pacemaker::Resource::Ip[storage_vip] will propagate my refresh event", > "Info: Pacemaker::Resource::Ip[storage_vip]: Unscheduling all events on Pacemaker::Resource::Ip[storage_vip]", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1i3xify returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1i3xify constraint list | grep location-ip-172.17.4.13 > /dev/null 2>&1", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-10zd1j0 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-10zd1j0 resource show ip-172.17.4.13 > /dev/null 2>&1", > "Debug: Exists: resource ip-172.17.4.13 exists 1 location exists 1 resource deep_compare: true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-2hfa4e returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-2hfa4e resource create ip-172.17.4.13 IPaddr2 ip=172.17.4.13 cidr_netmask=32 meta resource-stickiness=INFINITY --disabled", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-2hfa4e diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180921-8-2hfa4e.orig returned 0 -> CIB updated", > "Debug: build_pcs_location_rule_cmd: constraint location ip-172.17.4.13 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: location_rule_create: constraint location ip-172.17.4.13 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-glpvxz returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-glpvxz constraint location ip-172.17.4.13 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-glpvxz diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180921-8-glpvxz.orig returned 0 -> CIB updated", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-8wajws returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-8wajws resource enable ip-172.17.4.13", > 
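(Annotation: the same create/locate/enable pattern repeats above for each VIP — 172.17.1.10, 172.17.1.15, 172.17.3.21, 172.17.4.13. Condensed from the Debug lines, the pcs sequence for one VIP looks like the sketch below. TMP_CIB is a placeholder for the throwaway backup file puppet-pacemaker names itself; note that in the real run every single step takes its own fresh backup and cib-push rather than one consolidated pass as shown here.)

# Hedged sketch of the per-VIP workflow, reconstructed from the Debug lines above.
TMP_CIB=/var/lib/pacemaker/cib/tmp-backup              # placeholder for the logged temp file
pcs cluster cib "$TMP_CIB"                             # backup_cib: snapshot the live CIB to a file
pcs -f "$TMP_CIB" resource create ip-172.17.4.13 IPaddr2 \
    ip=172.17.4.13 cidr_netmask=32 meta resource-stickiness=INFINITY --disabled
pcs -f "$TMP_CIB" constraint location ip-172.17.4.13 \
    rule resource-discovery=exclusive score=0 haproxy-role eq true
pcs -f "$TMP_CIB" resource enable ip-172.17.4.13
pcs cluster cib-push "$TMP_CIB" diff-against="$TMP_CIB.orig"   # push_cib: apply only the diff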
"Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-8wajws diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180921-8-8wajws.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_mgmt_vip]/Pacemaker::Resource::Ip[storage_mgmt_vip]/Pcmk_resource[ip-172.17.4.13]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_mgmt_vip]/Pacemaker::Resource::Ip[storage_mgmt_vip]/Pcmk_resource[ip-172.17.4.13]: The container Pacemaker::Resource::Ip[storage_mgmt_vip] will propagate my refresh event", > "Info: Pacemaker::Resource::Ip[storage_mgmt_vip]: Unscheduling all events on Pacemaker::Resource::Ip[storage_mgmt_vip]", > "Debug: Pacemaker::Resource::Bundle[haproxy-bundle]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Resource::Bundle[haproxy-bundle]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-7mpg0t returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-7mpg0t constraint list | grep location-haproxy-bundle > /dev/null 2>&1", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1kjlebp returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1kjlebp resource show haproxy-bundle > /dev/null 2>&1", > "Debug: Exists: bundle haproxy-bundle exists 1 location exists 1 deep_compare: true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-u7xgcu returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-u7xgcu resource bundle create haproxy-bundle container docker image=192.168.24.1:8787/rhosp14/openstack-haproxy:pcmklatest replicas=1 options=\"--user=root --log-driver=journald -e KOLLA_CONFIG_STRATEGY=COPY_ALWAYS\" run-command=\"/bin/bash /usr/local/bin/kolla_start\" network=host storage-map id=haproxy-cfg-files source-dir=/var/lib/kolla/config_files/haproxy.json target-dir=/var/lib/kolla/config_files/config.json options=ro storage-map id=haproxy-cfg-data source-dir=/var/lib/config-data/puppet-generated/haproxy/ target-dir=/var/lib/kolla/config_files/src options=ro storage-map id=haproxy-hosts source-dir=/etc/hosts target-dir=/etc/hosts options=ro storage-map id=haproxy-localtime source-dir=/etc/localtime target-dir=/etc/localtime options=ro storage-map id=haproxy-var-lib source-dir=/var/lib/haproxy target-dir=/var/lib/haproxy options=rw storage-map id=haproxy-pki-extracted source-dir=/etc/pki/ca-trust/extracted target-dir=/etc/pki/ca-trust/extracted options=ro storage-map id=haproxy-pki-ca-bundle-crt source-dir=/etc/pki/tls/certs/ca-bundle.crt target-dir=/etc/pki/tls/certs/ca-bundle.crt options=ro storage-map id=haproxy-pki-ca-bundle-trust-crt source-dir=/etc/pki/tls/certs/ca-bundle.trust.crt target-dir=/etc/pki/tls/certs/ca-bundle.trust.crt options=ro storage-map id=haproxy-pki-cert source-dir=/etc/pki/tls/cert.pem target-dir=/etc/pki/tls/cert.pem options=ro storage-map id=haproxy-dev-log source-dir=/dev/log target-dir=/dev/log options=rw --disabled", > 
"Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-u7xgcu diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180921-8-u7xgcu.orig returned 0 -> CIB updated", > "Debug: build_pcs_location_rule_cmd: constraint location haproxy-bundle rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: location_rule_create: constraint location haproxy-bundle rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-9sxhyq returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-9sxhyq constraint location haproxy-bundle rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-9sxhyq diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180921-8-9sxhyq.orig returned 0 -> CIB updated", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-7r4sd9 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-7r4sd9 resource enable haproxy-bundle", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-7r4sd9 diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180921-8-7r4sd9.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Pacemaker::Resource::Bundle[haproxy-bundle]/Pcmk_bundle[haproxy-bundle]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Pacemaker::Resource::Bundle[haproxy-bundle]/Pcmk_bundle[haproxy-bundle]: The container Pacemaker::Resource::Bundle[haproxy-bundle] will propagate my refresh event", > "Info: Pacemaker::Resource::Bundle[haproxy-bundle]: Unscheduling all events on Pacemaker::Resource::Bundle[haproxy-bundle]", > "Debug: Pacemaker::Constraint::Order[control_vip-then-haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Constraint::Order[control_vip-then-haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Pacemaker::Constraint::Order[public_vip-then-haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Constraint::Order[public_vip-then-haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Pacemaker::Constraint::Order[redis_vip-then-haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Constraint::Order[redis_vip-then-haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Pacemaker::Constraint::Order[internal_api_vip-then-haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > 
"Debug: Pacemaker::Constraint::Order[internal_api_vip-then-haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Pacemaker::Constraint::Order[storage_vip-then-haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Constraint::Order[storage_vip-then-haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Pacemaker::Constraint::Order[storage_mgmt_vip-then-haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Constraint::Order[storage_mgmt_vip-then-haproxy]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-hk71kv returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-hk71kv constraint order show --full", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-3veg5j returned ", > "Debug: try 1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-3veg5j constraint order start ip-192.168.24.7 then start haproxy-bundle kind=Optional", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-3veg5j diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180921-8-3veg5j.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_control_vip]/Pacemaker::Constraint::Order[control_vip-then-haproxy]/Pcmk_constraint[order-ip-192.168.24.7-haproxy-bundle]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_control_vip]/Pacemaker::Constraint::Order[control_vip-then-haproxy]/Pcmk_constraint[order-ip-192.168.24.7-haproxy-bundle]: The container Pacemaker::Constraint::Order[control_vip-then-haproxy] will propagate my refresh event", > "Info: Pacemaker::Constraint::Order[control_vip-then-haproxy]: Unscheduling all events on Pacemaker::Constraint::Order[control_vip-then-haproxy]", > "Debug: Pacemaker::Constraint::Colocation[control_vip-with-haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Constraint::Colocation[control_vip-with-haproxy]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1hscmm6 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1hscmm6 constraint colocation show --full", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1mvswjp returned ", > "Debug: try 1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1mvswjp constraint colocation add ip-192.168.24.7 with haproxy-bundle INFINITY", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1mvswjp 
diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1mvswjp.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_control_vip]/Pacemaker::Constraint::Colocation[control_vip-with-haproxy]/Pcmk_constraint[colo-ip-192.168.24.7-haproxy-bundle]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_control_vip]/Pacemaker::Constraint::Colocation[control_vip-with-haproxy]/Pcmk_constraint[colo-ip-192.168.24.7-haproxy-bundle]: The container Pacemaker::Constraint::Colocation[control_vip-with-haproxy] will propagate my refresh event", > "Info: Pacemaker::Constraint::Colocation[control_vip-with-haproxy]: Unscheduling all events on Pacemaker::Constraint::Colocation[control_vip-with-haproxy]", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1loin9h returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1loin9h constraint order show --full", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-d18836 returned ", > "Debug: try 1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-d18836 constraint order start ip-10.0.0.111 then start haproxy-bundle kind=Optional", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-d18836 diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180921-8-d18836.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_public_vip]/Pacemaker::Constraint::Order[public_vip-then-haproxy]/Pcmk_constraint[order-ip-10.0.0.111-haproxy-bundle]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_public_vip]/Pacemaker::Constraint::Order[public_vip-then-haproxy]/Pcmk_constraint[order-ip-10.0.0.111-haproxy-bundle]: The container Pacemaker::Constraint::Order[public_vip-then-haproxy] will propagate my refresh event", > "Info: Pacemaker::Constraint::Order[public_vip-then-haproxy]: Unscheduling all events on Pacemaker::Constraint::Order[public_vip-then-haproxy]", > "Debug: Pacemaker::Constraint::Colocation[public_vip-with-haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Constraint::Colocation[public_vip-with-haproxy]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1efegkq returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1efegkq constraint colocation show --full", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-l569h returned ", > "Debug: try 1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-l569h constraint colocation add ip-10.0.0.111 with haproxy-bundle INFINITY", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-l569h diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180921-8-l569h.orig returned 0 -> CIB updated", > "Notice: 
/Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_public_vip]/Pacemaker::Constraint::Colocation[public_vip-with-haproxy]/Pcmk_constraint[colo-ip-10.0.0.111-haproxy-bundle]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_public_vip]/Pacemaker::Constraint::Colocation[public_vip-with-haproxy]/Pcmk_constraint[colo-ip-10.0.0.111-haproxy-bundle]: The container Pacemaker::Constraint::Colocation[public_vip-with-haproxy] will propagate my refresh event", > "Info: Pacemaker::Constraint::Colocation[public_vip-with-haproxy]: Unscheduling all events on Pacemaker::Constraint::Colocation[public_vip-with-haproxy]", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1dl3xl0 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1dl3xl0 constraint order show --full", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-98fi1v returned ", > "Debug: try 1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-98fi1v constraint order start ip-172.17.1.10 then start haproxy-bundle kind=Optional", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-98fi1v diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180921-8-98fi1v.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_redis_vip]/Pacemaker::Constraint::Order[redis_vip-then-haproxy]/Pcmk_constraint[order-ip-172.17.1.10-haproxy-bundle]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_redis_vip]/Pacemaker::Constraint::Order[redis_vip-then-haproxy]/Pcmk_constraint[order-ip-172.17.1.10-haproxy-bundle]: The container Pacemaker::Constraint::Order[redis_vip-then-haproxy] will propagate my refresh event", > "Info: Pacemaker::Constraint::Order[redis_vip-then-haproxy]: Unscheduling all events on Pacemaker::Constraint::Order[redis_vip-then-haproxy]", > "Debug: Pacemaker::Constraint::Colocation[redis_vip-with-haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Constraint::Colocation[redis_vip-with-haproxy]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1ga82n5 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1ga82n5 constraint colocation show --full", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-6xlm3g returned ", > "Debug: try 1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-6xlm3g constraint colocation add ip-172.17.1.10 with haproxy-bundle INFINITY", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-6xlm3g diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180921-8-6xlm3g.orig returned 0 -> CIB updated", > "Notice: 
/Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_redis_vip]/Pacemaker::Constraint::Colocation[redis_vip-with-haproxy]/Pcmk_constraint[colo-ip-172.17.1.10-haproxy-bundle]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_redis_vip]/Pacemaker::Constraint::Colocation[redis_vip-with-haproxy]/Pcmk_constraint[colo-ip-172.17.1.10-haproxy-bundle]: The container Pacemaker::Constraint::Colocation[redis_vip-with-haproxy] will propagate my refresh event", > "Info: Pacemaker::Constraint::Colocation[redis_vip-with-haproxy]: Unscheduling all events on Pacemaker::Constraint::Colocation[redis_vip-with-haproxy]", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1jp47e0 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1jp47e0 constraint order show --full", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-nr8vit returned ", > "Debug: try 1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-nr8vit constraint order start ip-172.17.1.15 then start haproxy-bundle kind=Optional", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-nr8vit diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180921-8-nr8vit.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_internal_api_vip]/Pacemaker::Constraint::Order[internal_api_vip-then-haproxy]/Pcmk_constraint[order-ip-172.17.1.15-haproxy-bundle]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_internal_api_vip]/Pacemaker::Constraint::Order[internal_api_vip-then-haproxy]/Pcmk_constraint[order-ip-172.17.1.15-haproxy-bundle]: The container Pacemaker::Constraint::Order[internal_api_vip-then-haproxy] will propagate my refresh event", > "Info: Pacemaker::Constraint::Order[internal_api_vip-then-haproxy]: Unscheduling all events on Pacemaker::Constraint::Order[internal_api_vip-then-haproxy]", > "Debug: Pacemaker::Constraint::Colocation[internal_api_vip-with-haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Constraint::Colocation[internal_api_vip-with-haproxy]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-gy35oy returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-gy35oy constraint colocation show --full", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-17qzvoy returned ", > "Debug: try 1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-17qzvoy constraint colocation add ip-172.17.1.15 with haproxy-bundle INFINITY", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-17qzvoy diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180921-8-17qzvoy.orig returned 0 -> CIB updated", > "Notice: 
/Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_internal_api_vip]/Pacemaker::Constraint::Colocation[internal_api_vip-with-haproxy]/Pcmk_constraint[colo-ip-172.17.1.15-haproxy-bundle]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_internal_api_vip]/Pacemaker::Constraint::Colocation[internal_api_vip-with-haproxy]/Pcmk_constraint[colo-ip-172.17.1.15-haproxy-bundle]: The container Pacemaker::Constraint::Colocation[internal_api_vip-with-haproxy] will propagate my refresh event", > "Info: Pacemaker::Constraint::Colocation[internal_api_vip-with-haproxy]: Unscheduling all events on Pacemaker::Constraint::Colocation[internal_api_vip-with-haproxy]", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1vnkwjd returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1vnkwjd constraint order show --full", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-gtc59h returned ", > "Debug: try 1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-gtc59h constraint order start ip-172.17.3.21 then start haproxy-bundle kind=Optional", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-gtc59h diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180921-8-gtc59h.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_vip]/Pacemaker::Constraint::Order[storage_vip-then-haproxy]/Pcmk_constraint[order-ip-172.17.3.21-haproxy-bundle]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_vip]/Pacemaker::Constraint::Order[storage_vip-then-haproxy]/Pcmk_constraint[order-ip-172.17.3.21-haproxy-bundle]: The container Pacemaker::Constraint::Order[storage_vip-then-haproxy] will propagate my refresh event", > "Info: Pacemaker::Constraint::Order[storage_vip-then-haproxy]: Unscheduling all events on Pacemaker::Constraint::Order[storage_vip-then-haproxy]", > "Debug: Pacemaker::Constraint::Colocation[storage_vip-with-haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Constraint::Colocation[storage_vip-with-haproxy]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-p94h0t returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-p94h0t constraint colocation show --full", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1xc4pnl returned ", > "Debug: try 1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1xc4pnl constraint colocation add ip-172.17.3.21 with haproxy-bundle INFINITY", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1xc4pnl diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1xc4pnl.orig returned 0 -> CIB updated", > "Notice: 
/Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_vip]/Pacemaker::Constraint::Colocation[storage_vip-with-haproxy]/Pcmk_constraint[colo-ip-172.17.3.21-haproxy-bundle]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_vip]/Pacemaker::Constraint::Colocation[storage_vip-with-haproxy]/Pcmk_constraint[colo-ip-172.17.3.21-haproxy-bundle]: The container Pacemaker::Constraint::Colocation[storage_vip-with-haproxy] will propagate my refresh event", > "Info: Pacemaker::Constraint::Colocation[storage_vip-with-haproxy]: Unscheduling all events on Pacemaker::Constraint::Colocation[storage_vip-with-haproxy]", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1vizr3b returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1vizr3b constraint order show --full", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1ck5y78 returned ", > "Debug: try 1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1ck5y78 constraint order start ip-172.17.4.13 then start haproxy-bundle kind=Optional", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1ck5y78 diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1ck5y78.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_mgmt_vip]/Pacemaker::Constraint::Order[storage_mgmt_vip-then-haproxy]/Pcmk_constraint[order-ip-172.17.4.13-haproxy-bundle]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_mgmt_vip]/Pacemaker::Constraint::Order[storage_mgmt_vip-then-haproxy]/Pcmk_constraint[order-ip-172.17.4.13-haproxy-bundle]: The container Pacemaker::Constraint::Order[storage_mgmt_vip-then-haproxy] will propagate my refresh event", > "Info: Pacemaker::Constraint::Order[storage_mgmt_vip-then-haproxy]: Unscheduling all events on Pacemaker::Constraint::Order[storage_mgmt_vip-then-haproxy]", > "Debug: Pacemaker::Constraint::Colocation[storage_mgmt_vip-with-haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Constraint::Colocation[storage_mgmt_vip-with-haproxy]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-10hzim4 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-10hzim4 constraint colocation show --full", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1r3zivo returned ", > "Debug: try 1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1r3zivo constraint colocation add ip-172.17.4.13 with haproxy-bundle INFINITY", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1r3zivo diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180921-8-1r3zivo.orig returned 0 -> CIB updated", > "Notice: 
/Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_mgmt_vip]/Pacemaker::Constraint::Colocation[storage_mgmt_vip-with-haproxy]/Pcmk_constraint[colo-ip-172.17.4.13-haproxy-bundle]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_mgmt_vip]/Pacemaker::Constraint::Colocation[storage_mgmt_vip-with-haproxy]/Pcmk_constraint[colo-ip-172.17.4.13-haproxy-bundle]: The container Pacemaker::Constraint::Colocation[storage_mgmt_vip-with-haproxy] will propagate my refresh event", > "Info: Pacemaker::Constraint::Colocation[storage_mgmt_vip-with-haproxy]: Unscheduling all events on Pacemaker::Constraint::Colocation[storage_mgmt_vip-with-haproxy]", > "Info: Computing checksum on file /etc/haproxy/haproxy.cfg", > "Info: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Config[haproxy]/Concat[/etc/haproxy/haproxy.cfg]/File[/etc/haproxy/haproxy.cfg]: Filebucketed /etc/haproxy/haproxy.cfg to puppet with sum 1f337186b0e1ba5ee82760cb437fb810", > "Debug: Executing: '/usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg20180921-8-wb9qfu -c'", > "Debug: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Config[haproxy]/Concat[/etc/haproxy/haproxy.cfg]/File[/etc/haproxy/haproxy.cfg]: [WARNING] 263/123802 (3161) : parsing [/etc/haproxy/haproxy.cfg20180921-8-wb9qfu:170] : HTTP log/header format not usable with proxy 'nova_novncproxy' (needs 'mode http').", > "Debug: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Config[haproxy]/Concat[/etc/haproxy/haproxy.cfg]/File[/etc/haproxy/haproxy.cfg]: Configuration file is valid", > "Notice: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Config[haproxy]/Concat[/etc/haproxy/haproxy.cfg]/File[/etc/haproxy/haproxy.cfg]/content: content changed '{md5}1f337186b0e1ba5ee82760cb437fb810' to '{md5}9200d9c4be339b9075c4e48ff8a14619'", > "Notice: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Config[haproxy]/Concat[/etc/haproxy/haproxy.cfg]/File[/etc/haproxy/haproxy.cfg]/mode: mode changed '0644' to '0640'", > "Debug: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Config[haproxy]/Concat[/etc/haproxy/haproxy.cfg]/File[/etc/haproxy/haproxy.cfg]: The container Concat[/etc/haproxy/haproxy.cfg] will propagate my refresh event", > "Debug: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Config[haproxy]/Concat[/etc/haproxy/haproxy.cfg]/File[/etc/haproxy/haproxy.cfg]: The container /etc/haproxy/haproxy.cfg will propagate my refresh event", > "Debug: /etc/haproxy/haproxy.cfg: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /etc/haproxy/haproxy.cfg: Resource is being skipped, unscheduling all events", > "Info: /etc/haproxy/haproxy.cfg: Unscheduling all events on /etc/haproxy/haproxy.cfg", > "Info: Concat[/etc/haproxy/haproxy.cfg]: Unscheduling all events on Concat[/etc/haproxy/haproxy.cfg]", > "Debug: Haproxy::Service[haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Service[haproxy]: Resource is being skipped, unscheduling all events", > "Debug: 
/Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Anchor[haproxy::haproxy::end]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Anchor[haproxy::haproxy::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Tripleo::Profile::Base::Haproxy/Exec[haproxy-reload]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Tripleo::Profile::Base::Haproxy/Exec[haproxy-reload]: Resource is being skipped, unscheduling all events", > "Debug: /Schedule[puppet]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Schedule[hourly]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Schedule[daily]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Schedule[weekly]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Schedule[monthly]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Schedule[never]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Filebucket[puppet]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Finishing transaction 16119260", > "Notice: Applied catalog in 168.41 seconds", > " Total: 90", > " Success: 90", > " Skipped: 36", > " Out of sync: 89", > " Changed: 89", > " Concat file: 0.00", > " Concat fragment: 0.00", > " Pcmk bundle: 11.27", > " Last run: 1537533482", > " Total: 171.95", > " Firewall: 22.44", > " Pcmk constraint: 42.31", > " Pcmk property: 5.50", > " Config retrieval: 5.58", > " Pcmk resource: 84.79", > " Config: 1537533308", > "Debug: Finishing transaction 62491120", > "+ TAGS=file,file_line,concat,augeas,tripleo::firewall::rule,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ip,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation", > "+ CONFIG='include 
::tripleo::profile::base::pacemaker; include ::tripleo::profile::pacemaker::haproxy_bundle'", > "+ puppet apply --debug --verbose --detailed-exitcodes --summarize --color=false --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,tripleo::firewall::rule,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ip,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation -e 'include ::tripleo::profile::base::pacemaker; include ::tripleo::profile::pacemaker::haproxy_bundle'", > "Warning: tag is a metaparam; this value will inherit to all contained resources in the tripleo::firewall::rule definition", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 148]:", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Ipv6 instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/tripleo/manifests/pacemaker/haproxy_with_vip.pp\", 65]:", > "Warning: Scope(Haproxy::Config[haproxy]): haproxy: The $merge_options parameter will default to true in the next major release. Please review the documentation regarding the implications." > ] >} > >TASK [Check if /var/lib/docker-puppet/docker-puppet-tasks2.json exists] ******** >Friday 21 September 2018 08:38:23 -0400 (0:00:14.319) 0:21:46.317 ****** >ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} > >TASK [Run docker-puppet tasks (bootstrap tasks) for step 2] ******************** >Friday 21 September 2018 08:38:24 -0400 (0:00:00.347) 0:21:46.665 ****** >skipping: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} > >TASK [Debug output for task which failed: Run docker-puppet tasks (bootstrap tasks) for step 2] *** >Friday 21 September 2018 08:38:24 -0400 (0:00:00.108) 0:21:46.774 ****** >skipping: [controller-0] => {} >skipping: [compute-0] => {} >skipping: [ceph-0] => {} > >PLAY [External deployment step 3] ********************************************** > >TASK [set blacklisted_hostnames] *********************************************** >Friday 21 September 2018 08:38:24 -0400 (0:00:00.213) 0:21:46.987 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [create ceph-ansible temp dirs] ******************************************* >Friday 21 September 2018 08:38:24 -0400 (0:00:00.037) 0:21:47.025 ****** >skipping: [undercloud] => (item=/var/lib/mistral/overcloud/ceph-ansible/group_vars) => {"changed": false, "item": "/var/lib/mistral/overcloud/ceph-ansible/group_vars", "skip_reason": "Conditional result was False"} >skipping: [undercloud] => (item=/var/lib/mistral/overcloud/ceph-ansible/host_vars) => {"changed": false, "item": "/var/lib/mistral/overcloud/ceph-ansible/host_vars", "skip_reason": "Conditional result was 
False"} >skipping: [undercloud] => (item=/var/lib/mistral/overcloud/ceph-ansible/fetch_dir) => {"changed": false, "item": "/var/lib/mistral/overcloud/ceph-ansible/fetch_dir", "skip_reason": "Conditional result was False"} > >TASK [generate inventory] ****************************************************** >Friday 21 September 2018 08:38:24 -0400 (0:00:00.058) 0:21:47.083 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [set ceph-ansible group vars all] ***************************************** >Friday 21 September 2018 08:38:24 -0400 (0:00:00.035) 0:21:47.118 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [generate ceph-ansible group vars all] ************************************ >Friday 21 September 2018 08:38:24 -0400 (0:00:00.037) 0:21:47.156 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [set ceph-ansible extra vars] ********************************************* >Friday 21 September 2018 08:38:24 -0400 (0:00:00.035) 0:21:47.192 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [generate ceph-ansible extra vars] **************************************** >Friday 21 September 2018 08:38:24 -0400 (0:00:00.036) 0:21:47.228 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [generate nodes-uuid data file] ******************************************* >Friday 21 September 2018 08:38:24 -0400 (0:00:00.037) 0:21:47.266 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [generate nodes-uuid playbook] ******************************************** >Friday 21 September 2018 08:38:24 -0400 (0:00:00.035) 0:21:47.302 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [run nodes-uuid] ********************************************************** >Friday 21 September 2018 08:38:24 -0400 (0:00:00.036) 0:21:47.338 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [set ceph-ansible params from Heat] *************************************** >Friday 21 September 2018 08:38:24 -0400 (0:00:00.043) 0:21:47.381 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [set ceph-ansible playbooks] ********************************************** >Friday 21 September 2018 08:38:24 -0400 (0:00:00.038) 0:21:47.420 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [set ceph-ansible command] ************************************************ >Friday 21 September 2018 08:38:24 -0400 (0:00:00.037) 0:21:47.457 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [run ceph-ansible] ******************************************************** >Friday 21 September 2018 08:38:25 -0400 (0:00:00.042) 0:21:47.500 ****** >skipping: [undercloud] => (item=/usr/share/ceph-ansible/site-docker.yml.sample) => {"changed": false, "item": "/usr/share/ceph-ansible/site-docker.yml.sample", "skip_reason": "Conditional result was False"} > >TASK [set ceph-ansible group vars mgrs] **************************************** >Friday 21 September 2018 08:38:25 -0400 (0:00:00.046) 0:21:47.547 ****** >skipping: [undercloud] => {"changed": false, 
"skip_reason": "Conditional result was False"} > >TASK [generate ceph-ansible group vars mgrs] *********************************** >Friday 21 September 2018 08:38:25 -0400 (0:00:00.037) 0:21:47.585 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [set ceph-ansible group vars mons] **************************************** >Friday 21 September 2018 08:38:25 -0400 (0:00:00.036) 0:21:47.622 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [generate ceph-ansible group vars mons] *********************************** >Friday 21 September 2018 08:38:25 -0400 (0:00:00.038) 0:21:47.660 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [set_fact] **************************************************************** >Friday 21 September 2018 08:38:25 -0400 (0:00:00.038) 0:21:47.699 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Create temp file for prepare parameter] ********************************** >Friday 21 September 2018 08:38:25 -0400 (0:00:00.036) 0:21:47.735 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Create temp file for role data] ****************************************** >Friday 21 September 2018 08:38:25 -0400 (0:00:00.037) 0:21:47.772 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Write ContainerImagePrepare parameter file] ****************************** >Friday 21 September 2018 08:38:25 -0400 (0:00:00.036) 0:21:47.808 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Write role data file] **************************************************** >Friday 21 September 2018 08:38:25 -0400 (0:00:00.046) 0:21:47.855 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Run tripleo-container-image-prepare] ************************************* >Friday 21 September 2018 08:38:25 -0400 (0:00:00.044) 0:21:47.899 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Delete param file] ******************************************************* >Friday 21 September 2018 08:38:25 -0400 (0:00:00.037) 0:21:47.937 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Delete role file] ******************************************************** >Friday 21 September 2018 08:38:25 -0400 (0:00:00.039) 0:21:47.976 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [set ceph-ansible group vars clients] ************************************* >Friday 21 September 2018 08:38:25 -0400 (0:00:00.036) 0:21:48.013 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [generate ceph-ansible group vars clients] ******************************** >Friday 21 September 2018 08:38:25 -0400 (0:00:00.037) 0:21:48.050 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [set ceph-ansible group vars osds] **************************************** >Friday 21 September 2018 08:38:25 -0400 (0:00:00.036) 0:21:48.086 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was 
False"} > >TASK [generate ceph-ansible group vars osds] *********************************** >Friday 21 September 2018 08:38:25 -0400 (0:00:00.035) 0:21:48.122 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >PLAY [Overcloud deploy step tasks for 3] *************************************** > >PLAY [Overcloud common deploy step tasks 3] ************************************ > >TASK [Create /var/lib/tripleo-config directory] ******************************** >Friday 21 September 2018 08:38:25 -0400 (0:00:00.064) 0:21:48.186 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Write the puppet step_config manifest] *********************************** >Friday 21 September 2018 08:38:25 -0400 (0:00:00.100) 0:21:48.287 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Create /var/lib/docker-puppet] ******************************************* >Friday 21 September 2018 08:38:25 -0400 (0:00:00.104) 0:21:48.391 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Write docker-puppet.json file] ******************************************* >Friday 21 September 2018 08:38:26 -0400 (0:00:00.108) 0:21:48.500 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Create /var/lib/docker-config-scripts] *********************************** >Friday 21 September 2018 08:38:26 -0400 (0:00:00.107) 0:21:48.608 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Clean old /var/lib/docker-container-startup-configs.json file] *********** >Friday 21 September 2018 08:38:26 -0400 (0:00:00.098) 0:21:48.706 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Write docker config scripts] ********************************************* >Friday 21 September 2018 08:38:26 -0400 (0:00:00.105) 0:21:48.811 ****** >skipping: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nexport OS_PROJECT_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_domain_name)\nexport OS_USER_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken user_domain_name)\nexport OS_PROJECT_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken 
project_name)\nexport OS_USERNAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken username)\nexport OS_PASSWORD=$(crudini --get /etc/nova/nova.conf keystone_authtoken password)\nexport OS_AUTH_URL=$(crudini --get /etc/nova/nova.conf keystone_authtoken auth_url)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho "(cellv2) Running cell_v2 host discovery"\ntimeout=600\nloop_wait=30\ndeclare -A discoverable_hosts\nfor host in $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e \'/^nil$/d\' | tr "," " "); do discoverable_hosts[$host]=1; done\ntimeout_at=$(( $(date +"%s") + ${timeout} ))\necho "(cellv2) Waiting ${timeout} seconds for hosts to register"\nfinished=0\nwhile : ; do\n for host in $(openstack -q compute service list -c \'Host\' -c \'Zone\' -f value | awk \'$2 != "internal" { print $1 }\'); do\n if (( discoverable_hosts[$host] == 1 )); then\n echo "(cellv2) compute node $host has registered"\n unset discoverable_hosts[$host]\n fi\n done\n finished=1\n for host in "${!discoverable_hosts[@]}"; do\n if (( ${discoverable_hosts[$host]} == 1 )); then\n echo "(cellv2) compute node $host has not registered"\n finished=0\n fi\n done\n remaining=$(( $timeout_at - $(date +"%s") ))\n if (( $finished == 1 )); then\n echo "(cellv2) All nodes registered"\n break\n elif (( $remaining <= 0 )); then\n echo "(cellv2) WARNING: timeout waiting for nodes to register, running host discovery regardless"\n echo "(cellv2) Expected host list:" $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e \'/^nil$/d\' | sort -u | tr \',\' \' \')\n echo "(cellv2) Detected host list:" $(openstack -q compute service list -c \'Host\' -c \'Zone\' -f value | awk \'$2 != "internal" { print $1 }\' | sort -u | tr \'\\n\', \' \')\n break\n else\n echo "(cellv2) Waiting ${remaining} seconds for hosts to register"\n sleep $loop_wait\n fi\ndone\necho "(cellv2) Running host discovery..."\nsu nova -s /bin/bash -c "/usr/bin/nova-manage cell_v2 discover_hosts --by-service --verbose"\n', 'mode': u'0700'}, 'key': u'nova_api_discover_hosts.sh'}) => {"changed": false, "item": {"key": "nova_api_discover_hosts.sh", "value": {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_domain_name)\nexport OS_USER_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken user_domain_name)\nexport OS_PROJECT_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_name)\nexport OS_USERNAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken username)\nexport OS_PASSWORD=$(crudini --get /etc/nova/nova.conf keystone_authtoken password)\nexport OS_AUTH_URL=$(crudini --get /etc/nova/nova.conf keystone_authtoken auth_url)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho \"(cellv2) Running cell_v2 host discovery\"\ntimeout=600\nloop_wait=30\ndeclare -A discoverable_hosts\nfor host in $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e '/^nil$/d' | tr \",\" \" \"); do discoverable_hosts[$host]=1; done\ntimeout_at=$(( $(date +\"%s\") + ${timeout} ))\necho \"(cellv2) Waiting ${timeout} seconds for hosts to register\"\nfinished=0\nwhile : ; do\n for host in $(openstack -q compute service list -c 'Host' -c 'Zone' -f value | awk '$2 != \"internal\" { print $1 }'); do\n if (( discoverable_hosts[$host] == 1 )); then\n echo \"(cellv2) compute node $host has registered\"\n unset discoverable_hosts[$host]\n fi\n done\n finished=1\n for host in \"${!discoverable_hosts[@]}\"; do\n if (( 
${discoverable_hosts[$host]} == 1 )); then\n echo \"(cellv2) compute node $host has not registered\"\n finished=0\n fi\n done\n remaining=$(( $timeout_at - $(date +\"%s\") ))\n if (( $finished == 1 )); then\n echo \"(cellv2) All nodes registered\"\n break\n elif (( $remaining <= 0 )); then\n echo \"(cellv2) WARNING: timeout waiting for nodes to register, running host discovery regardless\"\n echo \"(cellv2) Expected host list:\" $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e '/^nil$/d' | sort -u | tr ',' ' ')\n echo \"(cellv2) Detected host list:\" $(openstack -q compute service list -c 'Host' -c 'Zone' -f value | awk '$2 != \"internal\" { print $1 }' | sort -u | tr '\\n', ' ')\n break\n else\n echo \"(cellv2) Waiting ${remaining} seconds for hosts to register\"\n sleep $loop_wait\n fi\ndone\necho \"(cellv2) Running host discovery...\"\nsu nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 discover_hosts --by-service --verbose\"\n", "mode": "0700"}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho "Check if secret already exists"\nsecret_href=$(openstack secret list --name swift_root_secret_uuid)\nrc=$?\nif [[ $rc != 0 ]]; then\n echo "Failed to check secrets, check if Barbican in enabled and responding properly"\n exit $rc;\nfi\nif [ -z "$secret_href" ]; then\n echo "Create new secret"\n order_href=$(openstack secret order create --name swift_root_secret_uuid --payload-content-type="application/octet-stream" --algorithm aes --bit-length 256 --mode ctr key -f value -c "Order href")\nfi\n', 'mode': u'0700'}, 'key': u'create_swift_secret.sh'}) => {"changed": false, "item": {"key": "create_swift_secret.sh", "value": {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho \"Check if secret already exists\"\nsecret_href=$(openstack secret list --name swift_root_secret_uuid)\nrc=$?\nif [[ $rc != 0 ]]; then\n echo \"Failed to check secrets, check if Barbican in enabled and responding properly\"\n exit $rc;\nfi\nif [ -z \"$secret_href\" ]; then\n echo \"Create new secret\"\n order_href=$(openstack secret order create --name swift_root_secret_uuid --payload-content-type=\"application/octet-stream\" --algorithm aes --bit-length 256 --mode ctr key -f value -c \"Order href\")\nfi\n", "mode": "0700"}}, 
"skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n', 'mode': u'0755'}, 'key': u'neutron_ovs_agent_launcher.sh'}) => {"changed": false, "item": {"key": "neutron_ovs_agent_launcher.sh", "value": {"content": "#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n", "mode": "0755"}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\necho "retrieve key_id"\nloop_wait=2\nfor i in {0..5}; do\n #TODO update uuid from mistral here too\n secret_href=$(openstack secret list --name swift_root_secret_uuid)\n if [ "$secret_href" ]; then\n echo "set key_id in keymaster.conf"\n secret_href=$(openstack secret list --name swift_root_secret_uuid -f value -c "Secret href")\n crudini --set /etc/swift/keymaster.conf kms_keymaster key_id ${secret_href##*/}\n exit 0\n else\n echo "no key, wait for $loop_wait and check again"\n sleep $loop_wait\n ((loop_wait++))\n fi\ndone\necho "Failed to set secret in keymaster.conf, check if Barbican is enabled and responding properly"\nexit 1\n', 'mode': u'0700'}, 'key': u'set_swift_keymaster_key_id.sh'}) => {"changed": false, "item": {"key": "set_swift_keymaster_key_id.sh", "value": {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get 
/etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\necho \"retrieve key_id\"\nloop_wait=2\nfor i in {0..5}; do\n #TODO update uuid from mistral here too\n secret_href=$(openstack secret list --name swift_root_secret_uuid)\n if [ \"$secret_href\" ]; then\n echo \"set key_id in keymaster.conf\"\n secret_href=$(openstack secret list --name swift_root_secret_uuid -f value -c \"Secret href\")\n crudini --set /etc/swift/keymaster.conf kms_keymaster key_id ${secret_href##*/}\n exit 0\n else\n echo \"no key, wait for $loop_wait and check again\"\n sleep $loop_wait\n ((loop_wait++))\n fi\ndone\necho \"Failed to set secret in keymaster.conf, check if Barbican is enabled and responding properly\"\nexit 1\n", "mode": "0700"}}, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item={'value': {'content': u'#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n', 'mode': u'0755'}, 'key': u'neutron_ovs_agent_launcher.sh'}) => {"changed": false, "item": {"key": "neutron_ovs_agent_launcher.sh", "value": {"content": "#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n", "mode": "0755"}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nset -eux\nSTEP=$1\nTAGS=$2\nCONFIG=$3\nEXTRA_ARGS=${4:-\'\'}\nif [ -d /tmp/puppet-etc ]; then\n # ignore copy failures as these may be the same file depending on docker mounts\n cp -a /tmp/puppet-etc/* /etc/puppet || true\nfi\necho "{\\"step\\": ${STEP}}" > /etc/puppet/hieradata/docker.json\nexport FACTER_uuid=docker\nset +e\npuppet apply $EXTRA_ARGS \\\n --verbose \\\n --detailed-exitcodes \\\n --summarize \\\n --color=false \\\n --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules \\\n --tags $TAGS \\\n -e "${CONFIG}"\nrc=$?\nset -e\nset +ux\nif [ $rc -eq 2 -o $rc -eq 0 ]; then\n exit 0\nfi\nexit $rc\n', 'mode': u'0700'}, 'key': u'docker_puppet_apply.sh'}) => {"changed": false, "item": {"key": "docker_puppet_apply.sh", "value": {"content": "#!/bin/bash\nset -eux\nSTEP=$1\nTAGS=$2\nCONFIG=$3\nEXTRA_ARGS=${4:-''}\nif [ -d /tmp/puppet-etc ]; then\n # ignore copy failures as these may be the same file depending on docker mounts\n cp -a /tmp/puppet-etc/* /etc/puppet || true\nfi\necho \"{\\\"step\\\": ${STEP}}\" > /etc/puppet/hieradata/docker.json\nexport FACTER_uuid=docker\nset +e\npuppet apply 
$EXTRA_ARGS \\\n --verbose \\\n --detailed-exitcodes \\\n --summarize \\\n --color=false \\\n --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules \\\n --tags $TAGS \\\n -e \"${CONFIG}\"\nrc=$?\nset -e\nset +ux\nif [ $rc -eq 2 -o $rc -eq 0 ]; then\n exit 0\nfi\nexit $rc\n", "mode": "0700"}}, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item={'value': {'content': u'#!/usr/bin/env python\n#\n# Copyright 2018 Red Hat Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the "License"); you may\n# not use this file except in compliance with the License. You may obtain\n# a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT\n# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the\n# License for the specific language governing permissions and limitations\n# under the License.\nfrom __future__ import print_function\nimport logging\nimport os\nimport pwd\nimport stat\nimport sys\n\nlogging.basicConfig(stream=sys.stdout, level=logging.DEBUG)\nLOG = logging.getLogger(\'nova_statedir\')\n\n\nclass PathManager(object):\n """Helper class to manipulate ownership of a given path"""\n def __init__(self, path):\n self.path = path\n self._update()\n\n def _update(self):\n statinfo = os.stat(self.path)\n self.is_dir = stat.S_ISDIR(statinfo.st_mode)\n self.uid = statinfo.st_uid\n self.gid = statinfo.st_gid\n\n def __str__(self):\n return "uid: {} gid: {} path: {}{}".format(\n self.uid,\n self.gid,\n self.path,\n \'/\' if self.is_dir else \'\'\n )\n\n def has_owner(self, uid, gid):\n return self.uid == uid and self.gid == gid\n\n def has_either(self, uid, gid):\n return self.uid == uid or self.gid == gid\n\n def chown(self, uid, gid):\n target_uid = -1\n target_gid = -1\n if self.uid != uid:\n target_uid = uid\n if self.gid != gid:\n target_gid = gid\n if (target_uid, target_gid) != (-1, -1):\n LOG.info(\'Changing ownership of %s from %d:%d to %d:%d\',\n self.path,\n self.uid,\n self.gid,\n self.uid if target_uid == -1 else target_uid,\n self.gid if target_gid == -1 else target_gid)\n os.chown(self.path, target_uid, target_gid)\n self._update()\n else:\n LOG.info(\'Ownership of %s already %d:%d\',\n self.path,\n uid,\n gid)\n\n\nclass NovaStatedirOwnershipManager(object):\n """Class to manipulate the ownership of the nova statedir (/var/lib/nova).\n\n The nova uid/gid differ on the host and container images. An upgrade\n that switches from host systemd services to docker requires a change in\n ownership. Previously this was a naive recursive chown, however this\n causes issues if nova instances are shared via an NFS mount: any open\n filehandles in qemu/libvirt fail with an I/O error (LP1778465).\n\n Instead the upgrade/FFU ansible tasks now lay down a marker file when\n stopping and disabling the host systemd services. We use this file to\n determine the host nova uid/gid. We then walk the tree and update any\n files that have the host uid/gid to the docker nova uid/gid. As files\n owned by root/qemu etc... are ignored, this avoids the issues with open\n filehandles. The marker is removed once the tree has been walked.\n\n For subsequent runs, or for a new deployment, we simply ensure that the\n docker nova user/group owns all directories.
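
The docker_puppet_apply.sh script logged above deliberately treats exit status 2 the same as 0. With --detailed-exitcodes, puppet apply returns 0 for a clean run with no changes, 2 for a clean run that applied changes, 4 for failures, and 6 for changes applied together with failures, so only 0 and 2 indicate a healthy run. A standalone sketch of that convention:

  #!/bin/bash
  # With --detailed-exitcodes, puppet apply returns:
  #   0 = success, no changes    2 = success, changes applied
  #   4 = failures               6 = changes applied, with failures
  puppet apply --detailed-exitcodes -e 'notify { "example": }'
  rc=$?
  if [ $rc -eq 0 ] || [ $rc -eq 2 ]; then
      echo "puppet run ok (rc=$rc)"
      exit 0
  fi
  echo "puppet run failed (rc=$rc)" >&2
  exit $rc
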
This is required as the\n directories are created with root ownership in host_prep_tasks (the\n docker nova uid/gid is not known in this context).\n """\n def __init__(self, statedir, upgrade_marker=\'upgrade_marker\',\n nova_user=\'nova\'):\n self.statedir = statedir\n self.nova_user = nova_user\n\n self.upgrade_marker_path = os.path.join(statedir, upgrade_marker)\n self.upgrade = os.path.exists(self.upgrade_marker_path)\n\n self.target_uid, self.target_gid = self._get_nova_ids()\n self.previous_uid, self.previous_gid = self._get_previous_nova_ids()\n self.id_change = (self.target_uid, self.target_gid) != \\\n (self.previous_uid, self.previous_gid)\n\n def _get_nova_ids(self):\n nova_uid, nova_gid = pwd.getpwnam(self.nova_user)[2:4]\n return nova_uid, nova_gid\n\n def _get_previous_nova_ids(self):\n if self.upgrade:\n statinfo = os.stat(self.upgrade_marker_path)\n return statinfo.st_uid, statinfo.st_gid\n else:\n return self._get_nova_ids()\n\n def _walk(self, top):\n for f in os.listdir(top):\n pathname = os.path.join(top, f)\n\n if pathname == self.upgrade_marker_path:\n continue\n\n pathinfo = PathManager(pathname)\n LOG.info("Checking %s", pathinfo)\n if pathinfo.is_dir:\n # Always chown the directories\n pathinfo.chown(self.target_uid, self.target_gid)\n self._walk(pathname)\n elif self.id_change:\n # Only chown files if it\'s an upgrade and the file is owned by\n # the host nova uid/gid\n pathinfo.chown(\n self.target_uid if pathinfo.uid == self.previous_uid\n else pathinfo.uid,\n self.target_gid if pathinfo.gid == self.previous_gid\n else pathinfo.gid\n )\n\n def run(self):\n LOG.info(\'Applying nova statedir ownership\')\n LOG.info(\'Target ownership for %s: %d:%d\',\n self.statedir,\n self.target_uid,\n self.target_gid)\n\n pathinfo = PathManager(self.statedir)\n LOG.info("Checking %s", pathinfo)\n pathinfo.chown(self.target_uid, self.target_gid)\n\n self._walk(self.statedir)\n\n if self.upgrade:\n LOG.info(\'Removing upgrade_marker %s\',\n self.upgrade_marker_path)\n os.unlink(self.upgrade_marker_path)\n\n LOG.info(\'Nova statedir ownership complete\')\n\nif __name__ == \'__main__\':\n NovaStatedirOwnershipManager(\'/var/lib/nova\').run()\n', 'mode': u'0700'}, 'key': u'nova_statedir_ownership.py'}) => {"changed": false, "item": {"key": "nova_statedir_ownership.py", "value": {"content": "#!/usr/bin/env python\n#\n# Copyright 2018 Red Hat Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"); you may\n# not use this file except in compliance with the License. You may obtain\n# a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT\n# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the\n# License for the specific language governing permissions and limitations\n# under the License.\nfrom __future__ import print_function\nimport logging\nimport os\nimport pwd\nimport stat\nimport sys\n\nlogging.basicConfig(stream=sys.stdout, level=logging.DEBUG)\nLOG = logging.getLogger('nova_statedir')\n\n\nclass PathManager(object):\n \"\"\"Helper class to manipulate ownership of a given path\"\"\"\n def __init__(self, path):\n self.path = path\n self._update()\n\n def _update(self):\n statinfo = os.stat(self.path)\n self.is_dir = stat.S_ISDIR(statinfo.st_mode)\n self.uid = statinfo.st_uid\n self.gid = statinfo.st_gid\n\n def __str__(self):\n return \"uid: {} gid: {} path: {}{}\".format(\n self.uid,\n self.gid,\n self.path,\n '/' if self.is_dir else ''\n )\n\n def has_owner(self, uid, gid):\n return self.uid == uid and self.gid == gid\n\n def has_either(self, uid, gid):\n return self.uid == uid or self.gid == gid\n\n def chown(self, uid, gid):\n target_uid = -1\n target_gid = -1\n if self.uid != uid:\n target_uid = uid\n if self.gid != gid:\n target_gid = gid\n if (target_uid, target_gid) != (-1, -1):\n LOG.info('Changing ownership of %s from %d:%d to %d:%d',\n self.path,\n self.uid,\n self.gid,\n self.uid if target_uid == -1 else target_uid,\n self.gid if target_gid == -1 else target_gid)\n os.chown(self.path, target_uid, target_gid)\n self._update()\n else:\n LOG.info('Ownership of %s already %d:%d',\n self.path,\n uid,\n gid)\n\n\nclass NovaStatedirOwnershipManager(object):\n \"\"\"Class to manipulate the ownership of the nova statedir (/var/lib/nova).\n\n The nova uid/gid differ on the host and container images. An upgrade\n that switches from host systemd services to docker requires a change in\n ownership. Previously this was a naive recursive chown, however this\n causes issues if nova instances are shared via an NFS mount: any open\n filehandles in qemu/libvirt fail with an I/O error (LP1778465).\n\n Instead the upgrade/FFU ansible tasks now lay down a marker file when\n stopping and disabling the host systemd services. We use this file to\n determine the host nova uid/gid. We then walk the tree and update any\n files that have the host uid/gid to the docker nova uid/gid. As files\n owned by root/qemu etc... are ignored, this avoids the issues with open\n filehandles. The marker is removed once the tree has been walked.\n\n For subsequent runs, or for a new deployment, we simply ensure that the\n docker nova user/group owns all directories.
This is required as the\n directories are created with root ownership in host_prep_tasks (the\n docker nova uid/gid is not known in this context).\n \"\"\"\n def __init__(self, statedir, upgrade_marker='upgrade_marker',\n nova_user='nova'):\n self.statedir = statedir\n self.nova_user = nova_user\n\n self.upgrade_marker_path = os.path.join(statedir, upgrade_marker)\n self.upgrade = os.path.exists(self.upgrade_marker_path)\n\n self.target_uid, self.target_gid = self._get_nova_ids()\n self.previous_uid, self.previous_gid = self._get_previous_nova_ids()\n self.id_change = (self.target_uid, self.target_gid) != \\\n (self.previous_uid, self.previous_gid)\n\n def _get_nova_ids(self):\n nova_uid, nova_gid = pwd.getpwnam(self.nova_user)[2:4]\n return nova_uid, nova_gid\n\n def _get_previous_nova_ids(self):\n if self.upgrade:\n statinfo = os.stat(self.upgrade_marker_path)\n return statinfo.st_uid, statinfo.st_gid\n else:\n return self._get_nova_ids()\n\n def _walk(self, top):\n for f in os.listdir(top):\n pathname = os.path.join(top, f)\n\n if pathname == self.upgrade_marker_path:\n continue\n\n pathinfo = PathManager(pathname)\n LOG.info(\"Checking %s\", pathinfo)\n if pathinfo.is_dir:\n # Always chown the directories\n pathinfo.chown(self.target_uid, self.target_gid)\n self._walk(pathname)\n elif self.id_change:\n # Only chown files if it's an upgrade and the file is owned by\n # the host nova uid/gid\n pathinfo.chown(\n self.target_uid if pathinfo.uid == self.previous_uid\n else pathinfo.uid,\n self.target_gid if pathinfo.gid == self.previous_gid\n else pathinfo.gid\n )\n\n def run(self):\n LOG.info('Applying nova statedir ownership')\n LOG.info('Target ownership for %s: %d:%d',\n self.statedir,\n self.target_uid,\n self.target_gid)\n\n pathinfo = PathManager(self.statedir)\n LOG.info(\"Checking %s\", pathinfo)\n pathinfo.chown(self.target_uid, self.target_gid)\n\n self._walk(self.statedir)\n\n if self.upgrade:\n LOG.info('Removing upgrade_marker %s',\n self.upgrade_marker_path)\n os.unlink(self.upgrade_marker_path)\n\n LOG.info('Nova statedir ownership complete')\n\nif __name__ == '__main__':\n NovaStatedirOwnershipManager('/var/lib/nova').run()\n", "mode": "0700"}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nDEFID=$(nova-manage cell_v2 list_cells | sed -e \'1,3d\' -e \'$d\' | awk -F \' *| *\' \'$2 == "default" {print $4}\')\nif [ "$DEFID" ]; then\n echo "(cellv2) Updating default cell_v2 cell $DEFID"\n su nova -s /bin/bash -c "/usr/bin/nova-manage cell_v2 update_cell --cell_uuid $DEFID --name=default"\nelse\n echo "(cellv2) Creating default cell_v2 cell"\n su nova -s /bin/bash -c "/usr/bin/nova-manage cell_v2 create_cell --name=default"\nfi\n', 'mode': u'0700'}, 'key': u'nova_api_ensure_default_cell.sh'}) => {"changed": false, "item": {"key": "nova_api_ensure_default_cell.sh", "value": {"content": "#!/bin/bash\nDEFID=$(nova-manage cell_v2 list_cells | sed -e '1,3d' -e '$d' | awk -F ' *| *' '$2 == \"default\" {print $4}')\nif [ \"$DEFID\" ]; then\n echo \"(cellv2) Updating default cell_v2 cell $DEFID\"\n su nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 update_cell --cell_uuid $DEFID --name=default\"\nelse\n echo \"(cellv2) Creating default cell_v2 cell\"\n su nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 create_cell --name=default\"\nfi\n", "mode": "0700"}}, "skip_reason": "Conditional result was False"} > >TASK [Set docker_config_default fact] ****************************************** >Friday 21 September 
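
The nova_statedir_ownership.py script logged above implements the selective chown its docstring describes: directories under /var/lib/nova are always re-owned to the container nova uid/gid, while plain files are re-owned only when they still carry the old host nova uid/gid recorded on the upgrade_marker file, leaving root- and qemu-owned files (and any open NFS filehandles on them) untouched. A simplified shell rendering of the same walk, assuming the script's statedir and marker conventions:

  #!/bin/bash
  statedir=/var/lib/nova
  marker=$statedir/upgrade_marker
  uid=$(id -u nova) gid=$(id -g nova)
  # directories always get the container nova ownership
  find "$statedir" -type d -exec chown "$uid:$gid" {} +
  if [ -e "$marker" ]; then
      # the marker's owner records the pre-upgrade host nova uid/gid
      prev_uid=$(stat -c %u "$marker") prev_gid=$(stat -c %g "$marker")
      # only files still owned by the host nova ids are touched
      find "$statedir" -type f ! -path "$marker" -uid "$prev_uid" -exec chown "$uid" {} +
      find "$statedir" -type f ! -path "$marker" -gid "$prev_gid" -exec chgrp "$gid" {} +
      rm -f "$marker"
  fi
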
2018 08:38:26 -0400 (0:00:00.130) 0:21:48.942 ****** >skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >skipping: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} > >TASK [Set 
docker_startup_configs_with_default fact] **************************** >Friday 21 September 2018 08:38:26 -0400 (0:00:00.154) 0:21:49.097 ****** >skipping: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} > >TASK [Write docker-container-startup-configs] ********************************** >Friday 21 September 2018 08:38:26 -0400 (0:00:00.103) 0:21:49.201 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Write per-step docker-container-startup-configs] ************************* >Friday 21 September 2018 08:38:26 -0400 (0:00:00.149) 0:21:49.350 ****** >skipping: [compute-0] => (item={'value': {}, 'key': u'step_1'}) => {"changed": false, "item": {"key": "step_1", "value": {}}, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item={'value': {'neutron_ovs_bridge': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-20.1', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'puppet', u'apply', u'--modulepath', u'/etc/puppet/modules:/usr/share/openstack-puppet/modules', u'--tags', u'file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config', u'-v', u'-e', u'include neutron::agents::ml2::ovs'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/etc/puppet:/etc/puppet:ro', u'/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro', u'/var/run/openvswitch/:/var/run/openvswitch/'], 'net': u'host', 'detach': False, 'privileged': True}, 'nova_statedir_owner': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-09-20.1', 'command': u'/docker-config-scripts/nova_statedir_ownership.py', 'user': u'root', 'volumes': [u'/var/lib/nova:/var/lib/nova:shared', u'/var/lib/docker-config-scripts/:/docker-config-scripts/'], 'detach': False, 'privileged': False}, 'iscsid': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', u'/dev/:/dev/', u'/run/:/run/', u'/sys:/sys', u'/lib/modules:/lib/modules:ro', u'/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_libvirt': {'start_order': 1, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-09-20.1', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/nova_libvirt.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/lib/modules:/lib/modules:ro', u'/dev:/dev', u'/run:/run', u'/sys/fs/cgroup:/sys/fs/cgroup', u'/var/lib/nova:/var/lib/nova:shared', u'/etc/libvirt:/etc/libvirt', u'/var/run/libvirt:/var/run/libvirt', u'/var/lib/libvirt:/var/lib/libvirt', u'/var/log/containers/libvirt:/var/log/libvirt', u'/var/log/libvirt/qemu:/var/log/libvirt/qemu:ro', u'/var/lib/vhost_sockets:/var/lib/vhost_sockets', u'/sys/fs/selinux:/sys/fs/selinux'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_virtlogd': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-09-20.1', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/dev:/dev', u'/run:/run', u'/sys/fs/cgroup:/sys/fs/cgroup', u'/var/lib/nova:/var/lib/nova:shared', u'/var/run/libvirt:/var/run/libvirt', u'/var/lib/libvirt:/var/lib/libvirt', u'/etc/libvirt/qemu:/etc/libvirt/qemu:ro', u'/var/log/libvirt/qemu:/var/log/libvirt/qemu'], 'net': u'host', 'privileged': True, 'restart': u'always'}}, 'key': u'step_3'}) => {"changed": false, "item": {"key": "step_3", "value": {"iscsid": {"environment": 
["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-20.1", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro", "/dev/:/dev/", "/run/:/run/", "/sys:/sys", "/lib/modules:/lib/modules:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro"]}, "neutron_ovs_bridge": {"command": ["puppet", "apply", "--modulepath", "/etc/puppet/modules:/usr/share/openstack-puppet/modules", "--tags", "file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config", "-v", "-e", "include neutron::agents::ml2::ovs"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-20.1", "net": "host", "pid": "host", "privileged": true, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/etc/puppet:/etc/puppet:ro", "/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro", "/var/run/openvswitch/:/var/run/openvswitch/"]}, "nova_libvirt": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-09-20.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova_libvirt.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/lib/modules:/lib/modules:ro", "/dev:/dev", "/run:/run", "/sys/fs/cgroup:/sys/fs/cgroup", 
"/var/lib/nova:/var/lib/nova:shared", "/etc/libvirt:/etc/libvirt", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt", "/var/log/containers/libvirt:/var/log/libvirt", "/var/log/libvirt/qemu:/var/log/libvirt/qemu:ro", "/var/lib/vhost_sockets:/var/lib/vhost_sockets", "/sys/fs/selinux:/sys/fs/selinux"]}, "nova_statedir_owner": {"command": "/docker-config-scripts/nova_statedir_ownership.py", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-09-20.1", "privileged": false, "user": "root", "volumes": ["/var/lib/nova:/var/lib/nova:shared", "/var/lib/docker-config-scripts/:/docker-config-scripts/"]}, "nova_virtlogd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-09-20.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 0, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/dev:/dev", "/run:/run", "/sys/fs/cgroup:/sys/fs/cgroup", "/var/lib/nova:/var/lib/nova:shared", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt", "/etc/libvirt/qemu:/etc/libvirt/qemu:ro", "/var/log/libvirt/qemu:/var/log/libvirt/qemu"]}}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'cinder_volume_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-20.1' '192.168.24.1:8787/rhosp14/openstack-cinder-volume:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'mysql_image_tag': {'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1' '192.168.24.1:8787/rhosp14/openstack-mariadb:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'mysql_data_ownership': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1', 'command': [u'chown', u'-R', u'mysql:', u'/var/lib/mysql'], 'user': u'root', 'volumes': [u'/var/lib/mysql:/var/lib/mysql'], 'net': u'host', 'detach': False}, 'redis_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-redis:2018-09-20.1', 'command': [u'/bin/bash', u'-c', 
u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-redis:2018-09-20.1' '192.168.24.1:8787/rhosp14/openstack-redis:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'mysql_bootstrap': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS', u'KOLLA_BOOTSTRAP=True', u'DB_MAX_TIMEOUT=60', u'DB_CLUSTERCHECK_PASSWORD=01uMEtrcy1XQLgnZ0spBcEeFG', u'DB_ROOT_PASSWORD=VmByi3iDWE'], 'command': [u'bash', u'-ec', u'if [ -e /var/lib/mysql/mysql ]; then exit 0; fi\necho -e "\\n[mysqld]\\nwsrep_provider=none" >> /etc/my.cnf\nkolla_set_configs\nsudo -u mysql -E kolla_extend_start\nmysqld_safe --skip-networking --wsrep-on=OFF &\ntimeout ${DB_MAX_TIMEOUT} /bin/bash -c \'until mysqladmin -uroot -p"${DB_ROOT_PASSWORD}" ping 2>/dev/null; do sleep 1; done\'\nmysql -uroot -p"${DB_ROOT_PASSWORD}" -e "CREATE USER \'clustercheck\'@\'localhost\' IDENTIFIED BY \'${DB_CLUSTERCHECK_PASSWORD}\';"\nmysql -uroot -p"${DB_ROOT_PASSWORD}" -e "GRANT PROCESS ON *.* TO \'clustercheck\'@\'localhost\' WITH GRANT OPTION;"\ntimeout ${DB_MAX_TIMEOUT} mysqladmin -uroot -p"${DB_ROOT_PASSWORD}" shutdown'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/mysql.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro', u'/var/lib/mysql:/var/lib/mysql'], 'net': u'host', 'detach': False}, 'haproxy_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-20.1' '192.168.24.1:8787/rhosp14/openstack-haproxy:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'rabbitmq_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-20.1' '192.168.24.1:8787/rhosp14/openstack-rabbitmq:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'cinder_backup_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag 
'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-20.1' '192.168.24.1:8787/rhosp14/openstack-cinder-backup:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'rabbitmq_bootstrap': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS', u'KOLLA_BOOTSTRAP=True', u'RABBITMQ_CLUSTER_COOKIE=bo2CgGlbFlVu6tTAeUPw'], 'volumes': [u'/var/lib/kolla/config_files/rabbitmq.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro', u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/var/lib/rabbitmq:/var/lib/rabbitmq'], 'net': u'host', 'privileged': False}, 'memcached': {'start_order': 0, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-memcached:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'source /etc/sysconfig/memcached; /usr/bin/memcached -p ${PORT} -u ${USER} -m ${CACHESIZE} -c ${MAXCONN} $OPTIONS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}}, 'key': u'step_1'}) => {"changed": false, "item": {"key": "step_1", "value": {"cinder_backup_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-20.1' '192.168.24.1:8787/rhosp14/openstack-cinder-backup:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-20.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "cinder_volume_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-20.1' '192.168.24.1:8787/rhosp14/openstack-cinder-volume:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-20.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "haproxy_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-20.1' '192.168.24.1:8787/rhosp14/openstack-haproxy:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-20.1", "net": "host", "start_order": 1, "user": "root", "volumes": 
["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "memcached": {"command": ["/bin/bash", "-c", "source /etc/sysconfig/memcached; /usr/bin/memcached -p ${PORT} -u ${USER} -m ${CACHESIZE} -c ${MAXCONN} $OPTIONS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-memcached:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "start_order": 0, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro"]}, "mysql_bootstrap": {"command": ["bash", "-ec", "if [ -e /var/lib/mysql/mysql ]; then exit 0; fi\necho -e \"\\n[mysqld]\\nwsrep_provider=none\" >> /etc/my.cnf\nkolla_set_configs\nsudo -u mysql -E kolla_extend_start\nmysqld_safe --skip-networking --wsrep-on=OFF &\ntimeout ${DB_MAX_TIMEOUT} /bin/bash -c 'until mysqladmin -uroot -p\"${DB_ROOT_PASSWORD}\" ping 2>/dev/null; do sleep 1; done'\nmysql -uroot -p\"${DB_ROOT_PASSWORD}\" -e \"CREATE USER 'clustercheck'@'localhost' IDENTIFIED BY '${DB_CLUSTERCHECK_PASSWORD}';\"\nmysql -uroot -p\"${DB_ROOT_PASSWORD}\" -e \"GRANT PROCESS ON *.* TO 'clustercheck'@'localhost' WITH GRANT OPTION;\"\ntimeout ${DB_MAX_TIMEOUT} mysqladmin -uroot -p\"${DB_ROOT_PASSWORD}\" shutdown"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "KOLLA_BOOTSTRAP=True", "DB_MAX_TIMEOUT=60", "DB_CLUSTERCHECK_PASSWORD=01uMEtrcy1XQLgnZ0spBcEeFG", "DB_ROOT_PASSWORD=VmByi3iDWE"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/mysql.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro", "/var/lib/mysql:/var/lib/mysql"]}, "mysql_data_ownership": {"command": ["chown", "-R", "mysql:", "/var/lib/mysql"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1", "net": "host", "start_order": 0, "user": "root", "volumes": ["/var/lib/mysql:/var/lib/mysql"]}, "mysql_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1' '192.168.24.1:8787/rhosp14/openstack-mariadb:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1", "net": "host", "start_order": 2, "user": "root", "volumes": 
["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "rabbitmq_bootstrap": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "KOLLA_BOOTSTRAP=True", "RABBITMQ_CLUSTER_COOKIE=bo2CgGlbFlVu6tTAeUPw"], "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-20.1", "net": "host", "privileged": false, "start_order": 0, "volumes": ["/var/lib/kolla/config_files/rabbitmq.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro", "/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/var/lib/rabbitmq:/var/lib/rabbitmq"]}, "rabbitmq_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-20.1' '192.168.24.1:8787/rhosp14/openstack-rabbitmq:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-20.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "redis_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-redis:2018-09-20.1' '192.168.24.1:8787/rhosp14/openstack-redis:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-09-20.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}}}, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item={'value': {}, 'key': u'step_2'}) => {"changed": false, "item": {"key": "step_2", "value": {}}, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item={'value': {}, 'key': u'step_5'}) => {"changed": false, "item": {"key": "step_5", "value": {}}, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item={'value': {}, 'key': u'step_1'}) => {"changed": false, "item": {"key": "step_1", "value": {}}, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item={'value': {}, 'key': u'step_3'}) => {"changed": false, "item": {"key": "step_3", "value": {}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'nova_placement': {'start_order': 1, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-placement:/var/log/httpd', 
u'/var/lib/kolla/config_files/nova_placement.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_placement/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'restart': u'always'}, 'swift_rsync_fix': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'sed -i "/pid file/d" /var/lib/kolla/config_files/src/etc/rsyncd.conf'], 'user': u'root', 'volumes': [u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:rw'], 'net': u'host', 'detach': False}, 'nova_db_sync': {'start_order': 3, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1', 'command': u"/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage db sync'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro'], 'net': u'host', 'detach': False}, 'heat_engine_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-09-20.1', 'command': u"/usr/bin/bootstrap_host_exec heat_engine su heat -s /bin/bash -c 'heat-manage db_sync'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/lib/config-data/heat/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/heat/etc/heat/:/etc/heat/:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'swift_copy_rings': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-20.1', 'detach': False, 'command': [u'/bin/bash', u'-c', u'cp -v -a -t /etc/swift /swift_ringbuilder/etc/swift/*.gz /swift_ringbuilder/etc/swift/*.builder /swift_ringbuilder/etc/swift/backups'], 'user': u'root', 'volumes': [u'/var/lib/config-data/puppet-generated/swift/etc/swift:/etc/swift:rw', u'/var/lib/config-data/swift_ringbuilder:/swift_ringbuilder:ro']}, 'nova_api_ensure_default_cell': {'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1', 'command': u'/usr/bin/bootstrap_host_exec nova_api /nova_api_ensure_default_cell.sh', 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/docker-config-scripts/nova_api_ensure_default_cell.sh:/nova_api_ensure_default_cell.sh:ro'], 'net': u'host', 'detach': False}, 'keystone_cron': {'start_order': 4, 'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'/bin/bash', u'-c', u'/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd', u'/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'panko_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-panko-api:2018-09-20.1', 'command': u"/usr/bin/bootstrap_host_exec panko_api su panko -s /bin/bash -c '/usr/bin/panko-dbsync '", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/panko:/var/log/panko', u'/var/log/containers/httpd/panko-api:/var/log/httpd', u'/var/lib/config-data/panko/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/panko/etc/panko:/etc/panko:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'cinder_backup_init_logs': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'user': u'root', 'volumes': [u'/var/log/containers/cinder:/var/log/cinder'], 'privileged': False}, 'nova_api_db_sync': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1', 'command': u"/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c 
'/usr/bin/nova-manage api_db sync'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro'], 'net': u'host', 'detach': False}, 'iscsid': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', u'/dev/:/dev/', u'/run/:/run/', u'/sys:/sys', u'/lib/modules:/lib/modules:ro', u'/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'keystone_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-20.1', 'environment': [u'KOLLA_BOOTSTRAP=True', u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'/usr/bin/bootstrap_host_exec', u'keystone', u'/usr/local/bin/kolla_start'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd', u'/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'detach': False, 'privileged': False}, 'ceilometer_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'start_order': 0, 'volumes': [u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'user': u'root'}, 'keystone': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 
'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd', u'/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'aodh_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-09-20.1', 'command': u'/usr/bin/bootstrap_host_exec aodh_api su aodh -s /bin/bash -c /usr/bin/aodh-dbsync', 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/aodh/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/aodh/etc/aodh/:/etc/aodh/:ro', u'/var/log/containers/aodh:/var/log/aodh', u'/var/log/containers/httpd/aodh-api:/var/log/httpd'], 'net': u'host', 'detach': False, 'privileged': False}, 'cinder_volume_init_logs': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'user': u'root', 'volumes': [u'/var/log/containers/cinder:/var/log/cinder'], 'privileged': False}, 'neutron_ovs_bridge': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-20.1', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'puppet', u'apply', u'--modulepath', u'/etc/puppet/modules:/usr/share/openstack-puppet/modules', u'--tags', u'file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config', u'-v', u'-e', u'include neutron::agents::ml2::ovs'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/etc/puppet:/etc/puppet:ro', 
u'/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro', u'/var/run/openvswitch/:/var/run/openvswitch/'], 'net': u'host', 'detach': False, 'privileged': True}, 'cinder_api_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-20.1', 'command': [u'/usr/bin/bootstrap_host_exec', u'cinder_api', u"su cinder -s /bin/bash -c 'cinder-manage db sync --bump-versions'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/cinder/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro', u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd'], 'net': u'host', 'detach': False, 'privileged': False}, 'nova_api_map_cell0': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1', 'command': u"/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage cell_v2 map_cell0'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro'], 'net': u'host', 'detach': False}, 'glance_api_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-glance-api:2018-09-20.1', 'environment': [u'KOLLA_BOOTSTRAP=True', u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': u"/usr/bin/bootstrap_host_exec glance_api su glance -s /bin/bash -c '/usr/local/bin/kolla_start'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/glance:/var/log/glance', u'/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/glance:/var/lib/glance:slave'], 'net': u'host', 'detach': False, 'privileged': False}, 'neutron_db_sync': 
{'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-20.1', 'command': [u'/usr/bin/bootstrap_host_exec', u'neutron_api', u'neutron-db-manage', u'upgrade', u'heads'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/log/containers/httpd/neutron-api:/var/log/httpd', u'/var/lib/config-data/neutron/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/neutron/etc/neutron:/etc/neutron:ro', u'/var/lib/config-data/neutron/usr/share/neutron:/usr/share/neutron:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'sahara_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-09-20.1', 'command': u"/usr/bin/bootstrap_host_exec sahara_api su sahara -s /bin/bash -c 'sahara-db-manage --config-file /etc/sahara/sahara.conf upgrade head'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/sahara/etc/sahara/:/etc/sahara/:ro', u'/lib/modules:/lib/modules:ro', u'/var/lib/sahara:/var/lib/sahara', u'/var/log/containers/sahara:/var/log/sahara'], 'net': u'host', 'detach': False, 'privileged': False}, 'keystone_bootstrap': {'action': u'exec', 'start_order': 3, 'command': [u'keystone', u'/usr/bin/bootstrap_host_exec', u'keystone', u'keystone-manage', u'bootstrap', u'--bootstrap-password', u'wIdMrXYZVQy05wYJArw8Vja2H'], 'user': u'root'}, 'horizon': {'image': u'192.168.24.1:8787/rhosp14/openstack-horizon:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS', u'ENABLE_IRONIC=yes', u'ENABLE_MANILA=yes', u'ENABLE_HEAT=yes', u'ENABLE_MISTRAL=yes', u'ENABLE_OCTAVIA=yes', u'ENABLE_SAHARA=yes', u'ENABLE_CLOUDKITTY=no', u'ENABLE_FREEZER=no', u'ENABLE_FWAAS=no', u'ENABLE_KARBOR=no', u'ENABLE_DESIGNATE=no', u'ENABLE_MAGNUM=no', u'ENABLE_MURANO=no', u'ENABLE_NEUTRON_LBAAS=no', u'ENABLE_SEARCHLIGHT=no', u'ENABLE_SENLIN=no', u'ENABLE_SOLUM=no', u'ENABLE_TACKER=no', u'ENABLE_TROVE=no', u'ENABLE_WATCHER=no', u'ENABLE_ZAQAR=no', u'ENABLE_ZUN=no'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/horizon/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/horizon:/var/log/horizon', u'/var/log/containers/httpd/horizon:/var/log/httpd', u'/var/www/:/var/www/:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_setup_srv': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-20.1', 'command': [u'chown', u'-R', u'swift:', u'/srv/node'], 'user': u'root', 'volumes': [u'/srv/node:/srv/node']}}, 'key': u'step_3'}) => {"changed": false, "item": {"key": "step_3", "value": {"aodh_db_sync": {"command": "/usr/bin/bootstrap_host_exec aodh_api su aodh -s /bin/bash -c /usr/bin/aodh-dbsync", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-09-20.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/aodh/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/aodh/etc/aodh/:/etc/aodh/:ro", "/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd"]}, "ceilometer_init_log": {"command": ["/bin/bash", "-c", "chown -R ceilometer:ceilometer /var/log/ceilometer"], "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-09-20.1", "start_order": 0, "user": "root", "volumes": ["/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_api_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_api", "su cinder -s /bin/bash -c 'cinder-manage db sync --bump-versions'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-20.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/cinder/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_backup_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-20.1", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "cinder_volume_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-20.1", "privileged": false, "start_order": 0, "user": "root", 
"volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "glance_api_db_sync": {"command": "/usr/bin/bootstrap_host_exec glance_api su glance -s /bin/bash -c '/usr/local/bin/kolla_start'", "detach": false, "environment": ["KOLLA_BOOTSTRAP=True", "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-glance-api:2018-09-20.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/glance:/var/log/glance", "/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/glance:/var/lib/glance:slave"]}, "heat_engine_db_sync": {"command": "/usr/bin/bootstrap_host_exec heat_engine su heat -s /bin/bash -c 'heat-manage db_sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-09-20.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/lib/config-data/heat/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/heat/etc/heat/:/etc/heat/:ro"]}, "horizon": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "ENABLE_IRONIC=yes", "ENABLE_MANILA=yes", "ENABLE_HEAT=yes", "ENABLE_MISTRAL=yes", "ENABLE_OCTAVIA=yes", "ENABLE_SAHARA=yes", "ENABLE_CLOUDKITTY=no", "ENABLE_FREEZER=no", "ENABLE_FWAAS=no", "ENABLE_KARBOR=no", "ENABLE_DESIGNATE=no", "ENABLE_MAGNUM=no", "ENABLE_MURANO=no", "ENABLE_NEUTRON_LBAAS=no", "ENABLE_SEARCHLIGHT=no", "ENABLE_SENLIN=no", "ENABLE_SOLUM=no", "ENABLE_TACKER=no", "ENABLE_TROVE=no", "ENABLE_WATCHER=no", "ENABLE_ZAQAR=no", "ENABLE_ZUN=no"], "image": "192.168.24.1:8787/rhosp14/openstack-horizon:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro", 
"/var/lib/config-data/puppet-generated/horizon/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/horizon:/var/log/horizon", "/var/log/containers/httpd/horizon:/var/log/httpd", "/var/www/:/var/www/:ro", "", ""]}, "iscsid": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-20.1", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro", "/dev/:/dev/", "/run/:/run/", "/sys:/sys", "/lib/modules:/lib/modules:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro"]}, "keystone": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro", "", ""]}, "keystone_bootstrap": {"action": "exec", "command": ["keystone", "/usr/bin/bootstrap_host_exec", "keystone", "keystone-manage", "bootstrap", "--bootstrap-password", "wIdMrXYZVQy05wYJArw8Vja2H"], "start_order": 3, "user": "root"}, "keystone_cron": {"command": ["/bin/bash", "-c", "/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n"], "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "start_order": 4, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", 
"/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro"]}, "keystone_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "keystone", "/usr/local/bin/kolla_start"], "detach": false, "environment": ["KOLLA_BOOTSTRAP=True", "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-20.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro", "", ""]}, "neutron_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "neutron_api", "neutron-db-manage", "upgrade", "heads"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-20.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd", "/var/lib/config-data/neutron/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/neutron/etc/neutron:/etc/neutron:ro", "/var/lib/config-data/neutron/usr/share/neutron:/usr/share/neutron:ro"]}, "neutron_ovs_bridge": {"command": ["puppet", "apply", "--modulepath", "/etc/puppet/modules:/usr/share/openstack-puppet/modules", "--tags", "file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config", "-v", "-e", "include neutron::agents::ml2::ovs"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-20.1", "net": "host", "pid": "host", "privileged": true, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", 
"/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/etc/puppet:/etc/puppet:ro", "/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro", "/var/run/openvswitch/:/var/run/openvswitch/"]}, "nova_api_db_sync": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage api_db sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_api_ensure_default_cell": {"command": "/usr/bin/bootstrap_host_exec nova_api /nova_api_ensure_default_cell.sh", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/docker-config-scripts/nova_api_ensure_default_cell.sh:/nova_api_ensure_default_cell.sh:ro"]}, "nova_api_map_cell0": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage cell_v2 map_cell0'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", 
"/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_db_sync": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage db sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1", "net": "host", "start_order": 3, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_placement": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-09-20.1", "net": "host", "restart": "always", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-placement:/var/log/httpd", "/var/lib/kolla/config_files/nova_placement.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_placement/:/var/lib/kolla/config_files/src:ro", "", ""]}, "panko_db_sync": {"command": "/usr/bin/bootstrap_host_exec panko_api su panko -s /bin/bash -c '/usr/bin/panko-dbsync '", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-09-20.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd", "/var/lib/config-data/panko/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/panko/etc/panko:/etc/panko:ro"]}, "sahara_db_sync": {"command": "/usr/bin/bootstrap_host_exec sahara_api su sahara -s /bin/bash -c 'sahara-db-manage --config-file /etc/sahara/sahara.conf upgrade head'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-09-20.1", "net": "host", 
"privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/sahara/etc/sahara/:/etc/sahara/:ro", "/lib/modules:/lib/modules:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "swift_copy_rings": {"command": ["/bin/bash", "-c", "cp -v -a -t /etc/swift /swift_ringbuilder/etc/swift/*.gz /swift_ringbuilder/etc/swift/*.builder /swift_ringbuilder/etc/swift/backups"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-20.1", "user": "root", "volumes": ["/var/lib/config-data/puppet-generated/swift/etc/swift:/etc/swift:rw", "/var/lib/config-data/swift_ringbuilder:/swift_ringbuilder:ro"]}, "swift_rsync_fix": {"command": ["/bin/bash", "-c", "sed -i \"/pid file/d\" /var/lib/kolla/config_files/src/etc/rsyncd.conf"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-20.1", "net": "host", "user": "root", "volumes": ["/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:rw"]}, "swift_setup_srv": {"command": ["chown", "-R", "swift:", "/srv/node"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-20.1", "user": "root", "volumes": ["/srv/node:/srv/node"]}}}, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item={'value': {'ceilometer_agent_compute': {'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-compute:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro', u'/var/run/libvirt:/var/run/libvirt:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_libvirt_init_secret': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u"/usr/bin/virsh secret-define --file /etc/nova/secret.xml && /usr/bin/virsh secret-set-value --secret '8fedf068-bd95-11e8-ba69-5254006eda59' --base64 'AQC93KRbAAAAABAA70vXmXELJWdqPtg4IeQHzw=='"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova:ro', u'/etc/libvirt:/etc/libvirt', u'/var/run/libvirt:/var/run/libvirt', u'/var/lib/libvirt:/var/lib/libvirt'], 'detach': False, 'privileged': False}, 'neutron_ovs_agent': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-09-20.1', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_migration_target': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/etc/ssh/:/host-ssh/:ro', u'/run:/run', u'/var/lib/nova:/var/lib/nova:shared'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_compute': {'ipc': u'host', 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'nova', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', 
u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/dev:/dev', u'/lib/modules:/lib/modules:ro', u'/run:/run', u'/var/lib/nova:/var/lib/nova:shared', u'/var/lib/libvirt:/var/lib/libvirt', u'/sys/class/net:/sys/class/net', u'/sys/bus/pci:/sys/bus/pci'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'logrotate_crond': {'image': u'192.168.24.1:8787/rhosp14/openstack-cron:2018-09-20.1', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers:/var/log/containers'], 'net': u'none', 'privileged': True, 'restart': u'always'}}, 'key': u'step_4'}) => {"changed": false, "item": {"key": "step_4", "value": {"ceilometer_agent_compute": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-compute:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/run/libvirt:/var/run/libvirt:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-09-20.1", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", 
"/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}, "neutron_ovs_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-09-20.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch"]}, "nova_compute": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-09-20.1", "ipc": "host", "net": "host", "privileged": true, "restart": "always", "ulimit": ["nofile=1024"], "user": "nova", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/dev:/dev", "/lib/modules:/lib/modules:ro", "/run:/run", "/var/lib/nova:/var/lib/nova:shared", "/var/lib/libvirt:/var/lib/libvirt", "/sys/class/net:/sys/class/net", "/sys/bus/pci:/sys/bus/pci"]}, "nova_libvirt_init_secret": {"command": ["/bin/bash", "-c", "/usr/bin/virsh secret-define --file /etc/nova/secret.xml && /usr/bin/virsh secret-set-value --secret '8fedf068-bd95-11e8-ba69-5254006eda59' --base64 'AQC93KRbAAAAABAA70vXmXELJWdqPtg4IeQHzw=='"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-09-20.1", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova:ro", "/etc/libvirt:/etc/libvirt", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt"]}, "nova_migration_target": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-09-20.1", "net": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/ssh/:/host-ssh/:ro", "/run:/run", "/var/lib/nova:/var/lib/nova:shared"]}}}, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item={'value': {}, 'key': u'step_6'}) => {"changed": false, "item": {"key": "step_6", "value": {}}, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item={'value': {}, 'key': u'step_2'}) => {"changed": false, "item": {"key": "step_2", "value": {}}, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item={'value': {}, 'key': u'step_5'}) => {"changed": false, "item": {"key": "step_5", "value": {}}, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item={'value': {'logrotate_crond': {'image': u'192.168.24.1:8787/rhosp14/openstack-cron:2018-09-20.1', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers:/var/log/containers'], 'net': u'none', 'privileged': True, 'restart': u'always'}}, 'key': u'step_4'}) => {"changed": false, "item": {"key": "step_4", "value": {"logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-09-20.1", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}}}, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item={'value': {}, 'key': u'step_6'}) => {"changed": false, "item": {"key": "step_6", "value": {}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'gnocchi_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'chown -R gnocchi:gnocchi /var/log/gnocchi'], 'user': u'root', 'volumes': [u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/var/log/containers/httpd/gnocchi-api:/var/log/httpd']}, 'mysql_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1537531337'], 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,galera_ready,mysql_database,mysql_grant,mysql_user', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::mysql_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/mysql:/var/lib/mysql:rw'], 'net': u'host', 'detach': False}, 'gnocchi_init_lib': {'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'chown -R gnocchi:gnocchi /var/lib/gnocchi'], 'user': u'root', 'volumes': [u'/var/lib/gnocchi:/var/lib/gnocchi']}, 'cinder_api_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'privileged': False, 'volumes': [u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd'], 'user': u'root'}, 'create_dnsmasq_wrapper': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-09-20.1', 'pid': u'host', 'command': [u'/docker_puppet_apply.sh', u'4', u'file', u'include ::tripleo::profile::base::neutron::dhcp_agent_wrappers'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', 
u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron'], 'net': u'host', 'detach': False}, 'panko_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-panko-api:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'chown -R panko:panko /var/log/panko'], 'user': u'root', 'volumes': [u'/var/log/containers/panko:/var/log/panko', u'/var/log/containers/httpd/panko-api:/var/log/httpd']}, 'redis_init_bundle': {'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-redis:2018-09-20.1', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1537531337'], 'config_volume': u'redis_init_bundle', 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::redis_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False}, 'cinder_scheduler_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'privileged': False, 'volumes': [u'/var/log/containers/cinder:/var/log/cinder'], 'user': u'root'}, 'glance_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-glance-api:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'chown -R glance:glance /var/log/glance'], 'privileged': False, 'volumes': [u'/var/log/containers/glance:/var/log/glance'], 'user': u'root'}, 'clustercheck': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/clustercheck/:/var/lib/kolla/config_files/src:ro', u'/var/lib/mysql:/var/lib/mysql'], 'net': u'host', 'restart': u'always'}, 'haproxy_init_bundle': {'start_order': 3, 'image': u'192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-20.1', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1537531337'], 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,tripleo::firewall::rule,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ip,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation', u'include ::tripleo::profile::base::pacemaker; include ::tripleo::profile::pacemaker::haproxy_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro', u'/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro', u'/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro', u'/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro', u'/etc/sysconfig:/etc/sysconfig:rw', u'/usr/libexec/iptables:/usr/libexec/iptables:ro', u'/usr/libexec/initscripts/legacy-actions:/usr/libexec/initscripts/legacy-actions:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False, 'privileged': True}, 'neutron_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'chown -R neutron:neutron /var/log/neutron'], 'privileged': False, 'volumes': [u'/var/log/containers/neutron:/var/log/neutron', u'/var/log/containers/httpd/neutron-api:/var/log/httpd'], 'user': u'root'}, 'mysql_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1', 'config_volume': u'mysql', 'command': [u'/usr/bin/bootstrap_host_exec', u'mysql', u'if /usr/sbin/pcs resource show galera-bundle; then /usr/sbin/pcs resource restart --wait=600 galera-bundle; echo "galera-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'rabbitmq_init_bundle': {'start_order': 1, 'image': 
u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-20.1', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1537531337'], 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,rabbitmq_policy,rabbitmq_user,rabbitmq_ready', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::rabbitmq_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/bin/true:/bin/epmd'], 'net': u'host', 'detach': False}, 'nova_api_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'chown -R nova:nova /var/log/nova'], 'privileged': False, 'volumes': [u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd'], 'user': u'root'}, 'haproxy_restart_bundle': {'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-20.1', 'config_volume': u'haproxy', 'command': [u'/usr/bin/bootstrap_host_exec', u'haproxy', u'if /usr/sbin/pcs resource show haproxy-bundle; then /usr/sbin/pcs resource restart --wait=600 haproxy-bundle; echo "haproxy-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/haproxy/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'create_keepalived_wrapper': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-09-20.1', 'pid': u'host', 'command': [u'/docker_puppet_apply.sh', u'4', u'file', u'include ::tripleo::profile::base::neutron::l3_agent_wrappers'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', 
u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron'], 'net': u'host', 'detach': False}, 'rabbitmq_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-20.1', 'config_volume': u'rabbitmq', 'command': [u'/usr/bin/bootstrap_host_exec', u'rabbitmq', u'if /usr/sbin/pcs resource show rabbitmq-bundle; then /usr/sbin/pcs resource restart --wait=600 rabbitmq-bundle; echo "rabbitmq-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'horizon_fix_perms': {'image': u'192.168.24.1:8787/rhosp14/openstack-horizon:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'touch /var/log/horizon/horizon.log && chown -R apache:apache /var/log/horizon && chmod -R a+rx /etc/openstack-dashboard'], 'user': u'root', 'volumes': [u'/var/log/containers/horizon:/var/log/horizon', u'/var/log/containers/httpd/horizon:/var/log/httpd', u'/var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard:/etc/openstack-dashboard']}, 'aodh_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'chown -R aodh:aodh /var/log/aodh'], 'user': u'root', 'volumes': [u'/var/log/containers/aodh:/var/log/aodh', u'/var/log/containers/httpd/aodh-api:/var/log/httpd']}, 'nova_metadata_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'chown -R nova:nova /var/log/nova'], 'privileged': False, 'volumes': [u'/var/log/containers/nova:/var/log/nova'], 'user': u'root'}, 'redis_restart_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-redis:2018-09-20.1', 'config_volume': u'redis', 'command': [u'/usr/bin/bootstrap_host_exec', u'redis', u'if /usr/sbin/pcs resource show redis-bundle; then /usr/sbin/pcs resource restart --wait=600 redis-bundle; echo "redis-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/redis/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'heat_init_log': {'image': 
u'192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'chown -R heat:heat /var/log/heat'], 'user': u'root', 'volumes': [u'/var/log/containers/heat:/var/log/heat']}, 'nova_placement_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'chown -R nova:nova /var/log/nova'], 'start_order': 1, 'volumes': [u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-placement:/var/log/httpd'], 'user': u'root'}, 'keystone_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'chown -R keystone:keystone /var/log/keystone'], 'start_order': 1, 'volumes': [u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd'], 'user': u'root'}}, 'key': u'step_2'}) => {"changed": false, "item": {"key": "step_2", "value": {"aodh_init_log": {"command": ["/bin/bash", "-c", "chown -R aodh:aodh /var/log/aodh"], "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-09-20.1", "user": "root", "volumes": ["/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd"]}, "cinder_api_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-20.1", "privileged": false, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_scheduler_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-09-20.1", "privileged": false, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "clustercheck": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1", "net": "host", "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/clustercheck/:/var/lib/kolla/config_files/src:ro", "/var/lib/mysql:/var/lib/mysql"]}, "create_dnsmasq_wrapper": {"command": ["/docker_puppet_apply.sh", "4", "file", "include ::tripleo::profile::base::neutron::dhcp_agent_wrappers"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-09-20.1", "net": "host", "pid": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", 
"/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron"]}, "create_keepalived_wrapper": {"command": ["/docker_puppet_apply.sh", "4", "file", "include ::tripleo::profile::base::neutron::l3_agent_wrappers"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-09-20.1", "net": "host", "pid": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron"]}, "glance_init_logs": {"command": ["/bin/bash", "-c", "chown -R glance:glance /var/log/glance"], "image": "192.168.24.1:8787/rhosp14/openstack-glance-api:2018-09-20.1", "privileged": false, "user": "root", "volumes": ["/var/log/containers/glance:/var/log/glance"]}, "gnocchi_init_lib": {"command": ["/bin/bash", "-c", "chown -R gnocchi:gnocchi /var/lib/gnocchi"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-20.1", "user": "root", "volumes": ["/var/lib/gnocchi:/var/lib/gnocchi"]}, "gnocchi_init_log": {"command": ["/bin/bash", "-c", "chown -R gnocchi:gnocchi /var/log/gnocchi"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-20.1", "user": "root", "volumes": ["/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd"]}, "haproxy_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,tripleo::firewall::rule,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ip,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation", "include ::tripleo::profile::base::pacemaker; include ::tripleo::profile::pacemaker::haproxy_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1537531337"], "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-20.1", "net": "host", "privileged": true, "start_order": 3, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro", "/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro", "/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro", 
"/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro", "/etc/sysconfig:/etc/sysconfig:rw", "/usr/libexec/iptables:/usr/libexec/iptables:ro", "/usr/libexec/initscripts/legacy-actions:/usr/libexec/initscripts/legacy-actions:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "haproxy_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "haproxy", "if /usr/sbin/pcs resource show haproxy-bundle; then /usr/sbin/pcs resource restart --wait=600 haproxy-bundle; echo \"haproxy-bundle restart invoked\"; fi"], "config_volume": "haproxy", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-20.1", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/haproxy/:/var/lib/kolla/config_files/src:ro"]}, "heat_init_log": {"command": ["/bin/bash", "-c", "chown -R heat:heat /var/log/heat"], "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-09-20.1", "user": "root", "volumes": ["/var/log/containers/heat:/var/log/heat"]}, "horizon_fix_perms": {"command": ["/bin/bash", "-c", "touch /var/log/horizon/horizon.log && chown -R apache:apache /var/log/horizon && chmod -R a+rx /etc/openstack-dashboard"], "image": "192.168.24.1:8787/rhosp14/openstack-horizon:2018-09-20.1", "user": "root", "volumes": ["/var/log/containers/horizon:/var/log/horizon", "/var/log/containers/httpd/horizon:/var/log/httpd", "/var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard:/etc/openstack-dashboard"]}, "keystone_init_log": {"command": ["/bin/bash", "-c", "chown -R keystone:keystone /var/log/keystone"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-20.1", "start_order": 1, "user": "root", "volumes": ["/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd"]}, "mysql_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,galera_ready,mysql_database,mysql_grant,mysql_user", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::mysql_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1537531337"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", 
"/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/mysql:/var/lib/mysql:rw"]}, "mysql_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "mysql", "if /usr/sbin/pcs resource show galera-bundle; then /usr/sbin/pcs resource restart --wait=600 galera-bundle; echo \"galera-bundle restart invoked\"; fi"], "config_volume": "mysql", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro"]}, "neutron_init_logs": {"command": ["/bin/bash", "-c", "chown -R neutron:neutron /var/log/neutron"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-20.1", "privileged": false, "user": "root", "volumes": ["/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd"]}, "nova_api_init_logs": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1", "privileged": false, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd"]}, "nova_metadata_init_log": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1", "privileged": false, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova"]}, "nova_placement_init_log": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-09-20.1", "start_order": 1, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-placement:/var/log/httpd"]}, "panko_init_log": {"command": ["/bin/bash", "-c", "chown -R panko:panko /var/log/panko"], "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-09-20.1", "user": "root", "volumes": ["/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd"]}, "rabbitmq_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,rabbitmq_policy,rabbitmq_user,rabbitmq_ready", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::rabbitmq_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1537531337"], "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-20.1", "net": "host", "start_order": 1, "user": "root", "volumes": 
["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/bin/true:/bin/epmd"]}, "rabbitmq_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "rabbitmq", "if /usr/sbin/pcs resource show rabbitmq-bundle; then /usr/sbin/pcs resource restart --wait=600 rabbitmq-bundle; echo \"rabbitmq-bundle restart invoked\"; fi"], "config_volume": "rabbitmq", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-20.1", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro"]}, "redis_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::redis_bundle", "--debug"], "config_volume": "redis_init_bundle", "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1537531337"], "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-09-20.1", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "redis_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "redis", "if /usr/sbin/pcs resource show redis-bundle; then /usr/sbin/pcs resource restart --wait=600 redis-bundle; echo \"redis-bundle restart invoked\"; fi"], "config_volume": "redis", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-09-20.1", "net": "host", "start_order": 1, "user": "root", "volumes": 
["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/redis/:/var/lib/kolla/config_files/src:ro"]}}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'cinder_volume_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-20.1', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1537531337'], 'command': [u'/docker_puppet_apply.sh', u'5', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::volume_bundle', u'--debug --verbose'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False}, 'gnocchi_api': {'start_order': 1, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/gnocchi:/var/lib/gnocchi', u'/var/lib/kolla/config_files/gnocchi_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/var/log/containers/httpd/gnocchi-api:/var/log/httpd', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'gnocchi_statsd': {'start_order': 1, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-statsd:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', 
u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/gnocchi_statsd.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/gnocchi:/var/lib/gnocchi'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'cinder_backup_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-20.1', 'config_volume': u'cinder', 'command': [u'/usr/bin/bootstrap_host_exec', u'cinder_backup', u'if /usr/sbin/pcs resource show openstack-cinder-backup; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-backup; echo "openstack-cinder-backup restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'gnocchi_metricd': {'start_order': 1, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-metricd:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/gnocchi_metricd.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/gnocchi:/var/lib/gnocchi'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_api_discover_hosts': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1537531337'], 'command': u'/usr/bin/bootstrap_host_exec nova_api /nova_api_discover_hosts.sh', 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/docker-config-scripts/nova_api_discover_hosts.sh:/nova_api_discover_hosts.sh:ro'], 'net': u'host', 'detach': False}, 'ceilometer_gnocchi_upgrade': {'start_order': 99, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-20.1', 'command': [u'/usr/bin/bootstrap_host_exec', u'ceilometer_agent_central', u"su ceilometer -s /bin/bash -c 'for n in {1..10}; do /usr/bin/ceilometer-upgrade --skip-metering-database && exit 0 || sleep 30; done; exit 1'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/ceilometer/etc/ceilometer/:/etc/ceilometer/:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'detach': False, 'privileged': False}, 'cinder_volume_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-20.1', 'config_volume': u'cinder', 'command': [u'/usr/bin/bootstrap_host_exec', u'cinder_volume', u'if /usr/sbin/pcs resource show openstack-cinder-volume; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-volume; echo "openstack-cinder-volume restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'cinder_backup_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-20.1', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1537531337'], 'command': [u'/docker_puppet_apply.sh', u'5', 
u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::backup_bundle', u'--debug --verbose'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False}, 'gnocchi_db_sync': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/gnocchi_db_sync.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/lib/gnocchi:/var/lib/gnocchi', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/var/log/containers/httpd/gnocchi-api:/var/log/httpd', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro'], 'net': u'host', 'detach': False, 'privileged': False}}, 'key': u'step_5'}) => {"changed": false, "item": {"key": "step_5", "value": {"ceilometer_gnocchi_upgrade": {"command": ["/usr/bin/bootstrap_host_exec", "ceilometer_agent_central", "su ceilometer -s /bin/bash -c 'for n in {1..10}; do /usr/bin/ceilometer-upgrade --skip-metering-database && exit 0 || sleep 30; done; exit 1'"], "detach": false, "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-20.1", "net": "host", "privileged": false, "start_order": 99, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/ceilometer/etc/ceilometer/:/etc/ceilometer/:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_backup_init_bundle": {"command": ["/docker_puppet_apply.sh", "5", 
"file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::backup_bundle", "--debug --verbose"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1537531337"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-20.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "cinder_backup_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_backup", "if /usr/sbin/pcs resource show openstack-cinder-backup; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-backup; echo \"openstack-cinder-backup restart invoked\"; fi"], "config_volume": "cinder", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-20.1", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro"]}, "cinder_volume_init_bundle": {"command": ["/docker_puppet_apply.sh", "5", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::volume_bundle", "--debug --verbose"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1537531337"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-20.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, 
"cinder_volume_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_volume", "if /usr/sbin/pcs resource show openstack-cinder-volume; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-volume; echo \"openstack-cinder-volume restart invoked\"; fi"], "config_volume": "cinder", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-20.1", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro"]}, "gnocchi_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/gnocchi:/var/lib/gnocchi", "/var/lib/kolla/config_files/gnocchi_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "", ""]}, "gnocchi_db_sync": {"detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-20.1", "net": "host", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_db_sync.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/lib/gnocchi:/var/lib/gnocchi", "/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro"]}, "gnocchi_metricd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": 
"/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-metricd:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_metricd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/gnocchi:/var/lib/gnocchi"]}, "gnocchi_statsd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-statsd:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_statsd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/gnocchi:/var/lib/gnocchi"]}, "nova_api_discover_hosts": {"command": "/usr/bin/bootstrap_host_exec nova_api /nova_api_discover_hosts.sh", "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1537531337"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/docker-config-scripts/nova_api_discover_hosts.sh:/nova_api_discover_hosts.sh:ro"]}}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => 
(item={'value': {'swift_container_updater': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_updater.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'aodh_evaluator': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-evaluator:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_evaluator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_scheduler': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-scheduler:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro', u'/run:/run'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_object_server': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'cinder_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_proxy': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/run:/run', u'/srv/node:/srv/node', u'/dev:/dev'], 'net': u'host', 'restart': u'always'}, 'neutron_dhcp': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-09-20.1', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro',
u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron', u'/run/netns:/run/netns:shared', u'/var/lib/openstack:/var/lib/openstack', u'/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', u'/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'heat_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/log/containers/httpd/heat-api:/var/log/httpd', u'/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_object_auditor': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_auditor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'neutron_metadata_agent': {'start_order': 10, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-metadata-agent:2018-09-20.1', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/var/lib/neutron:/var/lib/neutron'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'ceilometer_agent_central': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/ceilometer_agent_central.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'keystone_refresh': {'action': u'exec', 'start_order': 1, 'command': [u'keystone', u'pkill', u'--signal', u'USR1', u'httpd'], 'user': u'root'}, 'swift_account_replicator': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_replicator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'aodh_notifier': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-notifier:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_notifier.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh'], 'net': 
u'host', 'privileged': False, 'restart': u'always'}, 'nova_api_cron': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_consoleauth': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-consoleauth:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_consoleauth.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'glance_api': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-glance-api:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/glance:/var/log/glance', u'/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/glance:/var/lib/glance:slave'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_account_reaper': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_reaper.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'ceilometer_agent_notification': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/ceilometer_agent_notification.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro', u'/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src-panko:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_vnc_proxy': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-novncproxy:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_rsync': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_rsync.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_api': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'aodh_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh', u'/var/log/containers/httpd/aodh-api:/var/log/httpd', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_metadata': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'nova', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'heat_engine': {'healthcheck': {'test': 
u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_container_server': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'swift_object_replicator': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_replicator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'neutron_l3_agent': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-09-20.1', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_l3_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron', u'/run/netns:/run/netns:shared', u'/var/lib/openstack:/var/lib/openstack', u'/var/lib/neutron/keepalived_wrapper:/usr/local/bin/keepalived:ro', u'/var/lib/neutron/l3_haproxy_wrapper:/usr/local/bin/haproxy:ro', u'/var/lib/neutron/dibbler_wrapper:/usr/local/bin/dibbler_client:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'cinder_scheduler': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/cinder:/var/log/cinder'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_conductor': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-conductor:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_account_server': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'sahara_api': {'image': u'192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/sahara-api.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/var/lib/sahara:/var/lib/sahara', u'/var/log/containers/sahara:/var/log/sahara'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'sahara_engine': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-sahara-engine:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/sahara-engine.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro', u'/var/lib/sahara:/var/lib/sahara', u'/var/log/containers/sahara:/var/log/sahara'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'neutron_ovs_agent': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-09-20.1', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'cinder_api_cron': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_account_auditor': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_auditor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'swift_container_replicator': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_replicator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 
'swift_object_updater': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_updater.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'swift_object_expirer': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_expirer.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'heat_api_cron': {'image': u'192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/log/containers/httpd/heat-api:/var/log/httpd', u'/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_container_auditor': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_auditor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'panko_api': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-panko-api:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/panko:/var/log/panko', u'/var/log/containers/httpd/panko-api:/var/log/httpd', u'/var/lib/kolla/config_files/panko_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'aodh_listener': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-listener:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_listener.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'neutron_api': {'start_order': 0, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/log/containers/httpd/neutron-api:/var/log/httpd', 
u'/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'heat_api_cfn': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/log/containers/httpd/heat-api-cfn:/var/log/httpd', u'/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat_api_cfn/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'logrotate_crond': {'image': u'192.168.24.1:8787/rhosp14/openstack-cron:2018-09-20.1', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers:/var/log/containers'], 'net': u'none', 'privileged': True, 'restart': u'always'}}, 'key': u'step_4'}) => {"changed": false, "item": {"key": "step_4", "value": {"aodh_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd", "", ""]}, "aodh_evaluator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": 
"/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-evaluator:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_evaluator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "aodh_listener": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-listener:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_listener.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "aodh_notifier": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-notifier:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_notifier.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "ceilometer_agent_central": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", 
"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_central.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "ceilometer_agent_notification": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_notification.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src-panko:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd", "", ""]}, "cinder_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", 
"/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_scheduler": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder"]}, "glance_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-glance-api:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/glance:/var/log/glance", "/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/glance:/var/lib/glance:slave"]}, "heat_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api:/var/log/httpd", "/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro", "", ""]}, "heat_api_cfn": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "user": "root", 
"volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api-cfn:/var/log/httpd", "/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api_cfn/:/var/lib/kolla/config_files/src:ro", "", ""]}, "heat_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api:/var/log/httpd", "/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro"]}, "heat_engine": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat/:/var/lib/kolla/config_files/src:ro"]}, "keystone_refresh": {"action": "exec", "command": ["keystone", "pkill", "--signal", "USR1", "httpd"], "start_order": 1, "user": "root"}, "logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-09-20.1", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", 
"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}, "neutron_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "start_order": 0, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd", "/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro"]}, "neutron_dhcp": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-09-20.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron", "/run/netns:/run/netns:shared", "/var/lib/openstack:/var/lib/openstack", "/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro", "/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro"]}, "neutron_l3_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-09-20.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", 
"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_l3_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron", "/run/netns:/run/netns:shared", "/var/lib/openstack:/var/lib/openstack", "/var/lib/neutron/keepalived_wrapper:/usr/local/bin/keepalived:ro", "/var/lib/neutron/l3_haproxy_wrapper:/usr/local/bin/haproxy:ro", "/var/lib/neutron/dibbler_wrapper:/usr/local/bin/dibbler_client:ro"]}, "neutron_metadata_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-metadata-agent:2018-09-20.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/var/lib/neutron:/var/lib/neutron"]}, "neutron_ovs_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-09-20.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch"]}, "nova_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", 
"/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro", "", ""]}, "nova_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_conductor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-conductor:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_consoleauth": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-consoleauth:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", 
"/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_consoleauth.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_metadata": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "user": "nova", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_scheduler": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-scheduler:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro", "/run:/run"]}, "nova_vnc_proxy": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-novncproxy:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "panko_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": 
["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd", "/var/lib/kolla/config_files/panko_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src:ro", "", ""]}, "sahara_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/sahara-api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "sahara_engine": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-sahara-engine:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/sahara-engine.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "swift_account_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-20.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", 
"/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_reaper": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-20.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_reaper.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-20.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-20.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-09-20.1", "net": "host", "restart": "always", "user": "swift", "volumes": 
["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-09-20.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-09-20.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_updater": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-09-20.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", 
"/var/lib/kolla/config_files/swift_container_updater.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-20.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_expirer": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-20.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_expirer.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-20.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-20.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", 
"/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_updater": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-20.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_updater.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_proxy": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-20.1", "net": "host", "restart": "always", "start_order": 2, "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/run:/run", "/srv/node:/srv/node", "/dev:/dev"]}, "swift_rsync": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-20.1", "net": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", 
"/var/lib/kolla/config_files/swift_rsync.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev"]}}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {}, 'key': u'step_6'}) => {"changed": false, "item": {"key": "step_6", "value": {}}, "skip_reason": "Conditional result was False"} > >TASK [Create /var/lib/kolla/config_files directory] **************************** >Friday 21 September 2018 08:38:27 -0400 (0:00:00.624) 0:21:49.975 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Write kolla config json files] ******************************************* >Friday 21 September 2018 08:38:27 -0400 (0:00:00.114) 0:21:50.090 ****** >skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -s -n'}, 'key': u'/var/lib/kolla/config_files/logrotate-crond.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/logrotate-crond.json", "value": {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -s -n'}, 'key': u'/var/lib/kolla/config_files/logrotate-crond.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/logrotate-crond.json", "value": {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/sbin/iscsid -f'}, 'key': u'/var/lib/kolla/config_files/iscsid.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/iscsid.json", "value": {"command": "/usr/sbin/iscsid -f", "config_files": [{"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/sbin/libvirtd', 'permissions': [{'owner': u'nova:nova', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/nova_libvirt.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_libvirt.json", "value": {"command": "/usr/sbin/libvirtd", "config_files": [{"dest": "/", 
"merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "nova:nova", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ssh/', 'owner': u'root', 'perm': u'0600', 'source': u'/host-ssh/ssh_host_*_key'}], 'command': u'/usr/sbin/sshd -D -p 2022'}, 'key': u'/var/lib/kolla/config_files/nova-migration-target.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova-migration-target.json", "value": {"command": "/usr/sbin/sshd -D -p 2022", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ssh/", "owner": "root", "perm": "0600", "source": "/host-ssh/ssh_host_*_key"}]}}, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/virtlogd --config /etc/libvirt/virtlogd.conf'}, 'key': u'/var/lib/kolla/config_files/nova_virtlogd.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_virtlogd.json", "value": {"command": "/usr/sbin/virtlogd --config /etc/libvirt/virtlogd.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/neutron_ovs_agent_launcher.sh', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/neutron_ovs_agent.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_ovs_agent.json", "value": {"command": "/neutron_ovs_agent_launcher.sh", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/nova-compute ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}, {'owner': u'nova:nova', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/nova_compute.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_compute.json", "value": {"command": "/usr/bin/nova-compute ", "config_files": 
[{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}, {"owner": "nova:nova", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /var/log/ceilometer/compute.log'}, 'key': u'/var/lib/kolla/config_files/ceilometer_agent_compute.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/ceilometer_agent_compute.json", "value": {"command": "/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /var/log/ceilometer/compute.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -s -n'}, 'key': u'/var/lib/kolla/config_files/logrotate-crond.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/logrotate-crond.json", "value": {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': u'/var/lib/kolla/config_files/keystone.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/keystone.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}, {'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/bin/cinder-backup --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/lib/cinder', 'recurse': True}, {'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/cinder_backup.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/cinder_backup.json", "value": {"command": "/usr/bin/cinder-backup --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, 
"source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/lib/cinder", "recurse": true}, {"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': u'/var/lib/kolla/config_files/swift_proxy_tls_proxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_proxy_tls_proxy.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-auditor /etc/swift/account-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_account_auditor.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_account_auditor.json", "value": {"command": "/usr/bin/swift-account-auditor /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-replicator /etc/swift/account-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_account_replicator.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_account_replicator.json", "value": {"command": "/usr/bin/swift-account-replicator /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/aodh-notifier', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/aodh_notifier.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/aodh_notifier.json", "value": {"command": "/usr/bin/aodh-notifier", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-scheduler ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': 
u'/var/lib/kolla/config_files/nova_scheduler.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_scheduler.json", "value": {"command": "/usr/bin/nova-scheduler ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/heat_api_cron.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/heat_api_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-dhcp-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/dhcp_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-dhcp-agent --log-file=/var/log/neutron/dhcp-agent.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}, {'owner': u'neutron:neutron', 'path': u'/var/lib/neutron', 'recurse': True}, {'owner': u'neutron:neutron', 'path': u'/etc/pki/tls/certs/neutron.crt'}, {'owner': u'neutron:neutron', 'path': u'/etc/pki/tls/private/neutron.key'}]}, 'key': u'/var/lib/kolla/config_files/neutron_dhcp.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_dhcp.json", "value": {"command": "/usr/bin/neutron-dhcp-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/dhcp_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-dhcp-agent --log-file=/var/log/neutron/dhcp-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/etc/pki/tls/certs/neutron.crt"}, {"owner": "neutron:neutron", "path": "/etc/pki/tls/private/neutron.key"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': 
u'/usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg', 'permissions': [{'owner': u'haproxy:haproxy', 'path': u'/var/lib/haproxy', 'recurse': True}, {'owner': u'haproxy:haproxy', 'path': u'/etc/pki/tls/certs/haproxy/*', 'optional': True, 'perm': u'0600'}, {'owner': u'haproxy:haproxy', 'path': u'/etc/pki/tls/private/haproxy/*', 'optional': True, 'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/haproxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/haproxy.json", "value": {"command": "/usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg", "config_files": [{"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "haproxy:haproxy", "path": "/var/lib/haproxy", "recurse": true}, {"optional": true, "owner": "haproxy:haproxy", "path": "/etc/pki/tls/certs/haproxy/*", "perm": "0600"}, {"optional": true, "owner": "haproxy:haproxy", "path": "/etc/pki/tls/private/haproxy/*", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/nova_api_cron.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_api_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/bootstrap_host_exec gnocchi_api /usr/bin/gnocchi-upgrade --sacks-number=128', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/gnocchi_db_sync.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/gnocchi_db_sync.json", "value": {"command": "/usr/bin/bootstrap_host_exec gnocchi_api /usr/bin/gnocchi-upgrade --sacks-number=128", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-reaper /etc/swift/account-server.conf'}, 'key': 
u'/var/lib/kolla/config_files/swift_account_reaper.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_account_reaper.json", "value": {"command": "/usr/bin/swift-account-reaper /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/sahara-engine --config-file /etc/sahara/sahara.conf', 'permissions': [{'owner': u'sahara:sahara', 'path': u'/var/lib/sahara', 'recurse': True}, {'owner': u'sahara:sahara', 'path': u'/var/log/sahara', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/sahara-engine.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/sahara-engine.json", "value": {"command": "/usr/bin/sahara-engine --config-file /etc/sahara/sahara.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "sahara:sahara", "path": "/var/lib/sahara", "recurse": true}, {"owner": "sahara:sahara", "path": "/var/log/sahara", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/libqb/force-filesystem-sockets', 'owner': u'root', 'perm': u'0644', 'source': u'/dev/null'}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/pacemaker_remoted', 'permissions': [{'owner': u'redis:redis', 'path': u'/var/run/redis', 'recurse': True}, {'owner': u'redis:redis', 'path': u'/var/lib/redis', 'recurse': True}, {'owner': u'redis:redis', 'path': u'/var/log/redis', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/redis.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/redis.json", "value": {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "redis:redis", "path": "/var/run/redis", "recurse": true}, {"owner": "redis:redis", "path": "/var/lib/redis", "recurse": true}, {"owner": "redis:redis", "path": "/var/log/redis", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-novncproxy --web /usr/share/novnc/ ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}, {'owner': u'root:nova', 'path': u'/etc/pki/tls/private/novnc_proxy.key'}]}, 'key': u'/var/lib/kolla/config_files/nova_vnc_proxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_vnc_proxy.json", "value": {"command": "/usr/bin/nova-novncproxy --web /usr/share/novnc/ ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", 
"merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}, {"owner": "root:nova", "path": "/etc/pki/tls/private/novnc_proxy.key"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/glance-api --config-file /usr/share/glance/glance-api-dist.conf --config-file /etc/glance/glance-api.conf', 'permissions': [{'owner': u'glance:glance', 'path': u'/var/lib/glance', 'recurse': True}, {'owner': u'glance:glance', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/glance_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/glance_api.json", "value": {"command": "/usr/bin/glance-api --config-file /usr/share/glance/glance-api-dist.conf --config-file /etc/glance/glance-api.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "glance:glance", "path": "/var/lib/glance", "recurse": true}, {"owner": "glance:glance", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-auditor /etc/swift/container-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_container_auditor.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_container_auditor.json", "value": {"command": "/usr/bin/swift-container-auditor /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-panko/*', 'preserve_properties': True}], 'command': u'/usr/bin/ceilometer-agent-notification --logfile /var/log/ceilometer/agent-notification.log', 'permissions': [{'owner': u'root:ceilometer', 'path': u'/etc/panko', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/ceilometer_agent_notification.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/ceilometer_agent_notification.json", "value": {"command": "/usr/bin/ceilometer-agent-notification --logfile /var/log/ceilometer/agent-notification.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-panko/*"}], "permissions": [{"owner": "root:ceilometer", "path": "/etc/panko", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => 
(item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-expirer /etc/swift/object-expirer.conf'}, 'key': u'/var/lib/kolla/config_files/swift_object_expirer.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_object_expirer.json", "value": {"command": "/usr/bin/swift-object-expirer /etc/swift/object-expirer.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/ceilometer-polling --polling-namespaces central --logfile /var/log/ceilometer/central.log'}, 'key': u'/var/lib/kolla/config_files/ceilometer_agent_central.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/ceilometer_agent_central.json", "value": {"command": "/usr/bin/ceilometer-polling --polling-namespaces central --logfile /var/log/ceilometer/central.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/heat_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/heat_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/rsync --daemon --no-detach --config=/etc/rsyncd.conf'}, 'key': u'/var/lib/kolla/config_files/swift_rsync.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_rsync.json", "value": {"command": "/usr/bin/rsync --daemon --no-detach --config=/etc/rsyncd.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-server /etc/swift/account-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_account_server.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_account_server.json", "value": {"command": "/usr/bin/swift-account-server /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 
'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/cinder_api_cron.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/cinder_api_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-proxy-server /etc/swift/proxy-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_proxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_proxy.json", "value": {"command": "/usr/bin/swift-proxy-server /etc/swift/proxy-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-updater /etc/swift/container-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_container_updater.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_container_updater.json", "value": {"command": "/usr/bin/swift-container-updater /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/xinetd -dontfork'}, 'key': u'/var/lib/kolla/config_files/clustercheck.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/clustercheck.json", "value": {"command": "/usr/sbin/xinetd -dontfork", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/libqb/force-filesystem-sockets', 'owner': u'root', 'perm': u'0644', 'source': u'/dev/null'}, {'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/sbin/pacemaker_remoted', 'permissions': [{'owner': u'mysql:mysql', 'path': u'/var/log/mysql', 'recurse': True}, {'owner': u'mysql:mysql', 'path': u'/etc/pki/tls/certs/mysql.crt', 'optional': True, 'perm': u'0600'}, {'owner': u'mysql:mysql', 'path': u'/etc/pki/tls/private/mysql.key', 'optional': True, 'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/mysql.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/mysql.json", "value": {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": 
"/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "mysql:mysql", "path": "/var/log/mysql", "recurse": true}, {"optional": true, "owner": "mysql:mysql", "path": "/etc/pki/tls/certs/mysql.crt", "perm": "0600"}, {"optional": true, "owner": "mysql:mysql", "path": "/etc/pki/tls/private/mysql.key", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/nova_placement.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_placement.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/sahara-api --config-file /etc/sahara/sahara.conf', 'permissions': [{'owner': u'sahara:sahara', 'path': u'/var/lib/sahara', 'recurse': True}, {'owner': u'sahara:sahara', 'path': u'/var/log/sahara', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/sahara-api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/sahara-api.json", "value": {"command": "/usr/bin/sahara-api --config-file /etc/sahara/sahara.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "sahara:sahara", "path": "/var/lib/sahara", "recurse": true}, {"owner": "sahara:sahara", "path": "/var/log/sahara", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/aodh_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/aodh_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'keystone:keystone', 'path': u'/var/log/keystone', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/keystone_cron.json'}) => {"changed": false, "item": {"key": 
"/var/lib/kolla/config_files/keystone_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "keystone:keystone", "path": "/var/log/keystone", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': u'/var/lib/kolla/config_files/neutron_server_tls_proxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_server_tls_proxy.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-replicator /etc/swift/object-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_object_replicator.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_object_replicator.json", "value": {"command": "/usr/bin/swift-object-replicator /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-conductor ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/nova_conductor.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_conductor.json", "value": {"command": "/usr/bin/nova-conductor ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/heat_api_cfn.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/heat_api_cfn.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-api-metadata ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/nova_metadata.json'}) => {"changed": false, "item": 
{"key": "/var/lib/kolla/config_files/nova_metadata.json", "value": {"command": "/usr/bin/nova-api-metadata ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/neutron_ovs_agent_launcher.sh', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/neutron_ovs_agent.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_ovs_agent.json", "value": {"command": "/neutron_ovs_agent_launcher.sh", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/libqb/force-filesystem-sockets', 'owner': u'root', 'perm': u'0644', 'source': u'/dev/null'}, {'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/sbin/pacemaker_remoted', 'permissions': [{'owner': u'rabbitmq:rabbitmq', 'path': u'/var/lib/rabbitmq', 'recurse': True}, {'owner': u'rabbitmq:rabbitmq', 'path': u'/var/log/rabbitmq', 'recurse': True}, {'owner': u'rabbitmq:rabbitmq', 'path': u'/etc/pki/tls/certs/rabbitmq.crt', 'optional': True, 'perm': u'0600'}, {'owner': u'rabbitmq:rabbitmq', 'path': u'/etc/pki/tls/private/rabbitmq.key', 'optional': True, 'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/rabbitmq.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/rabbitmq.json", "value": {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "rabbitmq:rabbitmq", "path": "/var/lib/rabbitmq", "recurse": true}, {"owner": "rabbitmq:rabbitmq", "path": "/var/log/rabbitmq", "recurse": true}, {"optional": true, "owner": "rabbitmq:rabbitmq", "path": "/etc/pki/tls/certs/rabbitmq.crt", "perm": "0600"}, {"optional": true, "owner": "rabbitmq:rabbitmq", "path": "/etc/pki/tls/private/rabbitmq.key", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-consoleauth ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/nova_consoleauth.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_consoleauth.json", "value": {"command": "/usr/bin/nova-consoleauth ", "config_files": [{"dest": "/", "merge": true, 
"preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-updater /etc/swift/object-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_object_updater.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_object_updater.json", "value": {"command": "/usr/bin/swift-object-updater /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-server --log-file=/var/log/neutron/server.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/neutron_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_api.json", "value": {"command": "/usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-server --log-file=/var/log/neutron/server.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/cinder-scheduler --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/cinder_scheduler.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/cinder_scheduler.json", "value": {"command": "/usr/bin/cinder-scheduler --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/gnocchi-metricd', 'permissions': [{'owner': u'gnocchi:gnocchi', 
'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/gnocchi_metricd.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/gnocchi_metricd.json", "value": {"command": "/usr/bin/gnocchi-metricd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-metadata-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/metadata_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-metadata-agent --log-file=/var/log/neutron/metadata-agent.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}, {'owner': u'neutron:neutron', 'path': u'/var/lib/neutron', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/neutron_metadata_agent.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_metadata_agent.json", "value": {"command": "/usr/bin/neutron-metadata-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/metadata_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-metadata-agent --log-file=/var/log/neutron/metadata-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-replicator /etc/swift/container-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_container_replicator.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_container_replicator.json", "value": {"command": "/usr/bin/swift-container-replicator /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/heat-engine --config-file /usr/share/heat/heat-dist.conf --config-file /etc/heat/heat.conf ', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/heat_engine.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/heat_engine.json", 
"value": {"command": "/usr/bin/heat-engine --config-file /usr/share/heat/heat-dist.conf --config-file /etc/heat/heat.conf ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/nova_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-server /etc/swift/object-server.conf', 'permissions': [{'owner': u'swift:swift', 'path': u'/var/cache/swift', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/swift_object_server.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_object_server.json", "value": {"command": "/usr/bin/swift-object-server /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "swift:swift", "path": "/var/cache/swift", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'stunnel /etc/stunnel/stunnel.conf', 'permissions': [{'owner': u'root:root', 'path': u'/etc/pki/tls/certs/redis.crt', 'optional': True, 'perm': u'0600'}, {'owner': u'root:root', 'path': u'/etc/pki/tls/private/redis.key', 'optional': True, 'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/redis_tls_proxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/redis_tls_proxy.json", "value": {"command": "stunnel /etc/stunnel/stunnel.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"optional": true, "owner": "root:root", "path": "/etc/pki/tls/certs/redis.crt", "perm": "0600"}, {"optional": true, "owner": "root:root", "path": "/etc/pki/tls/private/redis.key", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': 
True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/gnocchi_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/gnocchi_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/cinder_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/cinder_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}, {'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/cinder_volume.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/cinder_volume.json", "value": {"command": "/usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'panko:panko', 'path': u'/var/log/panko', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/panko_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/panko_api.json", 
"value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "panko:panko", "path": "/var/log/panko", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-auditor /etc/swift/object-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_object_auditor.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_object_auditor.json", "value": {"command": "/usr/bin/swift-object-auditor /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-l3-agent --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/l3_agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/l3_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-l3-agent --log-file=/var/log/neutron/l3-agent.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}, {'owner': u'neutron:neutron', 'path': u'/var/lib/neutron', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/neutron_l3_agent.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_l3_agent.json", "value": {"command": "/usr/bin/neutron-l3-agent --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/l3_agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/l3_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-l3-agent --log-file=/var/log/neutron/l3-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/aodh-listener', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/aodh_listener.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/aodh_listener.json", "value": {"command": "/usr/bin/aodh-listener", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-server /etc/swift/container-server.conf'}, 'key': 
u'/var/lib/kolla/config_files/swift_container_server.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_container_server.json", "value": {"command": "/usr/bin/swift-container-server /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/aodh-evaluator', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/aodh_evaluator.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/aodh_evaluator.json", "value": {"command": "/usr/bin/aodh-evaluator", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': u'/var/lib/kolla/config_files/glance_api_tls_proxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/glance_api_tls_proxy.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/sbin/iscsid -f'}, 'key': u'/var/lib/kolla/config_files/iscsid.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/iscsid.json", "value": {"command": "/usr/sbin/iscsid -f", "config_files": [{"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/gnocchi-statsd', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/gnocchi_statsd.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/gnocchi_statsd.json", "value": {"command": "/usr/bin/gnocchi-statsd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional 
result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'apache:apache', 'path': u'/var/log/horizon/', 'recurse': True}, {'owner': u'apache:apache', 'path': u'/etc/openstack-dashboard/', 'recurse': True}, {'owner': u'apache:apache', 'path': u'/usr/share/openstack-dashboard/openstack_dashboard/local/', 'recurse': False}, {'owner': u'apache:apache', 'path': u'/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.d/', 'recurse': False}]}, 'key': u'/var/lib/kolla/config_files/horizon.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/horizon.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "apache:apache", "path": "/var/log/horizon/", "recurse": true}, {"owner": "apache:apache", "path": "/etc/openstack-dashboard/", "recurse": true}, {"owner": "apache:apache", "path": "/usr/share/openstack-dashboard/openstack_dashboard/local/", "recurse": false}, {"owner": "apache:apache", "path": "/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.d/", "recurse": false}]}}, "skip_reason": "Conditional result was False"} > >TASK [Clean /var/lib/docker-puppet/docker-puppet-tasks*.json files] ************ >Friday 21 September 2018 08:38:28 -0400 (0:00:00.794) 0:21:50.884 ****** > [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > >TASK [Write docker-puppet-tasks json files] ************************************ >Friday 21 September 2018 08:38:28 -0400 (0:00:00.105) 0:21:50.990 ****** >skipping: [controller-0] => (item={'value': [{'puppet_tags': u'keystone_config,keystone_domain_config,keystone_endpoint,keystone_identity_provider,keystone_paste_ini,keystone_role,keystone_service,keystone_tenant,keystone_user,keystone_user_role,keystone_domain', 'config_volume': u'keystone_init_tasks', 'step_config': u'include ::tripleo::profile::base::keystone', 'config_image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-20.1'}], 'key': u'step_3'}) => {"changed": false, "item": {"key": "step_3", "value": [{"config_image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-20.1", "config_volume": "keystone_init_tasks", "puppet_tags": "keystone_config,keystone_domain_config,keystone_endpoint,keystone_identity_provider,keystone_paste_ini,keystone_role,keystone_service,keystone_tenant,keystone_user,keystone_user_role,keystone_domain", "step_config": "include ::tripleo::profile::base::keystone"}]}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': [{'puppet_tags': u'cinder_config,cinder_type,file,concat,file_line', 'config_volume': u'cinder_init_tasks', 'step_config': u'include ::tripleo::profile::base::cinder::api', 'config_image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-20.1', 'volumes': [u'/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro']}], 'key': u'step_4'}) => {"changed": false, "item": {"key": "step_4", "value": [{"config_image": 
"192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-20.1", "config_volume": "cinder_init_tasks", "puppet_tags": "cinder_config,cinder_type,file,concat,file_line", "step_config": "include ::tripleo::profile::base::cinder::api", "volumes": ["/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro"]}]}, "skip_reason": "Conditional result was False"} > >TASK [Set host puppet debugging fact string] *********************************** >Friday 21 September 2018 08:38:28 -0400 (0:00:00.113) 0:21:51.103 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Write the config_step hieradata] ***************************************** >Friday 21 September 2018 08:38:28 -0400 (0:00:00.110) 0:21:51.213 ****** >changed: [controller-0] => {"changed": true, "checksum": "62439dd24dde40c90e7a39f6a1b31cc6061fe59b", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "d1a4fc06e2525150450e67007bfcc8f3", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537533508.85-223537408669163/source", "state": "file", "uid": 0} >changed: [compute-0] => {"changed": true, "checksum": "62439dd24dde40c90e7a39f6a1b31cc6061fe59b", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "d1a4fc06e2525150450e67007bfcc8f3", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537533508.9-85040383204084/source", "state": "file", "uid": 0} >changed: [ceph-0] => {"changed": true, "checksum": "62439dd24dde40c90e7a39f6a1b31cc6061fe59b", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "d1a4fc06e2525150450e67007bfcc8f3", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537533508.93-227357600892322/source", "state": "file", "uid": 0} > >TASK [Run puppet host configuration for step 3] ******************************** >Friday 21 September 2018 08:38:29 -0400 (0:00:00.775) 0:21:51.989 ****** >changed: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} >changed: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} > >changed: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} > >TASK [Debug output for task which failed: Run puppet host configuration for step 3] *** >Friday 21 September 2018 08:38:44 -0400 (0:00:15.079) 0:22:07.068 ****** >ok: [controller-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 3.00 seconds", > "Notice: 
/Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_Controller3]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: /Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/iptables]/seltype: seltype changed 'etc_t' to 'system_conf_t'", > "Notice: /Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/ip6tables]/seltype: seltype changed 'etc_t' to 'system_conf_t'", > "Notice: Applied catalog in 3.62 seconds", > "Changes:", > " Total: 4", > "Events:", > " Success: 4", > "Resources:", > " Total: 216", > " Corrective change: 3", > " Out of sync: 4", > " Changed: 4", > "Time:", > " Filebucket: 0.00", > " Concat fragment: 0.00", > " Concat file: 0.00", > " Schedule: 0.00", > " Anchor: 0.00", > " File line: 0.00", > " Cron: 0.00", > " User: 0.00", > " Package manifest: 0.00", > " Sysctl runtime: 0.00", > " Sysctl: 0.00", > " Augeas: 0.02", > " Firewall: 0.08", > " File: 0.15", > " Service: 0.18", > " Pcmk property: 0.40", > " Pcmk resource default: 0.41", > " Package: 0.44", > " Exec: 0.93", > " Last run: 1537533524", > " Config retrieval: 3.52", > " Total: 6.14", > "Version:", > " Config: 1537533517", > " Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > "Warning: tag is a metaparam; this value will inherit to all contained resources in the tripleo::firewall::rule definition", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 148]:" > ] >} >ok: [compute-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for compute-0.localdomain in environment production in 1.85 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_Compute3]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: Applied catalog in 1.22 seconds", > "Changes:", > " Total: 2", > "Events:", > " Success: 2", > "Resources:", > " Corrective change: 1", > " Total: 140", > " Out of sync: 2", > " Changed: 2", > "Time:", > " Filebucket: 0.00", > " Concat file: 0.00", > " Anchor: 0.00", > " Schedule: 0.00", > " Cron: 0.00", > " Package manifest: 0.00", > " Sysctl runtime: 0.00", > " Sysctl: 0.01", > " Firewall: 0.01", > " Augeas: 0.01", > " File: 0.09", > " Service: 0.13", > " Exec: 0.21", > " Package: 0.24", > " Last run: 1537533520", > " Config retrieval: 2.16", > " Total: 2.87", > " Concat fragment: 0.00", > "Version:", > " Config: 1537533516", > " Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > "Warning: tag is a metaparam; this value will inherit to all contained resources in the tripleo::firewall::rule definition", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 148]:" > ] >} >ok: [ceph-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for ceph-0.localdomain in environment production in 1.98 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_CephStorage3]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: Applied catalog in 1.38 seconds", > "Changes:", > " Total: 2", > "Events:", > " Success: 2", > "Resources:", > " Corrective change: 1", > " Total: 134", > " Out of sync: 2", > " Changed: 2", > "Time:", > " Concat file: 0.00", > " Schedule: 0.00", > " Anchor: 0.00", > " Cron: 0.00", > " Package manifest: 0.00", > " Sysctl: 0.01", > " Sysctl runtime: 0.01", > " Firewall: 0.01", > " Augeas: 0.02", > " File: 0.04", > " Service: 0.11", > " Exec: 0.21", > " Package: 0.24", > " Last run: 1537533520", > " Config retrieval: 2.32", > " Total: 2.96", > " Filebucket: 0.00", > " Concat fragment: 0.00", > "Version:", > " Config: 1537533516", > " Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > "Warning: tag is a metaparam; this value will inherit to all contained resources in the tripleo::firewall::rule definition", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 148]:" > ] >} > >TASK [Run docker-puppet tasks (generate config) during step 3] ***************** >Friday 21 September 2018 08:38:44 -0400 (0:00:00.151) 0:22:07.220 ****** >skipping: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} > >TASK [Debug output for task which failed: Run docker-puppet tasks (generate config) during step 3] *** >Friday 21 September 2018 08:38:44 -0400 (0:00:00.102) 0:22:07.323 ****** >skipping: [controller-0] => {} >skipping: [compute-0] => {} >skipping: [ceph-0] => {} > >TASK [Start containers for step 3] ********************************************* >Friday 21 September 2018 08:38:44 -0400 (0:00:00.098) 0:22:07.422 ****** >ok: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >ok: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} > >ok: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} > > >TASK [Debug output for task which failed: Start containers for step 3] ********* >Friday 21 September 2018 08:40:02 -0400 (0:01:17.649) 0:23:25.071 ****** >ok: [compute-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-nova-libvirt ... 
", > "2018-09-20.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-nova-libvirt", > "378837c0e24a: Already exists", > "e17262bc2341: Already exists", > "86a0e618a180: Already exists", > "bf16b9ed859d: Pulling fs layer", > "bf16b9ed859d: Verifying Checksum", > "bf16b9ed859d: Download complete", > "bf16b9ed859d: Pull complete", > "Digest: sha256:2fb8166f16814e3ebda7c7cd9d5ecb731a05706b59d90c13d9cf7d4bcd589560", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-09-20.1", > "", > "stderr: ", > "stdout: \u001b[0;32mInfo: Loading facts\u001b[0m", > "\u001b[0;32mInfo: Loading facts\u001b[0m", > "\u001b[mNotice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend\u001b[0m", > "\u001b[mNotice: Compiled catalog for compute-0.localdomain in environment production in 1.43 seconds\u001b[0m", > "\u001b[0;32mInfo: Applying configuration version '1537533549'\u001b[0m", > "\u001b[mNotice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron::Plugins::Ovs::Bridge[datacentre:br-ex]/Vs_bridge[br-ex]/ensure: created\u001b[0m", > "\u001b[0;32mInfo: Neutron::Plugins::Ovs::Bridge[datacentre:br-ex]: Unscheduling all events on Neutron::Plugins::Ovs::Bridge[datacentre:br-ex]\u001b[0m", > "\u001b[mNotice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron::Plugins::Ovs::Bridge[tenant:br-isolated]/Vs_bridge[br-isolated]/external_ids: external_ids changed 'PMD: net_mlx5: cannot load glue library: libibverbs.so.1: cannot open shared object file: No such file or directory,PMD: net_mlx5: cannot initialize PMD due to missing run-time dependency on rdma-core libraries (libibverbs, libmlx5),PMD: net_mlx4: cannot load glue library: libibverbs.so.1: cannot open shared object file: No such file or directory,PMD: net_mlx4: cannot initialize PMD due to missing run-time dependency on rdma-core libraries (libibverbs, libmlx4)' to 'bridge-id=br-isolated'\u001b[0m", > "\u001b[0;32mInfo: Neutron::Plugins::Ovs::Bridge[tenant:br-isolated]: Unscheduling all events on Neutron::Plugins::Ovs::Bridge[tenant:br-isolated]\u001b[0m", > "\u001b[0;32mInfo: Creating state file /var/lib/puppet/state/state.yaml\u001b[0m", > "\u001b[mNotice: Applied catalog in 0.23 seconds\u001b[0m", > "stderr: PMD: net_mlx5: cannot load glue library: libibverbs.so.1: cannot open shared object file: No such file or directory", > "PMD: net_mlx5: cannot initialize PMD due to missing run-time dependency on rdma-core libraries (libibverbs, libmlx5)", > "PMD: net_mlx4: cannot load glue library: libibverbs.so.1: cannot open shared object file: No such file or directory", > "PMD: net_mlx4: cannot initialize PMD due to missing run-time dependency on rdma-core libraries (libibverbs, libmlx4)", > "PMD: net_mlx5: cannot load glue library: libibverbs.so.1: cannot open shared object file: No such file or directory", > "\u001b[1;33mWarning: Facter: Could not retrieve fact='nic_alias', resolution='<anonymous>': Could not execute '/usr/bin/os-net-config -i': command not found\u001b[0m", > "\u001b[1;33mWarning: Undefined variable 'deploy_config_name'; ", > " (file & line not available)\u001b[0m", > "\u001b[1;33mWarning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/neutron/manifests/agents/ml2/ovs.pp\", 208]:[\"unknown\", 1]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')\u001b[0m", > "stdout: INFO:nova_statedir:Applying nova statedir ownership", > "INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436", > "INFO:nova_statedir:Checking uid: 0 gid: 0 path: /var/lib/nova/", > "INFO:nova_statedir:Changing ownership of /var/lib/nova from 0:0 to 42436:42436", > "INFO:nova_statedir:Checking uid: 0 gid: 0 path: /var/lib/nova/instances/", > "INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 0:0 to 42436:42436", > "INFO:nova_statedir:Nova statedir ownership complete", > "stdout: 39aadfdd325a0002e6933647a2f6261c370687ebabd92c2d8b3444e23bb16e56", > "stdout: f8bb4f179f121b2d4f085b2cb078f15ab781f13620121c611ca0efc3b74cdb05", > "stdout: 947390e2fbe38f57f0f6e0862cbdac2f96d61b8b41a662492d5ac9355815c91c" > ] >} >ok: [ceph-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [] >} >ok: [controller-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-ceilometer-notification ... ", > "2018-09-20.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-ceilometer-notification", > "378837c0e24a: Already exists", > "e17262bc2341: Already exists", > "86a0e618a180: Already exists", > "dfa58d50e0a3: Already exists", > "39327dc96373: Already exists", > "462bb934ba0c: Pulling fs layer", > "462bb934ba0c: Verifying Checksum", > "462bb934ba0c: Download complete", > "462bb934ba0c: Pull complete", > "Digest: sha256:1c054e3c52896025029d1db9d50fc1f03a19812e486eee3c06bfb78d662fc6a2", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-09-20.1", > "", > "stderr: ", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-swift-account ... ", > "2018-09-20.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-swift-account", > "d006a62af35a: Already exists", > "1907d7909371: Pulling fs layer", > "1907d7909371: Verifying Checksum", > "1907d7909371: Download complete", > "1907d7909371: Pull complete", > "Digest: sha256:1aaa76c70313abc1a57abf178fd69b5510fe7c57a3e48ff8b9139c83c8c3490b", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-20.1", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-swift-object ... ", > "2018-09-20.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-swift-object", > "db9f20c69fa2: Pulling fs layer", > "db9f20c69fa2: Verifying Checksum", > "db9f20c69fa2: Download complete", > "db9f20c69fa2: Pull complete", > "Digest: sha256:db271d9f3edddf62f0b2a652ceac7567548f02bb1a943fbd9c63a154f28afa5c", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-20.1", > "stdout: ", > "stdout: 699488a653e7ff53a6a474cb2aa8d110a47e29478d9f7e6ca24ac1e567e9fa47", > "stdout: 2018-09-21 12:38:49.865 11 WARNING oslo_config.cfg [-] Deprecated: Option \"db_backend\" from group \"DEFAULT\" is deprecated. Use option \"backend\" from group \"database\".\u001b[00m", > "2018-09-21 12:38:49.954 11 INFO migrate.versioning.api [-] 70 -> 71... \u001b[00m", > "2018-09-21 12:38:50.111 11 INFO migrate.versioning.api [-] done\u001b[00m", > "2018-09-21 12:38:50.112 11 INFO migrate.versioning.api [-] 71 -> 72... 
\u001b[00m", > "2018-09-21 12:38:50.147 11 INFO migrate.versioning.api [-] done\u001b[00m", > "2018-09-21 12:38:50.147 11 INFO migrate.versioning.api [-] 72 -> 73... \u001b[00m", > "2018-09-21 12:38:50.189 11 INFO migrate.versioning.api [-] done\u001b[00m", > "2018-09-21 12:38:50.189 11 INFO migrate.versioning.api [-] 73 -> 74... \u001b[00m", > "2018-09-21 12:38:50.195 11 INFO migrate.versioning.api [-] done\u001b[00m", > "2018-09-21 12:38:50.195 11 INFO migrate.versioning.api [-] 74 -> 75... \u001b[00m", > "2018-09-21 12:38:50.201 11 INFO migrate.versioning.api [-] done\u001b[00m", > "2018-09-21 12:38:50.201 11 INFO migrate.versioning.api [-] 75 -> 76... \u001b[00m", > "2018-09-21 12:38:50.207 11 INFO migrate.versioning.api [-] done\u001b[00m", > "2018-09-21 12:38:50.207 11 INFO migrate.versioning.api [-] 76 -> 77... \u001b[00m", > "2018-09-21 12:38:50.213 11 INFO migrate.versioning.api [-] done\u001b[00m", > "2018-09-21 12:38:50.214 11 INFO migrate.versioning.api [-] 77 -> 78... \u001b[00m", > "2018-09-21 12:38:50.220 11 INFO migrate.versioning.api [-] done\u001b[00m", > "2018-09-21 12:38:50.221 11 INFO migrate.versioning.api [-] 78 -> 79... \u001b[00m", > "2018-09-21 12:38:50.447 11 INFO migrate.versioning.api [-] done\u001b[00m", > "2018-09-21 12:38:50.447 11 INFO migrate.versioning.api [-] 79 -> 80... \u001b[00m", > "2018-09-21 12:38:50.498 11 INFO migrate.versioning.api [-] done\u001b[00m", > "2018-09-21 12:38:50.498 11 INFO migrate.versioning.api [-] 80 -> 81... \u001b[00m", > "2018-09-21 12:38:50.505 11 INFO migrate.versioning.api [-] done\u001b[00m", > "2018-09-21 12:38:50.505 11 INFO migrate.versioning.api [-] 81 -> 82... \u001b[00m", > "2018-09-21 12:38:50.511 11 INFO migrate.versioning.api [-] done\u001b[00m", > "2018-09-21 12:38:50.511 11 INFO migrate.versioning.api [-] 82 -> 83... \u001b[00m", > "2018-09-21 12:38:50.517 11 INFO migrate.versioning.api [-] done\u001b[00m", > "2018-09-21 12:38:50.517 11 INFO migrate.versioning.api [-] 83 -> 84... \u001b[00m", > "2018-09-21 12:38:50.523 11 INFO migrate.versioning.api [-] done\u001b[00m", > "2018-09-21 12:38:50.523 11 INFO migrate.versioning.api [-] 84 -> 85... \u001b[00m", > "2018-09-21 12:38:50.529 11 INFO migrate.versioning.api [-] done\u001b[00m", > "2018-09-21 12:38:50.530 11 INFO migrate.versioning.api [-] 85 -> 86... 
\u001b[00m", > "2018-09-21 12:38:50.580 11 INFO migrate.versioning.api [-] done\u001b[00m", > "stdout: \u001b[0;32mInfo: Loading facts\u001b[0m", > "\u001b[0;32mInfo: Loading facts\u001b[0m", > "\u001b[mNotice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend\u001b[0m", > "\u001b[mNotice: Compiled catalog for controller-0.localdomain in environment production in 1.64 seconds\u001b[0m", > "\u001b[0;32mInfo: Applying configuration version '1537533539'\u001b[0m", > "\u001b[mNotice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron::Plugins::Ovs::Bridge[datacentre:br-ex]/Vs_bridge[br-ex]/external_ids: external_ids changed 'PMD: net_mlx5: cannot load glue library: libibverbs.so.1: cannot open shared object file: No such file or directory,PMD: net_mlx5: cannot initialize PMD due to missing run-time dependency on rdma-core libraries (libibverbs, libmlx5),PMD: net_mlx4: cannot load glue library: libibverbs.so.1: cannot open shared object file: No such file or directory,PMD: net_mlx4: cannot initialize PMD due to missing run-time dependency on rdma-core libraries (libibverbs, libmlx4)' to 'bridge-id=br-ex'\u001b[0m", > "\u001b[0;32mInfo: Neutron::Plugins::Ovs::Bridge[datacentre:br-ex]: Unscheduling all events on Neutron::Plugins::Ovs::Bridge[datacentre:br-ex]\u001b[0m", > "\u001b[mNotice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron::Plugins::Ovs::Bridge[tenant:br-isolated]/Vs_bridge[br-isolated]/external_ids: external_ids changed 'PMD: net_mlx5: cannot load glue library: libibverbs.so.1: cannot open shared object file: No such file or directory,PMD: net_mlx5: cannot initialize PMD due to missing run-time dependency on rdma-core libraries (libibverbs, libmlx5),PMD: net_mlx4: cannot load glue library: libibverbs.so.1: cannot open shared object file: No such file or directory,PMD: net_mlx4: cannot initialize PMD due to missing run-time dependency on rdma-core libraries (libibverbs, libmlx4)' to 'bridge-id=br-isolated'\u001b[0m", > "\u001b[0;32mInfo: Neutron::Plugins::Ovs::Bridge[tenant:br-isolated]: Unscheduling all events on Neutron::Plugins::Ovs::Bridge[tenant:br-isolated]\u001b[0m", > "\u001b[0;32mInfo: Creating state file /var/lib/puppet/state/state.yaml\u001b[0m", > "\u001b[mNotice: Applied catalog in 0.26 seconds\u001b[0m", > "stderr: PMD: net_mlx5: cannot load glue library: libibverbs.so.1: cannot open shared object file: No such file or directory", > "PMD: net_mlx5: cannot initialize PMD due to missing run-time dependency on rdma-core libraries (libibverbs, libmlx5)", > "PMD: net_mlx4: cannot load glue library: libibverbs.so.1: cannot open shared object file: No such file or directory", > "PMD: net_mlx4: cannot initialize PMD due to missing run-time dependency on rdma-core libraries (libibverbs, libmlx4)", > "PMD: net_mlx5: cannot load glue library: libibverbs.so.1: cannot open shared object file: No such file or directory", > "\u001b[1;33mWarning: Facter: Could not retrieve fact='nic_alias', resolution='<anonymous>': Could not execute '/usr/bin/os-net-config -i': command not found\u001b[0m", > "\u001b[1;33mWarning: Undefined variable 'deploy_config_name'; ", > " (file & line not available)\u001b[0m", > "\u001b[1;33mWarning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/neutron/manifests/agents/ml2/ovs.pp\", 208]:[\"unknown\", 1]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')\u001b[0m", > "stderr: Deprecated: Option \"logdir\" from group \"DEFAULT\" is deprecated. Use option \"log-dir\" from group \"DEFAULT\".", > "stdout: Upgraded database to: rocky_expand02, current revision(s): rocky_expand02", > "Database migration is up to date. No migration needed.", > "Upgraded database to: rocky_contract02, current revision(s): rocky_contract02", > "Database is synced successfully.", > "stderr: + sudo -E kolla_set_configs", > "INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json", > "INFO:__main__:Validating config file", > "INFO:__main__:Kolla config strategy set to: COPY_ALWAYS", > "INFO:__main__:Copying service configuration files", > "INFO:__main__:Deleting /etc/glance/glance-api.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/glance/glance-api.conf to /etc/glance/glance-api.conf", > "INFO:__main__:Deleting /etc/glance/glance-cache.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/glance/glance-cache.conf to /etc/glance/glance-cache.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/my.cnf.d/tripleo.cnf to /etc/my.cnf.d/tripleo.cnf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src-ceph/ceph.conf to /etc/ceph/ceph.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src-ceph/ceph.client.admin.keyring to /etc/ceph/ceph.client.admin.keyring", > "INFO:__main__:Copying /var/lib/kolla/config_files/src-ceph/ceph.mon.keyring to /etc/ceph/ceph.mon.keyring", > "INFO:__main__:Copying /var/lib/kolla/config_files/src-ceph/ceph.mgr.controller-0.keyring to /etc/ceph/ceph.mgr.controller-0.keyring", > "INFO:__main__:Copying /var/lib/kolla/config_files/src-ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring", > "INFO:__main__:Copying /var/lib/kolla/config_files/src-ceph/ceph.client.manila.keyring to /etc/ceph/ceph.client.manila.keyring", > "INFO:__main__:Copying /var/lib/kolla/config_files/src-ceph/ceph.client.radosgw.keyring to /etc/ceph/ceph.client.radosgw.keyring", > "INFO:__main__:Writing out command to execute", > "INFO:__main__:Setting permission for /var/lib/glance", > "INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring", > "++ cat /run_command", > "+ CMD='/usr/bin/glance-api --config-file /usr/share/glance/glance-api-dist.conf --config-file /etc/glance/glance-api.conf'", > "+ ARGS=", > "+ [[ ! -n '' ]]", > "+ . kolla_extend_start", > "++ [[ ! -d /var/log/kolla/glance ]]", > "++ mkdir -p /var/log/kolla/glance", > "+++ stat -c %a /var/log/kolla/glance", > "++ [[ 2755 != \\7\\5\\5 ]]", > "++ chmod 755 /var/log/kolla/glance", > "++ . 
/usr/local/bin/kolla_glance_extend_start", > "+++ [[ -n 0 ]]", > "+++ glance-manage db_sync", > "/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:1352: OsloDBDeprecationWarning: EngineFacade is deprecated; please use oslo_db.sqlalchemy.enginefacade", > " expire_on_commit=expire_on_commit, _conf=conf)", > "INFO [alembic.runtime.migration] Context impl MySQLImpl.", > "INFO [alembic.runtime.migration] Will assume non-transactional DDL.", > "INFO [alembic.runtime.migration] Running upgrade -> liberty, liberty initial", > "INFO [alembic.runtime.migration] Running upgrade liberty -> mitaka01, add index on created_at and updated_at columns of 'images' table", > "INFO [alembic.runtime.migration] Running upgrade mitaka01 -> mitaka02, update metadef os_nova_server", > "INFO [alembic.runtime.migration] Running upgrade mitaka02 -> ocata_expand01, add visibility to images", > "INFO [alembic.runtime.migration] Running upgrade ocata_expand01 -> pike_expand01, empty expand for symmetry with pike_contract01", > "INFO [alembic.runtime.migration] Running upgrade pike_expand01 -> queens_expand01", > "INFO [alembic.runtime.migration] Running upgrade queens_expand01 -> rocky_expand01, add os_hidden column to images table", > "INFO [alembic.runtime.migration] Running upgrade rocky_expand01 -> rocky_expand02, add os_hash_algo and os_hash_value columns to images table", > "INFO [alembic.runtime.migration] Running upgrade mitaka02 -> ocata_contract01, remove is_public from images", > "INFO [alembic.runtime.migration] Running upgrade ocata_contract01 -> pike_contract01, drop glare artifacts tables", > "INFO [alembic.runtime.migration] Running upgrade pike_contract01 -> queens_contract01", > "INFO [alembic.runtime.migration] Running upgrade queens_contract01 -> rocky_contract01", > "INFO [alembic.runtime.migration] Running upgrade rocky_contract01 -> rocky_contract02", > "+++ glance-manage db_load_metadefs", > "+++ exit 0", > "stdout: '/swift_ringbuilder/etc/swift/account.ring.gz' -> '/etc/swift/account.ring.gz'", > "'/swift_ringbuilder/etc/swift/container.ring.gz' -> '/etc/swift/container.ring.gz'", > "'/swift_ringbuilder/etc/swift/object.ring.gz' -> '/etc/swift/object.ring.gz'", > "'/swift_ringbuilder/etc/swift/account.builder' -> '/etc/swift/account.builder'", > "'/swift_ringbuilder/etc/swift/container.builder' -> '/etc/swift/container.builder'", > "'/swift_ringbuilder/etc/swift/object.builder' -> '/etc/swift/object.builder'", > "'/swift_ringbuilder/etc/swift/backups' -> '/etc/swift/backups'", > "'/swift_ringbuilder/etc/swift/backups/1537532589.object.builder' -> '/etc/swift/backups/1537532589.object.builder'", > "'/swift_ringbuilder/etc/swift/backups/1537532590.account.builder' -> '/etc/swift/backups/1537532590.account.builder'", > "'/swift_ringbuilder/etc/swift/backups/1537532590.container.builder' -> '/etc/swift/backups/1537532590.container.builder'", > "'/swift_ringbuilder/etc/swift/backups/1537532592.account.builder' -> '/etc/swift/backups/1537532592.account.builder'", > "'/swift_ringbuilder/etc/swift/backups/1537532592.account.ring.gz' -> '/etc/swift/backups/1537532592.account.ring.gz'", > "'/swift_ringbuilder/etc/swift/backups/1537532592.object.builder' -> '/etc/swift/backups/1537532592.object.builder'", > "'/swift_ringbuilder/etc/swift/backups/1537532592.object.ring.gz' -> '/etc/swift/backups/1537532592.object.ring.gz'", > "'/swift_ringbuilder/etc/swift/backups/1537532593.container.builder' -> '/etc/swift/backups/1537532593.container.builder'", > 
"'/swift_ringbuilder/etc/swift/backups/1537532593.container.ring.gz' -> '/etc/swift/backups/1537532593.container.ring.gz'", > "stderr: INFO [alembic.runtime.migration] Context impl MySQLImpl.", > "INFO [alembic.runtime.migration] Running upgrade -> 001, Icehouse release", > "INFO [alembic.runtime.migration] Running upgrade 001 -> 002, placeholder", > "INFO [alembic.runtime.migration] Running upgrade 002 -> 003, placeholder", > "INFO [alembic.runtime.migration] Running upgrade 003 -> 004, placeholder", > "INFO [alembic.runtime.migration] Running upgrade 004 -> 005, placeholder", > "INFO [alembic.runtime.migration] Running upgrade 005 -> 006, placeholder", > "INFO [alembic.runtime.migration] Running upgrade 006 -> 007, convert clusters.status_description to LongText", > "INFO [alembic.runtime.migration] Running upgrade 007 -> 008, add security_groups field to node groups", > "INFO [alembic.runtime.migration] Running upgrade 008 -> 009, add rollback info to cluster", > "INFO [alembic.runtime.migration] Running upgrade 009 -> 010, add auto_security_groups flag to node group", > "INFO [alembic.runtime.migration] Running upgrade 010 -> 011, add Sahara settings info to cluster", > "INFO [alembic.runtime.migration] Running upgrade 011 -> 012, add availability_zone field to node groups", > "INFO [alembic.runtime.migration] Running upgrade 012 -> 013, add volumes_availability_zone field to node groups", > "INFO [alembic.runtime.migration] Running upgrade 013 -> 014, add_volume_type", > "INFO [alembic.runtime.migration] Running upgrade 014 -> 015, add_events_objects", > "INFO [alembic.runtime.migration] Running upgrade 015 -> 016, Add is_proxy_gateway", > "INFO [alembic.runtime.migration] Running upgrade 016 -> 017, drop progress in JobExecution", > "INFO [alembic.runtime.migration] Running upgrade 017 -> 018, add volume_local_to_instance flag", > "INFO [alembic.runtime.migration] Running upgrade 018 -> 019, Add is_default field for cluster and node_group templates", > "INFO [alembic.runtime.migration] Running upgrade 019 -> 020, remove redandunt progress ops", > "INFO [alembic.runtime.migration] Running upgrade 020 -> 021, Add data_source_urls to job_executions to support placeholders", > "INFO [alembic.runtime.migration] Running upgrade 021 -> 022, add_job_interface", > "INFO [alembic.runtime.migration] Running upgrade 022 -> 023, add_use_autoconfig", > "INFO [alembic.runtime.migration] Running upgrade 023 -> 024, manila_shares", > "INFO [alembic.runtime.migration] Running upgrade 024 -> 025, Increase internal_ip and management_ip column size to work with IPv6", > "INFO [alembic.runtime.migration] Running upgrade 025 -> 026, add is_public and is_protected flags", > "INFO [alembic.runtime.migration] Running upgrade 026 -> 027, Rename oozie_job_id", > "INFO [alembic.runtime.migration] Running upgrade 027 -> 028, add_storage_devices_number", > "INFO [alembic.runtime.migration] Running upgrade 028 -> 029, set is_protected on is_default", > "INFO [alembic.runtime.migration] Running upgrade 029 -> 030, health-check", > "INFO [alembic.runtime.migration] Running upgrade 030 -> 031, added_plugins_table", > "INFO [alembic.runtime.migration] Running upgrade 031 -> 032, 032_add_domain_name", > "INFO [alembic.runtime.migration] Running upgrade 032 -> 033, 033_add anti_affinity_ratio field to cluster", > "INFO [alembic.runtime.migration] Running upgrade 033 -> 034, Add boot_from_volumes field for node_groups and related classes", > "stdout: 42ba9277599d230c753e8f699b353240c0a56dd4f061d6622e3291cf7c736172", > 
"INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.d/10-keystone_wsgi_admin.conf to /etc/httpd/conf.d/10-keystone_wsgi_admin.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.d/10-keystone_wsgi_main.conf to /etc/httpd/conf.d/10-keystone_wsgi_main.conf", > "INFO:__main__:Deleting /etc/httpd/conf.d/ssl.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.d/ssl.conf to /etc/httpd/conf.d/ssl.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/access_compat.load to /etc/httpd/conf.modules.d/access_compat.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/actions.load to /etc/httpd/conf.modules.d/actions.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/alias.conf to /etc/httpd/conf.modules.d/alias.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/alias.load to /etc/httpd/conf.modules.d/alias.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/auth_basic.load to /etc/httpd/conf.modules.d/auth_basic.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/auth_digest.load to /etc/httpd/conf.modules.d/auth_digest.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authn_anon.load to /etc/httpd/conf.modules.d/authn_anon.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authn_core.load to /etc/httpd/conf.modules.d/authn_core.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authn_dbm.load to /etc/httpd/conf.modules.d/authn_dbm.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authn_file.load to /etc/httpd/conf.modules.d/authn_file.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authz_core.load to /etc/httpd/conf.modules.d/authz_core.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authz_dbm.load to /etc/httpd/conf.modules.d/authz_dbm.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authz_groupfile.load to /etc/httpd/conf.modules.d/authz_groupfile.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authz_host.load to /etc/httpd/conf.modules.d/authz_host.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authz_owner.load to /etc/httpd/conf.modules.d/authz_owner.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authz_user.load to /etc/httpd/conf.modules.d/authz_user.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/autoindex.conf to /etc/httpd/conf.modules.d/autoindex.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/autoindex.load to /etc/httpd/conf.modules.d/autoindex.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/cache.load to /etc/httpd/conf.modules.d/cache.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/cgi.load to /etc/httpd/conf.modules.d/cgi.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/dav.load to /etc/httpd/conf.modules.d/dav.load", > "INFO:__main__:Copying 
/var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/dav_fs.conf to /etc/httpd/conf.modules.d/dav_fs.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/dav_fs.load to /etc/httpd/conf.modules.d/dav_fs.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/deflate.conf to /etc/httpd/conf.modules.d/deflate.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/deflate.load to /etc/httpd/conf.modules.d/deflate.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/dir.conf to /etc/httpd/conf.modules.d/dir.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/dir.load to /etc/httpd/conf.modules.d/dir.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/env.load to /etc/httpd/conf.modules.d/env.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/expires.load to /etc/httpd/conf.modules.d/expires.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/ext_filter.load to /etc/httpd/conf.modules.d/ext_filter.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/filter.load to /etc/httpd/conf.modules.d/filter.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/include.load to /etc/httpd/conf.modules.d/include.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/log_config.load to /etc/httpd/conf.modules.d/log_config.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/logio.load to /etc/httpd/conf.modules.d/logio.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/mime.conf to /etc/httpd/conf.modules.d/mime.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/mime.load to /etc/httpd/conf.modules.d/mime.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/mime_magic.conf to /etc/httpd/conf.modules.d/mime_magic.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/mime_magic.load to /etc/httpd/conf.modules.d/mime_magic.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/negotiation.conf to /etc/httpd/conf.modules.d/negotiation.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/negotiation.load to /etc/httpd/conf.modules.d/negotiation.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/prefork.conf to /etc/httpd/conf.modules.d/prefork.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/prefork.load to /etc/httpd/conf.modules.d/prefork.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/rewrite.load to /etc/httpd/conf.modules.d/rewrite.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/setenvif.conf to /etc/httpd/conf.modules.d/setenvif.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/setenvif.load to /etc/httpd/conf.modules.d/setenvif.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/socache_shmcb.load to /etc/httpd/conf.modules.d/socache_shmcb.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/speling.load 
to /etc/httpd/conf.modules.d/speling.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/ssl.load to /etc/httpd/conf.modules.d/ssl.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/status.conf to /etc/httpd/conf.modules.d/status.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/status.load to /etc/httpd/conf.modules.d/status.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/substitute.load to /etc/httpd/conf.modules.d/substitute.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/suexec.load to /etc/httpd/conf.modules.d/suexec.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/systemd.load to /etc/httpd/conf.modules.d/systemd.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/unixd.load to /etc/httpd/conf.modules.d/unixd.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/usertrack.load to /etc/httpd/conf.modules.d/usertrack.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/version.load to /etc/httpd/conf.modules.d/version.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/vhost_alias.load to /etc/httpd/conf.modules.d/vhost_alias.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/wsgi.conf to /etc/httpd/conf.modules.d/wsgi.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/wsgi.load to /etc/httpd/conf.modules.d/wsgi.load", > "INFO:__main__:Deleting /etc/httpd/conf/httpd.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf/httpd.conf to /etc/httpd/conf/httpd.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf/ports.conf to /etc/httpd/conf/ports.conf", > "INFO:__main__:Creating directory /etc/keystone/credential-keys", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/keystone/credential-keys/0 to /etc/keystone/credential-keys/0", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/keystone/credential-keys/1 to /etc/keystone/credential-keys/1", > "INFO:__main__:Creating directory /etc/keystone/fernet-keys", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/keystone/fernet-keys/0 to /etc/keystone/fernet-keys/0", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/keystone/fernet-keys/1 to /etc/keystone/fernet-keys/1", > "INFO:__main__:Deleting /etc/keystone/keystone.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/keystone/keystone.conf to /etc/keystone/keystone.conf", > "INFO:__main__:Creating directory /etc/systemd/system/httpd.service.d", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/systemd/system/httpd.service.d/httpd.conf to /etc/systemd/system/httpd.service.d/httpd.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/var/spool/cron/keystone to /var/spool/cron/keystone", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/var/www/cgi-bin/keystone/keystone-admin to /var/www/cgi-bin/keystone/keystone-admin", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/var/www/cgi-bin/keystone/keystone-public to /var/www/cgi-bin/keystone/keystone-public", > "+ CMD='/usr/sbin/httpd -DFOREGROUND'", > "++ [[ rhel =~ debian|ubuntu ]]", > "++ rm -rf /var/run/httpd/htcacheclean /run/httpd/htcacheclean 
'/tmp/httpd*'", > "++ KEYSTONE_LOG_DIR=/var/log/kolla/keystone", > "++ [[ ! -d /var/log/kolla/keystone ]]", > "++ mkdir -p /var/log/kolla/keystone", > "+++ stat -c %U:%G /var/log/kolla/keystone", > "++ [[ root:kolla != \\k\\e\\y\\s\\t\\o\\n\\e\\:\\k\\o\\l\\l\\a ]]", > "++ chown keystone:kolla /var/log/kolla/keystone", > "++ '[' '!' -f /var/log/kolla/keystone/keystone.log ']'", > "++ touch /var/log/kolla/keystone/keystone.log", > "+++ stat -c %U:%G /var/log/kolla/keystone/keystone.log", > "++ [[ root:kolla != \\k\\e\\y\\s\\t\\o\\n\\e\\:\\k\\e\\y\\s\\t\\o\\n\\e ]]", > "++ chown keystone:keystone /var/log/kolla/keystone/keystone.log", > "+++ stat -c %a /var/log/kolla/keystone", > "++ chmod 755 /var/log/kolla/keystone", > "++ EXTRA_KEYSTONE_MANAGE_ARGS=", > "++ [[ -n '' ]]", > "++ [[ -n 0 ]]", > "++ sudo -H -u keystone keystone-manage db_sync", > "++ exit 0", > "stdout: 17224949fc28032166d1cb6f38d0ec62636d647afc9ae9608fb28a24d2d20978", > "stdout: Running upgrade for neutron ...", > "OK", > "Running upgrade for networking-bgpvpn ...", > "Running upgrade for networking-l2gw ...", > "Running upgrade for networking-odl ...", > "Running upgrade for neutron-fwaas ...", > "Running upgrade for neutron-lbaas ...", > "Running upgrade for vmware-nsx ...", > "INFO [alembic.runtime.migration] Running upgrade -> kilo", > "INFO [alembic.runtime.migration] Running upgrade kilo -> 354db87e3225", > "INFO [alembic.runtime.migration] Running upgrade 354db87e3225 -> 599c6a226151", > "INFO [alembic.runtime.migration] Running upgrade 599c6a226151 -> 52c5312f6baf", > "INFO [alembic.runtime.migration] Running upgrade 52c5312f6baf -> 313373c0ffee", > "INFO [alembic.runtime.migration] Running upgrade 313373c0ffee -> 8675309a5c4f", > "INFO [alembic.runtime.migration] Running upgrade 8675309a5c4f -> 45f955889773", > "INFO [alembic.runtime.migration] Running upgrade 45f955889773 -> 26c371498592", > "INFO [alembic.runtime.migration] Running upgrade 26c371498592 -> 1c844d1677f7", > "INFO [alembic.runtime.migration] Running upgrade 1c844d1677f7 -> 1b4c6e320f79", > "INFO [alembic.runtime.migration] Running upgrade 1b4c6e320f79 -> 48153cb5f051", > "INFO [alembic.runtime.migration] Running upgrade 48153cb5f051 -> 9859ac9c136", > "INFO [alembic.runtime.migration] Running upgrade 9859ac9c136 -> 34af2b5c5a59", > "INFO [alembic.runtime.migration] Running upgrade 34af2b5c5a59 -> 59cb5b6cf4d", > "INFO [alembic.runtime.migration] Running upgrade 59cb5b6cf4d -> 13cfb89f881a", > "INFO [alembic.runtime.migration] Running upgrade 13cfb89f881a -> 32e5974ada25", > "INFO [alembic.runtime.migration] Running upgrade 32e5974ada25 -> ec7fcfbf72ee", > "INFO [alembic.runtime.migration] Running upgrade ec7fcfbf72ee -> dce3ec7a25c9", > "INFO [alembic.runtime.migration] Running upgrade dce3ec7a25c9 -> c3a73f615e4", > "INFO [alembic.runtime.migration] Running upgrade c3a73f615e4 -> 659bf3d90664", > "INFO [alembic.runtime.migration] Running upgrade 659bf3d90664 -> 1df244e556f5", > "INFO [alembic.runtime.migration] Running upgrade 1df244e556f5 -> 19f26505c74f", > "INFO [alembic.runtime.migration] Running upgrade 19f26505c74f -> 15be73214821", > "INFO [alembic.runtime.migration] Running upgrade 15be73214821 -> b4caf27aae4", > "INFO [alembic.runtime.migration] Running upgrade b4caf27aae4 -> 15e43b934f81", > "INFO [alembic.runtime.migration] Running upgrade 15e43b934f81 -> 31ed664953e6", > "INFO [alembic.runtime.migration] Running upgrade 31ed664953e6 -> 2f9e956e7532", > "INFO [alembic.runtime.migration] Running upgrade 2f9e956e7532 -> 3894bccad37f", > 
"INFO [alembic.runtime.migration] Running upgrade 3894bccad37f -> 0e66c5227a8a", > "INFO [alembic.runtime.migration] Running upgrade 0e66c5227a8a -> 45f8dd33480b", > "INFO [alembic.runtime.migration] Running upgrade 45f8dd33480b -> 5abc0278ca73", > "INFO [alembic.runtime.migration] Running upgrade kilo -> 30018084ec99", > "INFO [alembic.runtime.migration] Running upgrade 30018084ec99 -> 4ffceebfada", > "INFO [alembic.runtime.migration] Running upgrade 4ffceebfada -> 5498d17be016", > "INFO [alembic.runtime.migration] Running upgrade 5498d17be016 -> 2a16083502f3", > "INFO [alembic.runtime.migration] Running upgrade 2a16083502f3 -> 2e5352a0ad4d", > "INFO [alembic.runtime.migration] Running upgrade 2e5352a0ad4d -> 11926bcfe72d", > "INFO [alembic.runtime.migration] Running upgrade 11926bcfe72d -> 4af11ca47297", > "INFO [alembic.runtime.migration] Running upgrade 4af11ca47297 -> 1b294093239c", > "INFO [alembic.runtime.migration] Running upgrade 1b294093239c -> 8a6d8bdae39", > "INFO [alembic.runtime.migration] Running upgrade 8a6d8bdae39 -> 2b4c2465d44b", > "INFO [alembic.runtime.migration] Running upgrade 2b4c2465d44b -> e3278ee65050", > "INFO [alembic.runtime.migration] Running upgrade e3278ee65050 -> c6c112992c9", > "INFO [alembic.runtime.migration] Running upgrade c6c112992c9 -> 5ffceebfada", > "INFO [alembic.runtime.migration] Running upgrade 5ffceebfada -> 4ffceebfcdc", > "INFO [alembic.runtime.migration] Running upgrade 4ffceebfcdc -> 7bbb25278f53", > "INFO [alembic.runtime.migration] Running upgrade 7bbb25278f53 -> 89ab9a816d70", > "INFO [alembic.runtime.migration] Running upgrade 5abc0278ca73 -> d3435b514502", > "INFO [alembic.runtime.migration] Running upgrade d3435b514502 -> 30107ab6a3ee", > "INFO [alembic.runtime.migration] Running upgrade 30107ab6a3ee -> c415aab1c048", > "INFO [alembic.runtime.migration] Running upgrade c415aab1c048 -> a963b38d82f4", > "INFO [alembic.runtime.migration] Running upgrade a963b38d82f4 -> 3d0e74aa7d37", > "INFO [alembic.runtime.migration] Running upgrade 3d0e74aa7d37 -> 030a959ceafa", > "INFO [alembic.runtime.migration] Running upgrade 030a959ceafa -> a5648cfeeadf", > "INFO [alembic.runtime.migration] Running upgrade a5648cfeeadf -> 0f5bef0f87d4", > "INFO [alembic.runtime.migration] Running upgrade 0f5bef0f87d4 -> 67daae611b6e", > "INFO [alembic.runtime.migration] Running upgrade 89ab9a816d70 -> c879c5e1ee90", > "INFO [alembic.runtime.migration] Running upgrade c879c5e1ee90 -> 8fd3918ef6f4", > "INFO [alembic.runtime.migration] Running upgrade 8fd3918ef6f4 -> 4bcd4df1f426", > "INFO [alembic.runtime.migration] Running upgrade 4bcd4df1f426 -> b67e765a3524", > "INFO [alembic.runtime.migration] Running upgrade 67daae611b6e -> 6b461a21bcfc", > "INFO [alembic.runtime.migration] Running upgrade 6b461a21bcfc -> 5cd92597d11d", > "INFO [alembic.runtime.migration] Running upgrade 5cd92597d11d -> 929c968efe70", > "INFO [alembic.runtime.migration] Running upgrade 929c968efe70 -> a9c43481023c", > "INFO [alembic.runtime.migration] Running upgrade a9c43481023c -> 804a3c76314c", > "INFO [alembic.runtime.migration] Running upgrade 804a3c76314c -> 2b42d90729da", > "INFO [alembic.runtime.migration] Running upgrade 2b42d90729da -> 62c781cb6192", > "INFO [alembic.runtime.migration] Running upgrade 62c781cb6192 -> c8c222d42aa9", > "INFO [alembic.runtime.migration] Running upgrade c8c222d42aa9 -> 349b6fd605a6", > "INFO [alembic.runtime.migration] Running upgrade 349b6fd605a6 -> 7d32f979895f", > "INFO [alembic.runtime.migration] Running upgrade 7d32f979895f -> 594422d373ee", > 
"INFO [alembic.runtime.migration] Running upgrade 594422d373ee -> 61663558142c", > "INFO [alembic.runtime.migration] Running upgrade 61663558142c -> 867d39095bf4, port forwarding", > "INFO [alembic.runtime.migration] Running upgrade b67e765a3524 -> a84ccf28f06a", > "INFO [alembic.runtime.migration] Running upgrade a84ccf28f06a -> 7d9d8eeec6ad", > "INFO [alembic.runtime.migration] Running upgrade 7d9d8eeec6ad -> a8b517cff8ab", > "INFO [alembic.runtime.migration] Running upgrade a8b517cff8ab -> 3b935b28e7a0", > "INFO [alembic.runtime.migration] Running upgrade 3b935b28e7a0 -> b12a3ef66e62", > "INFO [alembic.runtime.migration] Running upgrade b12a3ef66e62 -> 97c25b0d2353", > "INFO [alembic.runtime.migration] Running upgrade 97c25b0d2353 -> 2e0d7a8a1586", > "INFO [alembic.runtime.migration] Running upgrade 2e0d7a8a1586 -> 5c85685d616d", > "INFO [alembic.runtime.migration] Running upgrade -> start_networking_bgpvpn, start networking_bgpvpn chain", > "Revision ID: start_networking_bgpvpn", > "Revises: None", > "Create Date: 2015-10-01 18:04:17.265514", > "INFO [alembic.runtime.migration] Running upgrade start_networking_bgpvpn -> 17d9fd4fddee, expand initial", > "Revision ID: 17d9fd4fddee", > "Revises: start_networking_bgpvpn", > "Create Date: 2015-10-01 17:35:11.000000", > "INFO [alembic.runtime.migration] Running upgrade 17d9fd4fddee -> 3600132c6147, Add router association table", > "INFO [alembic.runtime.migration] Running upgrade 3600132c6147 -> 0ab4049986b8, add indexes to tenant_id", > "INFO [alembic.runtime.migration] Running upgrade 0ab4049986b8 -> 9a6664f3b8d4, Add tables for port associations", > "INFO [alembic.runtime.migration] Running upgrade 9a6664f3b8d4 -> 39411aacf9b8, add vni to bgpvpn table", > "INFO [alembic.runtime.migration] Running upgrade 39411aacf9b8 -> 4610803bdf0d, Add 'extra-routes' to router association table", > "INFO [alembic.runtime.migration] Running upgrade 4610803bdf0d -> 666c706fea3b, Add local_pref to bgpvpns table", > "INFO [alembic.runtime.migration] Running upgrade 666c706fea3b -> 7a9482036ecd, Add standard attributes", > "INFO [alembic.runtime.migration] Running upgrade start_networking_bgpvpn -> 180baa4183e0, contract initial", > "Revision ID: 180baa4183e0", > "INFO [alembic.runtime.migration] Running upgrade 180baa4183e0 -> 23ce05e0a19f, rename tenant to project", > "INFO [alembic.runtime.migration] Running upgrade 23ce05e0a19f -> 9d7f1ae5fa56, Add standard FK and constraints, and defs for existing objects", > "INFO [alembic.runtime.migration] Running upgrade -> start_networking_l2gw, start networking-l2gw chain", > "INFO [alembic.runtime.migration] Running upgrade start_networking_l2gw -> 54c9c8fe22bf, DB_Models_for_OVSDB_Hardware_VTEP_Schema", > "INFO [alembic.runtime.migration] Running upgrade 54c9c8fe22bf -> 42438454c556, l2gateway_models", > "INFO [alembic.runtime.migration] Running upgrade 42438454c556 -> kilo, kilo", > "INFO [alembic.runtime.migration] Running upgrade kilo -> 60019185aa99, Initial no-op Liberty expand rule.", > "INFO [alembic.runtime.migration] Running upgrade 60019185aa99 -> 49ce408ac349, add indexes to tenant_id", > "INFO [alembic.runtime.migration] Running upgrade kilo -> 79919185aa99, Initial no-op Liberty contract rule.", > "INFO [alembic.runtime.migration] Running upgrade 79919185aa99 -> 2f533f7705dd, rename tenant to project", > "INFO [alembic.runtime.migration] Running upgrade -> b89a299e19f9, Initial odl db, branchpoint", > "INFO [alembic.runtime.migration] Running upgrade b89a299e19f9 -> 247501328046, Start of odl expand 
branch", > "INFO [alembic.runtime.migration] Running upgrade 247501328046 -> 37e242787ae5, OpenDaylight Neutron mechanism driver refactor", > "INFO [alembic.runtime.migration] Running upgrade 37e242787ae5 -> 703dbf02afde, Add journal maintenance table", > "INFO [alembic.runtime.migration] Running upgrade 703dbf02afde -> 3d560427d776, add sequence number to journal", > "INFO [alembic.runtime.migration] Running upgrade b89a299e19f9 -> 383acb0d38a0, Start of odl contract branch", > "INFO [alembic.runtime.migration] Running upgrade 383acb0d38a0 -> fa0c536252a5, update opendayligut journal", > "INFO [alembic.runtime.migration] Running upgrade fa0c536252a5 -> eccd865b7d3a, drop opendaylight_maintenance table", > "INFO [alembic.runtime.migration] Running upgrade eccd865b7d3a -> 7cbef5a56298, Drop created_at column", > "INFO [alembic.runtime.migration] Running upgrade 3d560427d776 -> 43af357fd638, Added version_id for optimistic locking", > "INFO [alembic.runtime.migration] Running upgrade 43af357fd638 -> 0472f56ff2fb, Add journal dependencies table", > "INFO [alembic.runtime.migration] Running upgrade 0472f56ff2fb -> 6f7dfb241354, create opendaylight_preiodic_task table", > "INFO [alembic.runtime.migration] Running upgrade -> start_neutron_fwaas, start neutron-fwaas chain", > "INFO [alembic.runtime.migration] Running upgrade start_neutron_fwaas -> 4202e3047e47, add_index_tenant_id", > "INFO [alembic.runtime.migration] Running upgrade 4202e3047e47 -> 540142f314f4, FWaaS router insertion", > "INFO [alembic.runtime.migration] Running upgrade 540142f314f4 -> 796c68dffbb, cisco_csr_fwaas", > "INFO [alembic.runtime.migration] Running upgrade 796c68dffbb -> kilo, kilo", > "INFO [alembic.runtime.migration] Running upgrade kilo -> c40fbb377ad, Initial Liberty no-op script.", > "INFO [alembic.runtime.migration] Running upgrade c40fbb377ad -> 4b47ea298795, add reject rule", > "INFO [alembic.runtime.migration] Running upgrade 4b47ea298795 -> d6a12e637e28, neutron-fwaas v2.0", > "INFO [alembic.runtime.migration] Running upgrade d6a12e637e28 -> 876782258a43, create_default_firewall_groups_table", > "INFO [alembic.runtime.migration] Running upgrade 876782258a43 -> f24e0d5e5bff, uniq_firewallgroupportassociation0port", > "INFO [alembic.runtime.migration] Running upgrade kilo -> 67c8e8d61d5, Initial Liberty no-op script.", > "INFO [alembic.runtime.migration] Running upgrade 67c8e8d61d5 -> 458aa42b14b, fw_table_alter script to make <name> column case sensitive", > "INFO [alembic.runtime.migration] Running upgrade 458aa42b14b -> f83a0b2964d0, rename tenant to project", > "INFO [alembic.runtime.migration] Running upgrade f83a0b2964d0 -> fd38cd995cc0, change shared attribute for firewall resource", > "INFO [alembic.runtime.migration] Running upgrade -> start_neutron_lbaas, start neutron-lbaas chain", > "INFO [alembic.runtime.migration] Running upgrade start_neutron_lbaas -> lbaasv2, lbaas version 2 api", > "INFO [alembic.runtime.migration] Running upgrade lbaasv2 -> 4deef6d81931, add provisioning and operating statuses", > "INFO [alembic.runtime.migration] Running upgrade 4deef6d81931 -> 4b6d8d5310b8, add_index_tenant_id", > "INFO [alembic.runtime.migration] Running upgrade 4b6d8d5310b8 -> 364f9b6064f0, agentv2", > "INFO [alembic.runtime.migration] Running upgrade 364f9b6064f0 -> lbaasv2_tls, lbaasv2 TLS", > "INFO [alembic.runtime.migration] Running upgrade lbaasv2_tls -> 4ba00375f715, edge_driver", > "INFO [alembic.runtime.migration] Running upgrade 4ba00375f715 -> kilo, kilo", > "INFO [alembic.runtime.migration] 
Running upgrade kilo -> 3345facd0452, Initial Liberty no-op expand script.", > "INFO [alembic.runtime.migration] Running upgrade 3345facd0452 -> 4a408dd491c2, Addition of Name column to lbaas_members and lbaas_healthmonitors table", > "INFO [alembic.runtime.migration] Running upgrade 4a408dd491c2 -> 3426acbc12de, Add flavor id", > "INFO [alembic.runtime.migration] Running upgrade 3426acbc12de -> 6aee0434f911, independent pools", > "INFO [alembic.runtime.migration] Running upgrade 6aee0434f911 -> 3543deab1547, add_l7_tables", > "INFO [alembic.runtime.migration] Running upgrade 3543deab1547 -> 62deca5010cd, Add tenant-id index for L7 tables", > "INFO [alembic.runtime.migration] Running upgrade kilo -> 130ebfdef43, Initial Liberty no-op contract revision.", > "INFO [alembic.runtime.migration] Running upgrade 130ebfdef43 -> 4b4dc6d5d843, rename tenant to project", > "INFO [alembic.runtime.migration] Running upgrade 4b4dc6d5d843 -> e6417a8b114d, Drop v1 tables", > "INFO [alembic.runtime.migration] Running upgrade 62deca5010cd -> 844352f9fe6f, Add healthmonitor max retries down", > "INFO [alembic.runtime.migration] Running upgrade -> kilo, kilo", > "INFO [alembic.runtime.migration] Running upgrade kilo -> 53a3254aa95e, Initial Liberty no-op expand script.", > "INFO [alembic.runtime.migration] Running upgrade 53a3254aa95e -> 28430956782d, nsxv3_security_groups", > "INFO [alembic.runtime.migration] Running upgrade 28430956782d -> 279b70ac3ae8, NSXv3 Add l2gwconnection table", > "INFO [alembic.runtime.migration] Running upgrade 279b70ac3ae8 -> 312211a5725f, nsxv_lbv2", > "INFO [alembic.runtime.migration] Running upgrade 312211a5725f -> 2af850eb3970, update nsxv tz binding type", > "INFO [alembic.runtime.migration] Running upgrade 2af850eb3970 -> 69fb78b33d41, NSXv add dns search domain to subnets", > "INFO [alembic.runtime.migration] Running upgrade 69fb78b33d41 -> 20483029f1ff, update nsx_v3 tz_network_bindings_binding_type", > "INFO [alembic.runtime.migration] Running upgrade 20483029f1ff -> 4c45bcadccf9, extend_secgroup_rule", > "INFO [alembic.runtime.migration] Running upgrade 4c45bcadccf9 -> 2c87aedb206f, nsxv_security_group_logging", > "INFO [alembic.runtime.migration] Running upgrade 2c87aedb206f -> 3e4dccfe6fb4, NSXv add dns search domain to subnets", > "INFO [alembic.runtime.migration] Running upgrade 3e4dccfe6fb4 -> 967462f585e1, add dvs_id column to neutron_nsx_network_mappings", > "INFO [alembic.runtime.migration] Running upgrade 967462f585e1 -> b7f41687cbad, nsxv3_qos_policy_mapping", > "INFO [alembic.runtime.migration] Running upgrade b7f41687cbad -> c288bb6a7252, NSXv add resource pool to the router bindings table", > "INFO [alembic.runtime.migration] Running upgrade c288bb6a7252 -> c644ec62c585, NSXv3 add nsx_service_bindings and nsx_dhcp_bindings tables", > "INFO [alembic.runtime.migration] Running upgrade c644ec62c585 -> 5e564e781d77, add nsx binding type", > "INFO [alembic.runtime.migration] Running upgrade 5e564e781d77 -> aede17d51d0f, add timestamp", > "INFO [alembic.runtime.migration] Running upgrade aede17d51d0f -> 7e46906f8997, lbaas foreignkeys", > "INFO [alembic.runtime.migration] Running upgrade 7e46906f8997 -> 86a55205337c, NSXv add availability zone to the router bindings table instead of", > "the resource pool column", > "INFO [alembic.runtime.migration] Running upgrade 86a55205337c -> 633514d94b93, Add support for TaaS", > "INFO [alembic.runtime.migration] Running upgrade 633514d94b93 -> 1b4eaffe4f31, NSX Adds a 'provider' attribute to security-group", > "INFO 
[alembic.runtime.migration] Running upgrade 1b4eaffe4f31 -> 6e6da8296c0e, Add support for IPAM in NSXv", > "INFO [alembic.runtime.migration] Running upgrade kilo -> 393bf843b96, Initial Liberty no-op contract script.", > "INFO [alembic.runtime.migration] Running upgrade 393bf843b96 -> 3c88bdea3054, nsxv_vdr_dhcp_binding.py", > "INFO [alembic.runtime.migration] Running upgrade 3c88bdea3054 -> 5ed1ffbc0d2a, nsxv_security_group_logging", > "INFO [alembic.runtime.migration] Running upgrade 5ed1ffbc0d2a -> 081af0e396d7, nsxv3_secgroup_local_ip_prefix", > "INFO [alembic.runtime.migration] Running upgrade 081af0e396d7 -> dbe29d208ac6, NSXv add DHCP MTU to subnets", > "INFO [alembic.runtime.migration] Running upgrade dbe29d208ac6 -> d49ac91b560e, Support shared pools with NSXv LBaaSv2 driver", > "INFO [alembic.runtime.migration] Running upgrade d49ac91b560e -> 5c8f451290b7, nsxv_subnet_ipam rename to nsx_subnet_ipam", > "INFO [alembic.runtime.migration] Running upgrade 5c8f451290b7 -> 14a89ddf96e2, NSX Adds a 'availability_zone' attribute to internal-networks table", > "INFO [alembic.runtime.migration] Running upgrade 14a89ddf96e2 -> 8c0a81a07691, Update the primary key constraint of nsx_subnet_ipam", > "INFO [alembic.runtime.migration] Running upgrade 8c0a81a07691 -> 84ceffa27115, remove the foreign key constrain from nsxv3_qos_policy_mapping", > "INFO [alembic.runtime.migration] Running upgrade 84ceffa27115 -> a1be06050b41, update nsx binding types", > "INFO [alembic.runtime.migration] Running upgrade a1be06050b41 -> 717f7f63a219, nsxv3_lbaas_l7policy", > "INFO [alembic.runtime.migration] Running upgrade 6e6da8296c0e -> 7b5ec3caa9a4, Fix the availability zones default value in the router bindings table", > "INFO [alembic.runtime.migration] Running upgrade 7b5ec3caa9a4 -> e816d4fe9d4f, NSX Adds a 'policy' attribute to security-group", > "INFO [alembic.runtime.migration] Running upgrade e816d4fe9d4f -> dd9fe5a3a526, NSX Adds certificate table for client certificate management", > "INFO [alembic.runtime.migration] Running upgrade dd9fe5a3a526 -> 01a33f93f5fd, nsxv_lbv2_l7policy", > "INFO [alembic.runtime.migration] Running upgrade 01a33f93f5fd -> e4c503f4133f, Port vnic_type support", > "INFO [alembic.runtime.migration] Running upgrade e4c503f4133f -> 7c4704ad37df, Fix NSX Lbaas L7 policy table creation", > "INFO [alembic.runtime.migration] Running upgrade 7c4704ad37df -> 8699700cd95c, nsxv_bgp_speaker_mapping", > "INFO [alembic.runtime.migration] Running upgrade 8699700cd95c -> 53eb497903a4, Drop VDR DHCP bindings table", > "INFO [alembic.runtime.migration] Running upgrade 53eb497903a4 -> ea7a72ab9643", > "INFO [alembic.runtime.migration] Running upgrade ea7a72ab9643 -> 9799427fc0e1, nsx map project to plugin", > "INFO [alembic.runtime.migration] Running upgrade 9799427fc0e1 -> 0dbeda408e41, nsxv3_vpn_mapping", > "stdout: d8e5c91562e42b61652df7a99dbea9612e97a900a12b43049a8ae0699597a107", > "stdout: 490c78e7a5f7e57c871ab1580596f71856a6f45f43a2077155425cf0d171a756", > "stdout: be7f602bed43cf5a8dcda6c138e8ce2bb7542c3d7fa879103b2467b1c39b4850", > "stdout: (cellv2) Creating default cell_v2 cell", > "stdout: 552ca0b5997773171b8f7db9c87adb13e71843365ce8c413d50d2ef65f20ac0c", > "stdout: 159e3f56b66db47e51fe16f32116d6568e311ebf3bfc7d601f96a0a6efca0a70", > "stderr: /usr/lib/python2.7/site-packages/pymysql/cursors.py:166: Warning: (1831, u'Duplicate index `block_device_mapping_instance_uuid_virtual_name_device_name_idx`. 
This is deprecated and will be disallowed in a future release.')", > " result = self._query(query)", > "/usr/lib/python2.7/site-packages/pymysql/cursors.py:166: Warning: (1831, u'Duplicate index `uniq_instances0uuid`. This is deprecated and will be disallowed in a future release.')", > "stdout: 26eb191c17b1ff6b01c7a579da267954aef054a81e5de9735fb816a90076361a" > ] >} > >TASK [Check if /var/lib/docker-puppet/docker-puppet-tasks3.json exists] ******** >Friday 21 September 2018 08:40:02 -0400 (0:00:00.231) 0:23:25.302 ****** >ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >ok: [controller-0] => {"changed": false, "stat": {"atime": 1537532467.4023862, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "us-ascii", "checksum": "7f972a464871ecaf99a8c646963e44a31a095a8a", "ctime": 1537532467.4063864, "dev": 64514, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 35651878, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "text/plain", "mode": "0600", "mtime": 1537532467.2333863, "nlink": 1, "path": "/var/lib/docker-puppet/docker-puppet-tasks3.json", "pw_name": "root", "readable": true, "rgrp": false, "roth": false, "rusr": true, "size": 397, "uid": 0, "version": "1214663808", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}} >ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} > >TASK [Run docker-puppet tasks (bootstrap tasks) for step 3] ******************** >Friday 21 September 2018 08:40:03 -0400 (0:00:00.331) 0:23:25.634 ****** >skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >ok: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} > > >TASK [Debug output for task which failed: Run docker-puppet tasks (bootstrap tasks) for step 3] *** >Friday 21 September 2018 08:42:48 -0400 (0:02:44.921) 0:26:10.555 ****** >ok: [controller-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "2018-09-21 12:40:03,398 INFO: 91741 -- Running docker-puppet", > "2018-09-21 12:40:03,398 INFO: 91741 -- Service compilation completed.", > "2018-09-21 12:40:03,399 INFO: 91741 -- Starting multiprocess configuration steps. 
Using 8 processes.", > "2018-09-21 12:40:03,413 INFO: 91742 -- Starting configuration of keystone_init_tasks using image 192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-20.1", > "2018-09-21 12:40:03,415 INFO: 91742 -- Removing container: docker-puppet-keystone_init_tasks", > "2018-09-21 12:40:03,465 INFO: 91742 -- Image already exists: 192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-20.1", > "2018-09-21 12:42:47,827 INFO: 91742 -- Removing container: docker-puppet-keystone_init_tasks", > "2018-09-21 12:42:47,892 INFO: 91742 -- Finished processing puppet configs for keystone_init_tasks" > ] >} >skipping: [compute-0] => {} >skipping: [ceph-0] => {} > >PLAY [External deployment step 4] ********************************************** > >TASK [set blacklisted_hostnames] *********************************************** >Friday 21 September 2018 08:42:48 -0400 (0:00:00.128) 0:26:10.683 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [create ceph-ansible temp dirs] ******************************************* >Friday 21 September 2018 08:42:48 -0400 (0:00:00.039) 0:26:10.723 ****** >skipping: [undercloud] => (item=/var/lib/mistral/overcloud/ceph-ansible/group_vars) => {"changed": false, "item": "/var/lib/mistral/overcloud/ceph-ansible/group_vars", "skip_reason": "Conditional result was False"} >skipping: [undercloud] => (item=/var/lib/mistral/overcloud/ceph-ansible/host_vars) => {"changed": false, "item": "/var/lib/mistral/overcloud/ceph-ansible/host_vars", "skip_reason": "Conditional result was False"} >skipping: [undercloud] => (item=/var/lib/mistral/overcloud/ceph-ansible/fetch_dir) => {"changed": false, "item": "/var/lib/mistral/overcloud/ceph-ansible/fetch_dir", "skip_reason": "Conditional result was False"} > >TASK [generate inventory] ****************************************************** >Friday 21 September 2018 08:42:48 -0400 (0:00:00.057) 0:26:10.780 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [set ceph-ansible group vars all] ***************************************** >Friday 21 September 2018 08:42:48 -0400 (0:00:00.041) 0:26:10.822 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [generate ceph-ansible group vars all] ************************************ >Friday 21 September 2018 08:42:48 -0400 (0:00:00.035) 0:26:10.858 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [set ceph-ansible extra vars] ********************************************* >Friday 21 September 2018 08:42:48 -0400 (0:00:00.035) 0:26:10.893 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [generate ceph-ansible extra vars] **************************************** >Friday 21 September 2018 08:42:48 -0400 (0:00:00.041) 0:26:10.935 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [generate nodes-uuid data file] ******************************************* >Friday 21 September 2018 08:42:48 -0400 (0:00:00.036) 0:26:10.972 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [generate nodes-uuid playbook] ******************************************** >Friday 21 September 2018 08:42:48 -0400 (0:00:00.037) 0:26:11.009 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was 
False"} > >TASK [run nodes-uuid] ********************************************************** >Friday 21 September 2018 08:42:48 -0400 (0:00:00.041) 0:26:11.051 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [set ceph-ansible params from Heat] *************************************** >Friday 21 September 2018 08:42:48 -0400 (0:00:00.041) 0:26:11.092 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [set ceph-ansible playbooks] ********************************************** >Friday 21 September 2018 08:42:48 -0400 (0:00:00.044) 0:26:11.136 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [set ceph-ansible command] ************************************************ >Friday 21 September 2018 08:42:48 -0400 (0:00:00.055) 0:26:11.192 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [run ceph-ansible] ******************************************************** >Friday 21 September 2018 08:42:48 -0400 (0:00:00.035) 0:26:11.228 ****** >skipping: [undercloud] => (item=/usr/share/ceph-ansible/site-docker.yml.sample) => {"changed": false, "item": "/usr/share/ceph-ansible/site-docker.yml.sample", "skip_reason": "Conditional result was False"} > >TASK [set ceph-ansible group vars mgrs] **************************************** >Friday 21 September 2018 08:42:48 -0400 (0:00:00.050) 0:26:11.279 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [generate ceph-ansible group vars mgrs] *********************************** >Friday 21 September 2018 08:42:48 -0400 (0:00:00.046) 0:26:11.325 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [set ceph-ansible group vars mons] **************************************** >Friday 21 September 2018 08:42:48 -0400 (0:00:00.038) 0:26:11.363 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [generate ceph-ansible group vars mons] *********************************** >Friday 21 September 2018 08:42:48 -0400 (0:00:00.039) 0:26:11.403 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [set_fact] **************************************************************** >Friday 21 September 2018 08:42:48 -0400 (0:00:00.038) 0:26:11.442 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Create temp file for prepare parameter] ********************************** >Friday 21 September 2018 08:42:48 -0400 (0:00:00.040) 0:26:11.482 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Create temp file for role data] ****************************************** >Friday 21 September 2018 08:42:49 -0400 (0:00:00.043) 0:26:11.526 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Write ContainerImagePrepare parameter file] ****************************** >Friday 21 September 2018 08:42:49 -0400 (0:00:00.053) 0:26:11.580 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Write role data file] **************************************************** >Friday 21 September 2018 08:42:49 -0400 (0:00:00.050) 0:26:11.630 ****** >skipping: 
[undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Run tripleo-container-image-prepare] ************************************* >Friday 21 September 2018 08:42:49 -0400 (0:00:00.063) 0:26:11.694 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Delete param file] ******************************************************* >Friday 21 September 2018 08:42:49 -0400 (0:00:00.053) 0:26:11.747 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Delete role file] ******************************************************** >Friday 21 September 2018 08:42:49 -0400 (0:00:00.057) 0:26:11.805 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [set ceph-ansible group vars clients] ************************************* >Friday 21 September 2018 08:42:49 -0400 (0:00:00.048) 0:26:11.853 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [generate ceph-ansible group vars clients] ******************************** >Friday 21 September 2018 08:42:49 -0400 (0:00:00.052) 0:26:11.905 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [set ceph-ansible group vars osds] **************************************** >Friday 21 September 2018 08:42:49 -0400 (0:00:00.066) 0:26:11.971 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [generate ceph-ansible group vars osds] *********************************** >Friday 21 September 2018 08:42:49 -0400 (0:00:00.050) 0:26:12.022 ****** >skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} > >PLAY [Overcloud deploy step tasks for 4] *************************************** > >PLAY [Overcloud common deploy step tasks 4] ************************************ > >TASK [Create /var/lib/tripleo-config directory] ******************************** >Friday 21 September 2018 08:42:49 -0400 (0:00:00.076) 0:26:12.099 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Write the puppet step_config manifest] *********************************** >Friday 21 September 2018 08:42:49 -0400 (0:00:00.177) 0:26:12.276 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Create /var/lib/docker-puppet] ******************************************* >Friday 21 September 2018 08:42:49 -0400 (0:00:00.110) 0:26:12.387 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Write docker-puppet.json file] ******************************************* >Friday 21 September 2018 08:42:50 -0400 (0:00:00.112) 0:26:12.499 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was 
False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Create /var/lib/docker-config-scripts] *********************************** >Friday 21 September 2018 08:42:50 -0400 (0:00:00.156) 0:26:12.656 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Clean old /var/lib/docker-container-startup-configs.json file] *********** >Friday 21 September 2018 08:42:50 -0400 (0:00:00.115) 0:26:12.772 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Write docker config scripts] ********************************************* >Friday 21 September 2018 08:42:50 -0400 (0:00:00.122) 0:26:12.894 ****** >skipping: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nexport OS_PROJECT_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_domain_name)\nexport OS_USER_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken user_domain_name)\nexport OS_PROJECT_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_name)\nexport OS_USERNAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken username)\nexport OS_PASSWORD=$(crudini --get /etc/nova/nova.conf keystone_authtoken password)\nexport OS_AUTH_URL=$(crudini --get /etc/nova/nova.conf keystone_authtoken auth_url)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho "(cellv2) Running cell_v2 host discovery"\ntimeout=600\nloop_wait=30\ndeclare -A discoverable_hosts\nfor host in $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e \'/^nil$/d\' | tr "," " "); do discoverable_hosts[$host]=1; done\ntimeout_at=$(( $(date +"%s") + ${timeout} ))\necho "(cellv2) Waiting ${timeout} seconds for hosts to register"\nfinished=0\nwhile : ; do\n for host in $(openstack -q compute service list -c \'Host\' -c \'Zone\' -f value | awk \'$2 != "internal" { print $1 }\'); do\n if (( discoverable_hosts[$host] == 1 )); then\n echo "(cellv2) compute node $host has registered"\n unset discoverable_hosts[$host]\n fi\n done\n finished=1\n for host in "${!discoverable_hosts[@]}"; do\n if (( ${discoverable_hosts[$host]} == 1 )); then\n echo "(cellv2) compute node $host has not registered"\n finished=0\n fi\n done\n remaining=$(( $timeout_at - $(date +"%s") ))\n if (( $finished == 1 )); then\n echo "(cellv2) All nodes registered"\n break\n elif (( $remaining <= 0 )); then\n echo "(cellv2) WARNING: timeout waiting for nodes to register, running host discovery regardless"\n echo "(cellv2) Expected host list:" $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e \'/^nil$/d\' | sort -u | tr \',\' \' \')\n echo "(cellv2) Detected host list:" $(openstack -q compute service list -c \'Host\' -c \'Zone\' -f value | awk \'$2 != "internal" { print $1 }\' | sort -u | tr \'\\n\', \' \')\n break\n else\n echo "(cellv2) Waiting ${remaining} seconds for hosts to register"\n sleep $loop_wait\n fi\ndone\necho "(cellv2) Running host discovery..."\nsu nova -s /bin/bash -c "/usr/bin/nova-manage 
cell_v2 discover_hosts --by-service --verbose"\n', 'mode': u'0700'}, 'key': u'nova_api_discover_hosts.sh'}) => {"changed": false, "item": {"key": "nova_api_discover_hosts.sh", "value": {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_domain_name)\nexport OS_USER_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken user_domain_name)\nexport OS_PROJECT_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_name)\nexport OS_USERNAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken username)\nexport OS_PASSWORD=$(crudini --get /etc/nova/nova.conf keystone_authtoken password)\nexport OS_AUTH_URL=$(crudini --get /etc/nova/nova.conf keystone_authtoken auth_url)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho \"(cellv2) Running cell_v2 host discovery\"\ntimeout=600\nloop_wait=30\ndeclare -A discoverable_hosts\nfor host in $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e '/^nil$/d' | tr \",\" \" \"); do discoverable_hosts[$host]=1; done\ntimeout_at=$(( $(date +\"%s\") + ${timeout} ))\necho \"(cellv2) Waiting ${timeout} seconds for hosts to register\"\nfinished=0\nwhile : ; do\n for host in $(openstack -q compute service list -c 'Host' -c 'Zone' -f value | awk '$2 != \"internal\" { print $1 }'); do\n if (( discoverable_hosts[$host] == 1 )); then\n echo \"(cellv2) compute node $host has registered\"\n unset discoverable_hosts[$host]\n fi\n done\n finished=1\n for host in \"${!discoverable_hosts[@]}\"; do\n if (( ${discoverable_hosts[$host]} == 1 )); then\n echo \"(cellv2) compute node $host has not registered\"\n finished=0\n fi\n done\n remaining=$(( $timeout_at - $(date +\"%s\") ))\n if (( $finished == 1 )); then\n echo \"(cellv2) All nodes registered\"\n break\n elif (( $remaining <= 0 )); then\n echo \"(cellv2) WARNING: timeout waiting for nodes to register, running host discovery regardless\"\n echo \"(cellv2) Expected host list:\" $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e '/^nil$/d' | sort -u | tr ',' ' ')\n echo \"(cellv2) Detected host list:\" $(openstack -q compute service list -c 'Host' -c 'Zone' -f value | awk '$2 != \"internal\" { print $1 }' | sort -u | tr '\\n', ' ')\n break\n else\n echo \"(cellv2) Waiting ${remaining} seconds for hosts to register\"\n sleep $loop_wait\n fi\ndone\necho \"(cellv2) Running host discovery...\"\nsu nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 discover_hosts --by-service --verbose\"\n", "mode": "0700"}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho "Check if secret already exists"\nsecret_href=$(openstack secret list --name swift_root_secret_uuid)\nrc=$?\nif [[ $rc != 0 ]]; then\n echo "Failed to check secrets, check if Barbican in enabled and responding properly"\n exit $rc;\nfi\nif [ -z 
"$secret_href" ]; then\n echo "Create new secret"\n order_href=$(openstack secret order create --name swift_root_secret_uuid --payload-content-type="application/octet-stream" --algorithm aes --bit-length 256 --mode ctr key -f value -c "Order href")\nfi\n', 'mode': u'0700'}, 'key': u'create_swift_secret.sh'}) => {"changed": false, "item": {"key": "create_swift_secret.sh", "value": {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho \"Check if secret already exists\"\nsecret_href=$(openstack secret list --name swift_root_secret_uuid)\nrc=$?\nif [[ $rc != 0 ]]; then\n echo \"Failed to check secrets, check if Barbican in enabled and responding properly\"\n exit $rc;\nfi\nif [ -z \"$secret_href\" ]; then\n echo \"Create new secret\"\n order_href=$(openstack secret order create --name swift_root_secret_uuid --payload-content-type=\"application/octet-stream\" --algorithm aes --bit-length 256 --mode ctr key -f value -c \"Order href\")\nfi\n", "mode": "0700"}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n', 'mode': u'0755'}, 'key': u'neutron_ovs_agent_launcher.sh'}) => {"changed": false, "item": {"key": "neutron_ovs_agent_launcher.sh", "value": {"content": "#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n", "mode": "0755"}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster 
username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\necho "retrieve key_id"\nloop_wait=2\nfor i in {0..5}; do\n #TODO update uuid from mistral here too\n secret_href=$(openstack secret list --name swift_root_secret_uuid)\n if [ "$secret_href" ]; then\n echo "set key_id in keymaster.conf"\n secret_href=$(openstack secret list --name swift_root_secret_uuid -f value -c "Secret href")\n crudini --set /etc/swift/keymaster.conf kms_keymaster key_id ${secret_href##*/}\n exit 0\n else\n echo "no key, wait for $loop_wait and check again"\n sleep $loop_wait\n ((loop_wait++))\n fi\ndone\necho "Failed to set secret in keymaster.conf, check if Barbican is enabled and responding properly"\nexit 1\n', 'mode': u'0700'}, 'key': u'set_swift_keymaster_key_id.sh'}) => {"changed": false, "item": {"key": "set_swift_keymaster_key_id.sh", "value": {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\necho \"retrieve key_id\"\nloop_wait=2\nfor i in {0..5}; do\n #TODO update uuid from mistral here too\n secret_href=$(openstack secret list --name swift_root_secret_uuid)\n if [ \"$secret_href\" ]; then\n echo \"set key_id in keymaster.conf\"\n secret_href=$(openstack secret list --name swift_root_secret_uuid -f value -c \"Secret href\")\n crudini --set /etc/swift/keymaster.conf kms_keymaster key_id ${secret_href##*/}\n exit 0\n else\n echo \"no key, wait for $loop_wait and check again\"\n sleep $loop_wait\n ((loop_wait++))\n fi\ndone\necho \"Failed to set secret in keymaster.conf, check if Barbican is enabled and responding properly\"\nexit 1\n", "mode": "0700"}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nset -eux\nSTEP=$1\nTAGS=$2\nCONFIG=$3\nEXTRA_ARGS=${4:-\'\'}\nif [ -d /tmp/puppet-etc ]; then\n # ignore copy failures as these may be the same file depending on docker mounts\n cp -a /tmp/puppet-etc/* /etc/puppet || true\nfi\necho "{\\"step\\": ${STEP}}" > /etc/puppet/hieradata/docker.json\nexport FACTER_uuid=docker\nset +e\npuppet apply $EXTRA_ARGS \\\n --verbose \\\n --detailed-exitcodes \\\n --summarize \\\n --color=false \\\n --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules \\\n --tags $TAGS \\\n -e "${CONFIG}"\nrc=$?\nset -e\nset +ux\nif [ $rc -eq 2 -o $rc -eq 0 ]; then\n exit 0\nfi\nexit $rc\n', 'mode': u'0700'}, 'key': u'docker_puppet_apply.sh'}) => {"changed": false, "item": {"key": "docker_puppet_apply.sh", "value": {"content": "#!/bin/bash\nset -eux\nSTEP=$1\nTAGS=$2\nCONFIG=$3\nEXTRA_ARGS=${4:-''}\nif [ -d /tmp/puppet-etc ]; then\n # ignore copy failures as these may be the same file depending on docker mounts\n cp -a /tmp/puppet-etc/* /etc/puppet || true\nfi\necho \"{\\\"step\\\": ${STEP}}\" > 
/etc/puppet/hieradata/docker.json\nexport FACTER_uuid=docker\nset +e\npuppet apply $EXTRA_ARGS \\\n --verbose \\\n --detailed-exitcodes \\\n --summarize \\\n --color=false \\\n --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules \\\n --tags $TAGS \\\n -e \"${CONFIG}\"\nrc=$?\nset -e\nset +ux\nif [ $rc -eq 2 -o $rc -eq 0 ]; then\n exit 0\nfi\nexit $rc\n", "mode": "0700"}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nDEFID=$(nova-manage cell_v2 list_cells | sed -e \'1,3d\' -e \'$d\' | awk -F \' *| *\' \'$2 == "default" {print $4}\')\nif [ "$DEFID" ]; then\n echo "(cellv2) Updating default cell_v2 cell $DEFID"\n su nova -s /bin/bash -c "/usr/bin/nova-manage cell_v2 update_cell --cell_uuid $DEFID --name=default"\nelse\n echo "(cellv2) Creating default cell_v2 cell"\n su nova -s /bin/bash -c "/usr/bin/nova-manage cell_v2 create_cell --name=default"\nfi\n', 'mode': u'0700'}, 'key': u'nova_api_ensure_default_cell.sh'}) => {"changed": false, "item": {"key": "nova_api_ensure_default_cell.sh", "value": {"content": "#!/bin/bash\nDEFID=$(nova-manage cell_v2 list_cells | sed -e '1,3d' -e '$d' | awk -F ' *| *' '$2 == \"default\" {print $4}')\nif [ \"$DEFID\" ]; then\n echo \"(cellv2) Updating default cell_v2 cell $DEFID\"\n su nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 update_cell --cell_uuid $DEFID --name=default\"\nelse\n echo \"(cellv2) Creating default cell_v2 cell\"\n su nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 create_cell --name=default\"\nfi\n", "mode": "0700"}}, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item={'value': {'content': u'#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n', 'mode': u'0755'}, 'key': u'neutron_ovs_agent_launcher.sh'}) => {"changed": false, "item": {"key": "neutron_ovs_agent_launcher.sh", "value": {"content": "#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n", "mode": "0755"}}, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item={'value': {'content': u'#!/usr/bin/env python\n#\n# Copyright 2018 Red Hat Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the "License"); you may\n# not use this file except in compliance with the License. 
You may obtain\n# a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT\n# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the\n# License for the specific language governing permissions and limitations\n# under the License.\nfrom __future__ import print_function\nimport logging\nimport os\nimport pwd\nimport stat\nimport sys\n\nlogging.basicConfig(stream=sys.stdout, level=logging.DEBUG)\nLOG = logging.getLogger(\'nova_statedir\')\n\n\nclass PathManager(object):\n """Helper class to manipulate ownership of a given path"""\n def __init__(self, path):\n self.path = path\n self._update()\n\n def _update(self):\n statinfo = os.stat(self.path)\n self.is_dir = stat.S_ISDIR(statinfo.st_mode)\n self.uid = statinfo.st_uid\n self.gid = statinfo.st_gid\n\n def __str__(self):\n return "uid: {} gid: {} path: {}{}".format(\n self.uid,\n self.gid,\n self.path,\n \'/\' if self.is_dir else \'\'\n )\n\n def has_owner(self, uid, gid):\n return self.uid == uid and self.gid == gid\n\n def has_either(self, uid, gid):\n return self.uid == uid or self.gid == gid\n\n def chown(self, uid, gid):\n target_uid = -1\n target_gid = -1\n if self.uid != uid:\n target_uid = uid\n if self.gid != gid:\n target_gid = gid\n if (target_uid, target_gid) != (-1, -1):\n LOG.info(\'Changing ownership of %s from %d:%d to %d:%d\',\n self.path,\n self.uid,\n self.gid,\n self.uid if target_uid == -1 else target_uid,\n self.gid if target_gid == -1 else target_gid)\n os.chown(self.path, target_uid, target_gid)\n self._update()\n else:\n LOG.info(\'Ownership of %s already %d:%d\',\n self.path,\n uid,\n gid)\n\n\nclass NovaStatedirOwnershipManager(object):\n """Class to manipulate the ownership of the nova statedir (/var/lib/nova).\n\n The nova uid/gid differ on the host and container images. An upgrade\n that switches from host systemd services to docker requires a change in\n ownership. Previously this was a naive recursive chown, however this\n causes issues if nova instance are shared via an NFS mount: any open\n filehandles in qemu/libvirt fail with an I/O error (LP1778465).\n\n Instead the upgrade/FFU ansible tasks now lay down a marker file when\n stopping and disabling the host systemd services. We use this file to\n determine the host nova uid/gid. We then walk the tree and update any\n files that have the host uid/gid to the docker nova uid/gid. As files\n owned by root/qemu etc... are ignored this avoids the issues with open\n filehandles. The marker is removed once the tree has been walked.\n\n For subsequent runs, or for a new deployment, we simply ensure that the\n docker nova user/group owns all directories. 
This is required as the\n directories are created with root ownership in host_prep_tasks (the\n docker nova uid/gid is not known in this context).\n """\n def __init__(self, statedir, upgrade_marker=\'upgrade_marker\',\n nova_user=\'nova\'):\n self.statedir = statedir\n self.nova_user = nova_user\n\n self.upgrade_marker_path = os.path.join(statedir, upgrade_marker)\n self.upgrade = os.path.exists(self.upgrade_marker_path)\n\n self.target_uid, self.target_gid = self._get_nova_ids()\n self.previous_uid, self.previous_gid = self._get_previous_nova_ids()\n self.id_change = (self.target_uid, self.target_gid) != \\\n (self.previous_uid, self.previous_gid)\n\n def _get_nova_ids(self):\n nova_uid, nova_gid = pwd.getpwnam(self.nova_user)[2:4]\n return nova_uid, nova_gid\n\n def _get_previous_nova_ids(self):\n if self.upgrade:\n statinfo = os.stat(self.upgrade_marker_path)\n return statinfo.st_uid, statinfo.st_gid\n else:\n return self._get_nova_ids()\n\n def _walk(self, top):\n for f in os.listdir(top):\n pathname = os.path.join(top, f)\n\n if pathname == self.upgrade_marker_path:\n continue\n\n pathinfo = PathManager(pathname)\n LOG.info("Checking %s", pathinfo)\n if pathinfo.is_dir:\n # Always chown the directories\n pathinfo.chown(self.target_uid, self.target_gid)\n self._walk(pathname)\n elif self.id_change:\n # Only chown files if it\'s an upgrade and the file is owned by\n # the host nova uid/gid\n pathinfo.chown(\n self.target_uid if pathinfo.uid == self.previous_uid\n else pathinfo.uid,\n self.target_gid if pathinfo.gid == self.previous_gid\n else pathinfo.gid\n )\n\n def run(self):\n LOG.info(\'Applying nova statedir ownership\')\n LOG.info(\'Target ownership for %s: %d:%d\',\n self.statedir,\n self.target_uid,\n self.target_gid)\n\n pathinfo = PathManager(self.statedir)\n LOG.info("Checking %s", pathinfo)\n pathinfo.chown(self.target_uid, self.target_gid)\n\n self._walk(self.statedir)\n\n if self.upgrade:\n LOG.info(\'Removing upgrade_marker %s\',\n self.upgrade_marker_path)\n os.unlink(self.upgrade_marker_path)\n\n LOG.info(\'Nova statedir ownership complete\')\n\nif __name__ == \'__main__\':\n NovaStatedirOwnershipManager(\'/var/lib/nova\').run()\n', 'mode': u'0700'}, 'key': u'nova_statedir_ownership.py'}) => {"changed": false, "item": {"key": "nova_statedir_ownership.py", "value": {"content": "#!/usr/bin/env python\n#\n# Copyright 2018 Red Hat Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"); you may\n# not use this file except in compliance with the License. You may obtain\n# a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT\n# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the\n# License for the specific language governing permissions and limitations\n# under the License.\nfrom __future__ import print_function\nimport logging\nimport os\nimport pwd\nimport stat\nimport sys\n\nlogging.basicConfig(stream=sys.stdout, level=logging.DEBUG)\nLOG = logging.getLogger('nova_statedir')\n\n\nclass PathManager(object):\n \"\"\"Helper class to manipulate ownership of a given path\"\"\"\n def __init__(self, path):\n self.path = path\n self._update()\n\n def _update(self):\n statinfo = os.stat(self.path)\n self.is_dir = stat.S_ISDIR(statinfo.st_mode)\n self.uid = statinfo.st_uid\n self.gid = statinfo.st_gid\n\n def __str__(self):\n return \"uid: {} gid: {} path: {}{}\".format(\n self.uid,\n self.gid,\n self.path,\n '/' if self.is_dir else ''\n )\n\n def has_owner(self, uid, gid):\n return self.uid == uid and self.gid == gid\n\n def has_either(self, uid, gid):\n return self.uid == uid or self.gid == gid\n\n def chown(self, uid, gid):\n target_uid = -1\n target_gid = -1\n if self.uid != uid:\n target_uid = uid\n if self.gid != gid:\n target_gid = gid\n if (target_uid, target_gid) != (-1, -1):\n LOG.info('Changing ownership of %s from %d:%d to %d:%d',\n self.path,\n self.uid,\n self.gid,\n self.uid if target_uid == -1 else target_uid,\n self.gid if target_gid == -1 else target_gid)\n os.chown(self.path, target_uid, target_gid)\n self._update()\n else:\n LOG.info('Ownership of %s already %d:%d',\n self.path,\n uid,\n gid)\n\n\nclass NovaStatedirOwnershipManager(object):\n \"\"\"Class to manipulate the ownership of the nova statedir (/var/lib/nova).\n\n The nova uid/gid differ on the host and container images. An upgrade\n that switches from host systemd services to docker requires a change in\n ownership. Previously this was a naive recursive chown, however this\n causes issues if nova instance are shared via an NFS mount: any open\n filehandles in qemu/libvirt fail with an I/O error (LP1778465).\n\n Instead the upgrade/FFU ansible tasks now lay down a marker file when\n stopping and disabling the host systemd services. We use this file to\n determine the host nova uid/gid. We then walk the tree and update any\n files that have the host uid/gid to the docker nova uid/gid. As files\n owned by root/qemu etc... are ignored this avoids the issues with open\n filehandles. The marker is removed once the tree has been walked.\n\n For subsequent runs, or for a new deployment, we simply ensure that the\n docker nova user/group owns all directories. 
This is required as the\n directories are created with root ownership in host_prep_tasks (the\n docker nova uid/gid is not known in this context).\n \"\"\"\n def __init__(self, statedir, upgrade_marker='upgrade_marker',\n nova_user='nova'):\n self.statedir = statedir\n self.nova_user = nova_user\n\n self.upgrade_marker_path = os.path.join(statedir, upgrade_marker)\n self.upgrade = os.path.exists(self.upgrade_marker_path)\n\n self.target_uid, self.target_gid = self._get_nova_ids()\n self.previous_uid, self.previous_gid = self._get_previous_nova_ids()\n self.id_change = (self.target_uid, self.target_gid) != \\\n (self.previous_uid, self.previous_gid)\n\n def _get_nova_ids(self):\n nova_uid, nova_gid = pwd.getpwnam(self.nova_user)[2:4]\n return nova_uid, nova_gid\n\n def _get_previous_nova_ids(self):\n if self.upgrade:\n statinfo = os.stat(self.upgrade_marker_path)\n return statinfo.st_uid, statinfo.st_gid\n else:\n return self._get_nova_ids()\n\n def _walk(self, top):\n for f in os.listdir(top):\n pathname = os.path.join(top, f)\n\n if pathname == self.upgrade_marker_path:\n continue\n\n pathinfo = PathManager(pathname)\n LOG.info(\"Checking %s\", pathinfo)\n if pathinfo.is_dir:\n # Always chown the directories\n pathinfo.chown(self.target_uid, self.target_gid)\n self._walk(pathname)\n elif self.id_change:\n # Only chown files if it's an upgrade and the file is owned by\n # the host nova uid/gid\n pathinfo.chown(\n self.target_uid if pathinfo.uid == self.previous_uid\n else pathinfo.uid,\n self.target_gid if pathinfo.gid == self.previous_gid\n else pathinfo.gid\n )\n\n def run(self):\n LOG.info('Applying nova statedir ownership')\n LOG.info('Target ownership for %s: %d:%d',\n self.statedir,\n self.target_uid,\n self.target_gid)\n\n pathinfo = PathManager(self.statedir)\n LOG.info(\"Checking %s\", pathinfo)\n pathinfo.chown(self.target_uid, self.target_gid)\n\n self._walk(self.statedir)\n\n if self.upgrade:\n LOG.info('Removing upgrade_marker %s',\n self.upgrade_marker_path)\n os.unlink(self.upgrade_marker_path)\n\n LOG.info('Nova statedir ownership complete')\n\nif __name__ == '__main__':\n NovaStatedirOwnershipManager('/var/lib/nova').run()\n", "mode": "0700"}}, "skip_reason": "Conditional result was False"} > >TASK [Set docker_config_default fact] ****************************************** >Friday 21 September 2018 08:42:50 -0400 (0:00:00.138) 0:26:13.033 ****** >skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >skipping: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for 
this result", "changed": false}
>skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>
>TASK [Set docker_startup_configs_with_default fact] ****************************
>Friday 21 September 2018 08:42:50 -0400 (0:00:00.171) 0:26:13.204 ******
>skipping: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>
>TASK [Write docker-container-startup-configs] **********************************
>Friday 21 September 2018 08:42:50 -0400 (0:00:00.107) 0:26:13.312 ******
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [Write per-step docker-container-startup-configs] *************************
>Friday 21 September 2018 08:42:50 -0400 (0:00:00.110) 0:26:13.422 ******
>skipping:
[compute-0] => (item={'value': {}, 'key': u'step_1'}) => {"changed": false, "item": {"key": "step_1", "value": {}}, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item={'value': {'neutron_ovs_bridge': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-20.1', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'puppet', u'apply', u'--modulepath', u'/etc/puppet/modules:/usr/share/openstack-puppet/modules', u'--tags', u'file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config', u'-v', u'-e', u'include neutron::agents::ml2::ovs'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/etc/puppet:/etc/puppet:ro', u'/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro', u'/var/run/openvswitch/:/var/run/openvswitch/'], 'net': u'host', 'detach': False, 'privileged': True}, 'nova_statedir_owner': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-09-20.1', 'command': u'/docker-config-scripts/nova_statedir_ownership.py', 'user': u'root', 'volumes': [u'/var/lib/nova:/var/lib/nova:shared', u'/var/lib/docker-config-scripts/:/docker-config-scripts/'], 'detach': False, 'privileged': False}, 'iscsid': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', u'/dev/:/dev/', u'/run/:/run/', u'/sys:/sys', u'/lib/modules:/lib/modules:ro', u'/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_libvirt': {'start_order': 1, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-09-20.1', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/nova_libvirt.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/lib/modules:/lib/modules:ro', u'/dev:/dev', u'/run:/run', u'/sys/fs/cgroup:/sys/fs/cgroup', u'/var/lib/nova:/var/lib/nova:shared', u'/etc/libvirt:/etc/libvirt', u'/var/run/libvirt:/var/run/libvirt', u'/var/lib/libvirt:/var/lib/libvirt', u'/var/log/containers/libvirt:/var/log/libvirt', u'/var/log/libvirt/qemu:/var/log/libvirt/qemu:ro', u'/var/lib/vhost_sockets:/var/lib/vhost_sockets', u'/sys/fs/selinux:/sys/fs/selinux'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_virtlogd': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-09-20.1', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/dev:/dev', u'/run:/run', u'/sys/fs/cgroup:/sys/fs/cgroup', u'/var/lib/nova:/var/lib/nova:shared', u'/var/run/libvirt:/var/run/libvirt', u'/var/lib/libvirt:/var/lib/libvirt', u'/etc/libvirt/qemu:/etc/libvirt/qemu:ro', u'/var/log/libvirt/qemu:/var/log/libvirt/qemu'], 'net': u'host', 'privileged': True, 'restart': u'always'}}, 'key': u'step_3'}) => {"changed": false, "item": {"key": "step_3", "value": {"iscsid": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-20.1", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro", "/dev/:/dev/", "/run/:/run/", "/sys:/sys", "/lib/modules:/lib/modules:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro"]}, "neutron_ovs_bridge": {"command": ["puppet", "apply", "--modulepath", "/etc/puppet/modules:/usr/share/openstack-puppet/modules", "--tags", "file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config", "-v", "-e", "include 
neutron::agents::ml2::ovs"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-20.1", "net": "host", "pid": "host", "privileged": true, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/etc/puppet:/etc/puppet:ro", "/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro", "/var/run/openvswitch/:/var/run/openvswitch/"]}, "nova_libvirt": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-09-20.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova_libvirt.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/lib/modules:/lib/modules:ro", "/dev:/dev", "/run:/run", "/sys/fs/cgroup:/sys/fs/cgroup", "/var/lib/nova:/var/lib/nova:shared", "/etc/libvirt:/etc/libvirt", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt", "/var/log/containers/libvirt:/var/log/libvirt", "/var/log/libvirt/qemu:/var/log/libvirt/qemu:ro", "/var/lib/vhost_sockets:/var/lib/vhost_sockets", "/sys/fs/selinux:/sys/fs/selinux"]}, "nova_statedir_owner": {"command": "/docker-config-scripts/nova_statedir_ownership.py", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-09-20.1", "privileged": false, "user": "root", "volumes": ["/var/lib/nova:/var/lib/nova:shared", "/var/lib/docker-config-scripts/:/docker-config-scripts/"]}, "nova_virtlogd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-09-20.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 0, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/dev:/dev", "/run:/run", "/sys/fs/cgroup:/sys/fs/cgroup", "/var/lib/nova:/var/lib/nova:shared", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt", "/etc/libvirt/qemu:/etc/libvirt/qemu:ro", "/var/log/libvirt/qemu:/var/log/libvirt/qemu"]}}}, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item={'value': {}, 'key': u'step_2'}) => {"changed": false, "item": {"key": "step_2", "value": {}}, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item={'value': {}, 'key': u'step_5'}) => {"changed": false, "item": {"key": "step_5", "value": {}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'cinder_volume_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-20.1' '192.168.24.1:8787/rhosp14/openstack-cinder-volume:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'mysql_image_tag': {'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1' '192.168.24.1:8787/rhosp14/openstack-mariadb:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'mysql_data_ownership': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1', 'command': [u'chown', u'-R', u'mysql:', u'/var/lib/mysql'], 'user': u'root', 'volumes': [u'/var/lib/mysql:/var/lib/mysql'], 'net': u'host', 'detach': False}, 'redis_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-redis:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-redis:2018-09-20.1' '192.168.24.1:8787/rhosp14/openstack-redis:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'mysql_bootstrap': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS', u'KOLLA_BOOTSTRAP=True', u'DB_MAX_TIMEOUT=60', u'DB_CLUSTERCHECK_PASSWORD=01uMEtrcy1XQLgnZ0spBcEeFG', u'DB_ROOT_PASSWORD=VmByi3iDWE'], 'command': [u'bash', u'-ec', u'if [ -e /var/lib/mysql/mysql ]; then exit 0; fi\necho -e 
"\\n[mysqld]\\nwsrep_provider=none" >> /etc/my.cnf\nkolla_set_configs\nsudo -u mysql -E kolla_extend_start\nmysqld_safe --skip-networking --wsrep-on=OFF &\ntimeout ${DB_MAX_TIMEOUT} /bin/bash -c \'until mysqladmin -uroot -p"${DB_ROOT_PASSWORD}" ping 2>/dev/null; do sleep 1; done\'\nmysql -uroot -p"${DB_ROOT_PASSWORD}" -e "CREATE USER \'clustercheck\'@\'localhost\' IDENTIFIED BY \'${DB_CLUSTERCHECK_PASSWORD}\';"\nmysql -uroot -p"${DB_ROOT_PASSWORD}" -e "GRANT PROCESS ON *.* TO \'clustercheck\'@\'localhost\' WITH GRANT OPTION;"\ntimeout ${DB_MAX_TIMEOUT} mysqladmin -uroot -p"${DB_ROOT_PASSWORD}" shutdown'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/mysql.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro', u'/var/lib/mysql:/var/lib/mysql'], 'net': u'host', 'detach': False}, 'haproxy_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-20.1' '192.168.24.1:8787/rhosp14/openstack-haproxy:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'rabbitmq_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-20.1' '192.168.24.1:8787/rhosp14/openstack-rabbitmq:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'cinder_backup_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-20.1' '192.168.24.1:8787/rhosp14/openstack-cinder-backup:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'rabbitmq_bootstrap': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS', u'KOLLA_BOOTSTRAP=True', u'RABBITMQ_CLUSTER_COOKIE=bo2CgGlbFlVu6tTAeUPw'], 'volumes': [u'/var/lib/kolla/config_files/rabbitmq.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro', 
u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/var/lib/rabbitmq:/var/lib/rabbitmq'], 'net': u'host', 'privileged': False}, 'memcached': {'start_order': 0, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-memcached:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'source /etc/sysconfig/memcached; /usr/bin/memcached -p ${PORT} -u ${USER} -m ${CACHESIZE} -c ${MAXCONN} $OPTIONS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}}, 'key': u'step_1'}) => {"changed": false, "item": {"key": "step_1", "value": {"cinder_backup_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-20.1' '192.168.24.1:8787/rhosp14/openstack-cinder-backup:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-20.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "cinder_volume_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-20.1' '192.168.24.1:8787/rhosp14/openstack-cinder-volume:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-20.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "haproxy_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-20.1' '192.168.24.1:8787/rhosp14/openstack-haproxy:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-20.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "memcached": {"command": ["/bin/bash", "-c", "source /etc/sysconfig/memcached; /usr/bin/memcached -p ${PORT} -u ${USER} -m ${CACHESIZE} -c ${MAXCONN} $OPTIONS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-memcached:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "start_order": 0, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro"]}, "mysql_bootstrap": {"command": ["bash", "-ec", "if [ -e /var/lib/mysql/mysql ]; then exit 0; fi\necho -e \"\\n[mysqld]\\nwsrep_provider=none\" >> /etc/my.cnf\nkolla_set_configs\nsudo -u mysql -E kolla_extend_start\nmysqld_safe --skip-networking --wsrep-on=OFF &\ntimeout ${DB_MAX_TIMEOUT} /bin/bash -c 'until mysqladmin -uroot -p\"${DB_ROOT_PASSWORD}\" ping 2>/dev/null; do sleep 1; done'\nmysql -uroot -p\"${DB_ROOT_PASSWORD}\" -e \"CREATE USER 'clustercheck'@'localhost' IDENTIFIED BY '${DB_CLUSTERCHECK_PASSWORD}';\"\nmysql -uroot -p\"${DB_ROOT_PASSWORD}\" -e \"GRANT PROCESS ON *.* TO 'clustercheck'@'localhost' WITH GRANT OPTION;\"\ntimeout ${DB_MAX_TIMEOUT} mysqladmin -uroot -p\"${DB_ROOT_PASSWORD}\" shutdown"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "KOLLA_BOOTSTRAP=True", "DB_MAX_TIMEOUT=60", "DB_CLUSTERCHECK_PASSWORD=01uMEtrcy1XQLgnZ0spBcEeFG", "DB_ROOT_PASSWORD=VmByi3iDWE"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/mysql.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro", "/var/lib/mysql:/var/lib/mysql"]}, "mysql_data_ownership": {"command": ["chown", "-R", "mysql:", "/var/lib/mysql"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1", "net": "host", "start_order": 0, "user": "root", "volumes": ["/var/lib/mysql:/var/lib/mysql"]}, "mysql_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1' '192.168.24.1:8787/rhosp14/openstack-mariadb:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "rabbitmq_bootstrap": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "KOLLA_BOOTSTRAP=True", "RABBITMQ_CLUSTER_COOKIE=bo2CgGlbFlVu6tTAeUPw"], "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-20.1", "net": "host", "privileged": false, "start_order": 0, "volumes": ["/var/lib/kolla/config_files/rabbitmq.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro", "/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/var/lib/rabbitmq:/var/lib/rabbitmq"]}, 
"rabbitmq_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-20.1' '192.168.24.1:8787/rhosp14/openstack-rabbitmq:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-20.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "redis_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-redis:2018-09-20.1' '192.168.24.1:8787/rhosp14/openstack-redis:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-09-20.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}}}, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item={'value': {}, 'key': u'step_1'}) => {"changed": false, "item": {"key": "step_1", "value": {}}, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item={'value': {}, 'key': u'step_3'}) => {"changed": false, "item": {"key": "step_3", "value": {}}, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item={'value': {}, 'key': u'step_2'}) => {"changed": false, "item": {"key": "step_2", "value": {}}, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item={'value': {'ceilometer_agent_compute': {'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-compute:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro', u'/var/run/libvirt:/var/run/libvirt:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_libvirt_init_secret': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u"/usr/bin/virsh secret-define --file /etc/nova/secret.xml && /usr/bin/virsh secret-set-value --secret '8fedf068-bd95-11e8-ba69-5254006eda59' --base64 'AQC93KRbAAAAABAA70vXmXELJWdqPtg4IeQHzw=='"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', 
u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova:ro', u'/etc/libvirt:/etc/libvirt', u'/var/run/libvirt:/var/run/libvirt', u'/var/lib/libvirt:/var/lib/libvirt'], 'detach': False, 'privileged': False}, 'neutron_ovs_agent': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-09-20.1', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_migration_target': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/etc/ssh/:/host-ssh/:ro', u'/run:/run', u'/var/lib/nova:/var/lib/nova:shared'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_compute': {'ipc': u'host', 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'nova', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', 
u'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/dev:/dev', u'/lib/modules:/lib/modules:ro', u'/run:/run', u'/var/lib/nova:/var/lib/nova:shared', u'/var/lib/libvirt:/var/lib/libvirt', u'/sys/class/net:/sys/class/net', u'/sys/bus/pci:/sys/bus/pci'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'logrotate_crond': {'image': u'192.168.24.1:8787/rhosp14/openstack-cron:2018-09-20.1', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers:/var/log/containers'], 'net': u'none', 'privileged': True, 'restart': u'always'}}, 'key': u'step_4'}) => {"changed": false, "item": {"key": "step_4", "value": {"ceilometer_agent_compute": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-compute:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/run/libvirt:/var/run/libvirt:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-09-20.1", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", 
"/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}, "neutron_ovs_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-09-20.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch"]}, "nova_compute": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-09-20.1", "ipc": "host", "net": "host", "privileged": true, "restart": "always", "ulimit": ["nofile=1024"], "user": "nova", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/dev:/dev", "/lib/modules:/lib/modules:ro", "/run:/run", "/var/lib/nova:/var/lib/nova:shared", "/var/lib/libvirt:/var/lib/libvirt", "/sys/class/net:/sys/class/net", "/sys/bus/pci:/sys/bus/pci"]}, "nova_libvirt_init_secret": {"command": ["/bin/bash", "-c", "/usr/bin/virsh secret-define --file /etc/nova/secret.xml && /usr/bin/virsh secret-set-value --secret '8fedf068-bd95-11e8-ba69-5254006eda59' --base64 'AQC93KRbAAAAABAA70vXmXELJWdqPtg4IeQHzw=='"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-09-20.1", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", 
"/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova:ro", "/etc/libvirt:/etc/libvirt", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt"]}, "nova_migration_target": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-09-20.1", "net": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/ssh/:/host-ssh/:ro", "/run:/run", "/var/lib/nova:/var/lib/nova:shared"]}}}, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item={'value': {}, 'key': u'step_6'}) => {"changed": false, "item": {"key": "step_6", "value": {}}, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item={'value': {}, 'key': u'step_5'}) => {"changed": false, "item": {"key": "step_5", "value": {}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'nova_placement': {'start_order': 1, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-placement:/var/log/httpd', u'/var/lib/kolla/config_files/nova_placement.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_placement/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'restart': u'always'}, 'swift_rsync_fix': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'sed -i "/pid file/d" /var/lib/kolla/config_files/src/etc/rsyncd.conf'], 'user': u'root', 'volumes': [u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:rw'], 'net': u'host', 'detach': False}, 'nova_db_sync': {'start_order': 3, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1', 'command': u"/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage db sync'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', 
u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro'], 'net': u'host', 'detach': False}, 'heat_engine_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-09-20.1', 'command': u"/usr/bin/bootstrap_host_exec heat_engine su heat -s /bin/bash -c 'heat-manage db_sync'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/lib/config-data/heat/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/heat/etc/heat/:/etc/heat/:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'swift_copy_rings': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-20.1', 'detach': False, 'command': [u'/bin/bash', u'-c', u'cp -v -a -t /etc/swift /swift_ringbuilder/etc/swift/*.gz /swift_ringbuilder/etc/swift/*.builder /swift_ringbuilder/etc/swift/backups'], 'user': u'root', 'volumes': [u'/var/lib/config-data/puppet-generated/swift/etc/swift:/etc/swift:rw', u'/var/lib/config-data/swift_ringbuilder:/swift_ringbuilder:ro']}, 'nova_api_ensure_default_cell': {'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1', 'command': u'/usr/bin/bootstrap_host_exec nova_api /nova_api_ensure_default_cell.sh', 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/docker-config-scripts/nova_api_ensure_default_cell.sh:/nova_api_ensure_default_cell.sh:ro'], 'net': u'host', 'detach': False}, 'keystone_cron': {'start_order': 4, 
'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'/bin/bash', u'-c', u'/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd', u'/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'panko_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-panko-api:2018-09-20.1', 'command': u"/usr/bin/bootstrap_host_exec panko_api su panko -s /bin/bash -c '/usr/bin/panko-dbsync '", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/panko:/var/log/panko', u'/var/log/containers/httpd/panko-api:/var/log/httpd', u'/var/lib/config-data/panko/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/panko/etc/panko:/etc/panko:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'cinder_backup_init_logs': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'user': u'root', 'volumes': [u'/var/log/containers/cinder:/var/log/cinder'], 'privileged': False}, 'nova_api_db_sync': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1', 'command': u"/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage api_db sync'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro'], 'net': u'host', 'detach': False}, 'iscsid': {'start_order': 2, 'healthcheck': {'test': 
u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', u'/dev/:/dev/', u'/run/:/run/', u'/sys:/sys', u'/lib/modules:/lib/modules:ro', u'/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'keystone_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-20.1', 'environment': [u'KOLLA_BOOTSTRAP=True', u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'/usr/bin/bootstrap_host_exec', u'keystone', u'/usr/local/bin/kolla_start'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd', u'/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'detach': False, 'privileged': False}, 'ceilometer_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'start_order': 0, 'volumes': [u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'user': u'root'}, 'keystone': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd', u'/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'aodh_db_sync': {'image': 
u'192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-09-20.1', 'command': u'/usr/bin/bootstrap_host_exec aodh_api su aodh -s /bin/bash -c /usr/bin/aodh-dbsync', 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/aodh/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/aodh/etc/aodh/:/etc/aodh/:ro', u'/var/log/containers/aodh:/var/log/aodh', u'/var/log/containers/httpd/aodh-api:/var/log/httpd'], 'net': u'host', 'detach': False, 'privileged': False}, 'cinder_volume_init_logs': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'user': u'root', 'volumes': [u'/var/log/containers/cinder:/var/log/cinder'], 'privileged': False}, 'neutron_ovs_bridge': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-20.1', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'puppet', u'apply', u'--modulepath', u'/etc/puppet/modules:/usr/share/openstack-puppet/modules', u'--tags', u'file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config', u'-v', u'-e', u'include neutron::agents::ml2::ovs'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/etc/puppet:/etc/puppet:ro', u'/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro', u'/var/run/openvswitch/:/var/run/openvswitch/'], 'net': u'host', 'detach': False, 'privileged': True}, 'cinder_api_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-20.1', 'command': [u'/usr/bin/bootstrap_host_exec', u'cinder_api', u"su cinder -s /bin/bash -c 'cinder-manage db sync --bump-versions'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/lib/config-data/cinder/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro', u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd'], 'net': u'host', 'detach': False, 'privileged': False}, 'nova_api_map_cell0': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1', 'command': u"/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage cell_v2 map_cell0'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro'], 'net': u'host', 'detach': False}, 'glance_api_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-glance-api:2018-09-20.1', 'environment': [u'KOLLA_BOOTSTRAP=True', u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': u"/usr/bin/bootstrap_host_exec glance_api su glance -s /bin/bash -c '/usr/local/bin/kolla_start'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/glance:/var/log/glance', u'/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/glance:/var/lib/glance:slave'], 'net': u'host', 'detach': False, 'privileged': False}, 'neutron_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-20.1', 'command': [u'/usr/bin/bootstrap_host_exec', u'neutron_api', u'neutron-db-manage', u'upgrade', u'heads'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/log/containers/httpd/neutron-api:/var/log/httpd', u'/var/lib/config-data/neutron/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', 
u'/var/lib/config-data/neutron/etc/neutron:/etc/neutron:ro', u'/var/lib/config-data/neutron/usr/share/neutron:/usr/share/neutron:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'sahara_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-09-20.1', 'command': u"/usr/bin/bootstrap_host_exec sahara_api su sahara -s /bin/bash -c 'sahara-db-manage --config-file /etc/sahara/sahara.conf upgrade head'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/sahara/etc/sahara/:/etc/sahara/:ro', u'/lib/modules:/lib/modules:ro', u'/var/lib/sahara:/var/lib/sahara', u'/var/log/containers/sahara:/var/log/sahara'], 'net': u'host', 'detach': False, 'privileged': False}, 'keystone_bootstrap': {'action': u'exec', 'start_order': 3, 'command': [u'keystone', u'/usr/bin/bootstrap_host_exec', u'keystone', u'keystone-manage', u'bootstrap', u'--bootstrap-password', u'wIdMrXYZVQy05wYJArw8Vja2H'], 'user': u'root'}, 'horizon': {'image': u'192.168.24.1:8787/rhosp14/openstack-horizon:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS', u'ENABLE_IRONIC=yes', u'ENABLE_MANILA=yes', u'ENABLE_HEAT=yes', u'ENABLE_MISTRAL=yes', u'ENABLE_OCTAVIA=yes', u'ENABLE_SAHARA=yes', u'ENABLE_CLOUDKITTY=no', u'ENABLE_FREEZER=no', u'ENABLE_FWAAS=no', u'ENABLE_KARBOR=no', u'ENABLE_DESIGNATE=no', u'ENABLE_MAGNUM=no', u'ENABLE_MURANO=no', u'ENABLE_NEUTRON_LBAAS=no', u'ENABLE_SEARCHLIGHT=no', u'ENABLE_SENLIN=no', u'ENABLE_SOLUM=no', u'ENABLE_TACKER=no', u'ENABLE_TROVE=no', u'ENABLE_WATCHER=no', u'ENABLE_ZAQAR=no', u'ENABLE_ZUN=no'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/horizon/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/horizon:/var/log/horizon', u'/var/log/containers/httpd/horizon:/var/log/httpd', u'/var/www/:/var/www/:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_setup_srv': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-20.1', 'command': [u'chown', u'-R', u'swift:', u'/srv/node'], 'user': u'root', 'volumes': [u'/srv/node:/srv/node']}}, 'key': u'step_3'}) => {"changed": false, "item": {"key": "step_3", "value": {"aodh_db_sync": {"command": "/usr/bin/bootstrap_host_exec aodh_api su aodh -s /bin/bash -c /usr/bin/aodh-dbsync", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-09-20.1", "net": "host", "privileged": false, "user": "root", "volumes": 
["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/aodh/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/aodh/etc/aodh/:/etc/aodh/:ro", "/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd"]}, "ceilometer_init_log": {"command": ["/bin/bash", "-c", "chown -R ceilometer:ceilometer /var/log/ceilometer"], "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-09-20.1", "start_order": 0, "user": "root", "volumes": ["/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_api_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_api", "su cinder -s /bin/bash -c 'cinder-manage db sync --bump-versions'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-20.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/cinder/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_backup_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-20.1", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "cinder_volume_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-20.1", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "glance_api_db_sync": {"command": "/usr/bin/bootstrap_host_exec glance_api su glance -s /bin/bash -c '/usr/local/bin/kolla_start'", "detach": false, "environment": ["KOLLA_BOOTSTRAP=True", "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-glance-api:2018-09-20.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", 
"/etc/puppet:/etc/puppet:ro", "/var/log/containers/glance:/var/log/glance", "/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/glance:/var/lib/glance:slave"]}, "heat_engine_db_sync": {"command": "/usr/bin/bootstrap_host_exec heat_engine su heat -s /bin/bash -c 'heat-manage db_sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-09-20.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/lib/config-data/heat/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/heat/etc/heat/:/etc/heat/:ro"]}, "horizon": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "ENABLE_IRONIC=yes", "ENABLE_MANILA=yes", "ENABLE_HEAT=yes", "ENABLE_MISTRAL=yes", "ENABLE_OCTAVIA=yes", "ENABLE_SAHARA=yes", "ENABLE_CLOUDKITTY=no", "ENABLE_FREEZER=no", "ENABLE_FWAAS=no", "ENABLE_KARBOR=no", "ENABLE_DESIGNATE=no", "ENABLE_MAGNUM=no", "ENABLE_MURANO=no", "ENABLE_NEUTRON_LBAAS=no", "ENABLE_SEARCHLIGHT=no", "ENABLE_SENLIN=no", "ENABLE_SOLUM=no", "ENABLE_TACKER=no", "ENABLE_TROVE=no", "ENABLE_WATCHER=no", "ENABLE_ZAQAR=no", "ENABLE_ZUN=no"], "image": "192.168.24.1:8787/rhosp14/openstack-horizon:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/horizon/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/horizon:/var/log/horizon", "/var/log/containers/httpd/horizon:/var/log/httpd", "/var/www/:/var/www/:ro", "", ""]}, "iscsid": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-iscsid:2018-09-20.1", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", 
"/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro", "/dev/:/dev/", "/run/:/run/", "/sys:/sys", "/lib/modules:/lib/modules:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro"]}, "keystone": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro", "", ""]}, "keystone_bootstrap": {"action": "exec", "command": ["keystone", "/usr/bin/bootstrap_host_exec", "keystone", "keystone-manage", "bootstrap", "--bootstrap-password", "wIdMrXYZVQy05wYJArw8Vja2H"], "start_order": 3, "user": "root"}, "keystone_cron": {"command": ["/bin/bash", "-c", "/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n"], "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "start_order": 4, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro"]}, "keystone_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "keystone", "/usr/local/bin/kolla_start"], "detach": false, "environment": ["KOLLA_BOOTSTRAP=True", "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-20.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", 
"/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro", "", ""]}, "neutron_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "neutron_api", "neutron-db-manage", "upgrade", "heads"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-20.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd", "/var/lib/config-data/neutron/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/neutron/etc/neutron:/etc/neutron:ro", "/var/lib/config-data/neutron/usr/share/neutron:/usr/share/neutron:ro"]}, "neutron_ovs_bridge": {"command": ["puppet", "apply", "--modulepath", "/etc/puppet/modules:/usr/share/openstack-puppet/modules", "--tags", "file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config", "-v", "-e", "include neutron::agents::ml2::ovs"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-20.1", "net": "host", "pid": "host", "privileged": true, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/etc/puppet:/etc/puppet:ro", "/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro", "/var/run/openvswitch/:/var/run/openvswitch/"]}, "nova_api_db_sync": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage api_db sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", 
"/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_api_ensure_default_cell": {"command": "/usr/bin/bootstrap_host_exec nova_api /nova_api_ensure_default_cell.sh", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/docker-config-scripts/nova_api_ensure_default_cell.sh:/nova_api_ensure_default_cell.sh:ro"]}, "nova_api_map_cell0": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage cell_v2 map_cell0'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_db_sync": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage db sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1", "net": "host", "start_order": 3, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", 
"/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_placement": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-09-20.1", "net": "host", "restart": "always", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-placement:/var/log/httpd", "/var/lib/kolla/config_files/nova_placement.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_placement/:/var/lib/kolla/config_files/src:ro", "", ""]}, "panko_db_sync": {"command": "/usr/bin/bootstrap_host_exec panko_api su panko -s /bin/bash -c '/usr/bin/panko-dbsync '", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-09-20.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd", "/var/lib/config-data/panko/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/panko/etc/panko:/etc/panko:ro"]}, "sahara_db_sync": {"command": "/usr/bin/bootstrap_host_exec sahara_api su sahara -s /bin/bash -c 'sahara-db-manage --config-file /etc/sahara/sahara.conf upgrade head'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-09-20.1", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/sahara/etc/sahara/:/etc/sahara/:ro", "/lib/modules:/lib/modules:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "swift_copy_rings": {"command": ["/bin/bash", "-c", "cp -v -a -t /etc/swift /swift_ringbuilder/etc/swift/*.gz /swift_ringbuilder/etc/swift/*.builder /swift_ringbuilder/etc/swift/backups"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-20.1", "user": 
"root", "volumes": ["/var/lib/config-data/puppet-generated/swift/etc/swift:/etc/swift:rw", "/var/lib/config-data/swift_ringbuilder:/swift_ringbuilder:ro"]}, "swift_rsync_fix": {"command": ["/bin/bash", "-c", "sed -i \"/pid file/d\" /var/lib/kolla/config_files/src/etc/rsyncd.conf"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-20.1", "net": "host", "user": "root", "volumes": ["/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:rw"]}, "swift_setup_srv": {"command": ["chown", "-R", "swift:", "/srv/node"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-20.1", "user": "root", "volumes": ["/srv/node:/srv/node"]}}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'gnocchi_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'chown -R gnocchi:gnocchi /var/log/gnocchi'], 'user': u'root', 'volumes': [u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/var/log/containers/httpd/gnocchi-api:/var/log/httpd']}, 'mysql_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1537531337'], 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,galera_ready,mysql_database,mysql_grant,mysql_user', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::mysql_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/mysql:/var/lib/mysql:rw'], 'net': u'host', 'detach': False}, 'gnocchi_init_lib': {'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'chown -R gnocchi:gnocchi /var/lib/gnocchi'], 'user': u'root', 'volumes': [u'/var/lib/gnocchi:/var/lib/gnocchi']}, 'cinder_api_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'privileged': False, 'volumes': [u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd'], 'user': u'root'}, 'create_dnsmasq_wrapper': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-09-20.1', 'pid': u'host', 'command': [u'/docker_puppet_apply.sh', u'4', u'file', u'include ::tripleo::profile::base::neutron::dhcp_agent_wrappers'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron'], 'net': u'host', 'detach': False}, 'panko_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-panko-api:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'chown -R panko:panko /var/log/panko'], 'user': u'root', 'volumes': [u'/var/log/containers/panko:/var/log/panko', u'/var/log/containers/httpd/panko-api:/var/log/httpd']}, 'redis_init_bundle': {'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-redis:2018-09-20.1', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1537531337'], 'config_volume': u'redis_init_bundle', 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::redis_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False}, 'cinder_scheduler_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'privileged': False, 'volumes': [u'/var/log/containers/cinder:/var/log/cinder'], 'user': u'root'}, 'glance_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-glance-api:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'chown -R glance:glance /var/log/glance'], 'privileged': False, 'volumes': [u'/var/log/containers/glance:/var/log/glance'], 'user': u'root'}, 'clustercheck': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/clustercheck/:/var/lib/kolla/config_files/src:ro', u'/var/lib/mysql:/var/lib/mysql'], 'net': u'host', 'restart': u'always'}, 'haproxy_init_bundle': {'start_order': 3, 'image': u'192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-20.1', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1537531337'], 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,tripleo::firewall::rule,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ip,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation', u'include ::tripleo::profile::base::pacemaker; include ::tripleo::profile::pacemaker::haproxy_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro', u'/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro', u'/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro', u'/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro', u'/etc/sysconfig:/etc/sysconfig:rw', u'/usr/libexec/iptables:/usr/libexec/iptables:ro', u'/usr/libexec/initscripts/legacy-actions:/usr/libexec/initscripts/legacy-actions:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False, 'privileged': True}, 'neutron_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'chown -R neutron:neutron /var/log/neutron'], 'privileged': False, 'volumes': [u'/var/log/containers/neutron:/var/log/neutron', u'/var/log/containers/httpd/neutron-api:/var/log/httpd'], 'user': u'root'}, 'mysql_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1', 'config_volume': u'mysql', 'command': [u'/usr/bin/bootstrap_host_exec', u'mysql', u'if /usr/sbin/pcs resource show galera-bundle; then /usr/sbin/pcs resource restart --wait=600 galera-bundle; echo "galera-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'rabbitmq_init_bundle': {'start_order': 1, 'image': 
u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-20.1', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1537531337'], 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,rabbitmq_policy,rabbitmq_user,rabbitmq_ready', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::rabbitmq_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/bin/true:/bin/epmd'], 'net': u'host', 'detach': False}, 'nova_api_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'chown -R nova:nova /var/log/nova'], 'privileged': False, 'volumes': [u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd'], 'user': u'root'}, 'haproxy_restart_bundle': {'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-20.1', 'config_volume': u'haproxy', 'command': [u'/usr/bin/bootstrap_host_exec', u'haproxy', u'if /usr/sbin/pcs resource show haproxy-bundle; then /usr/sbin/pcs resource restart --wait=600 haproxy-bundle; echo "haproxy-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/haproxy/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'create_keepalived_wrapper': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-09-20.1', 'pid': u'host', 'command': [u'/docker_puppet_apply.sh', u'4', u'file', u'include ::tripleo::profile::base::neutron::l3_agent_wrappers'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', 
u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron'], 'net': u'host', 'detach': False}, 'rabbitmq_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-20.1', 'config_volume': u'rabbitmq', 'command': [u'/usr/bin/bootstrap_host_exec', u'rabbitmq', u'if /usr/sbin/pcs resource show rabbitmq-bundle; then /usr/sbin/pcs resource restart --wait=600 rabbitmq-bundle; echo "rabbitmq-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'horizon_fix_perms': {'image': u'192.168.24.1:8787/rhosp14/openstack-horizon:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'touch /var/log/horizon/horizon.log && chown -R apache:apache /var/log/horizon && chmod -R a+rx /etc/openstack-dashboard'], 'user': u'root', 'volumes': [u'/var/log/containers/horizon:/var/log/horizon', u'/var/log/containers/httpd/horizon:/var/log/httpd', u'/var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard:/etc/openstack-dashboard']}, 'aodh_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'chown -R aodh:aodh /var/log/aodh'], 'user': u'root', 'volumes': [u'/var/log/containers/aodh:/var/log/aodh', u'/var/log/containers/httpd/aodh-api:/var/log/httpd']}, 'nova_metadata_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'chown -R nova:nova /var/log/nova'], 'privileged': False, 'volumes': [u'/var/log/containers/nova:/var/log/nova'], 'user': u'root'}, 'redis_restart_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-redis:2018-09-20.1', 'config_volume': u'redis', 'command': [u'/usr/bin/bootstrap_host_exec', u'redis', u'if /usr/sbin/pcs resource show redis-bundle; then /usr/sbin/pcs resource restart --wait=600 redis-bundle; echo "redis-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/redis/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'heat_init_log': {'image': 
u'192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'chown -R heat:heat /var/log/heat'], 'user': u'root', 'volumes': [u'/var/log/containers/heat:/var/log/heat']}, 'nova_placement_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'chown -R nova:nova /var/log/nova'], 'start_order': 1, 'volumes': [u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-placement:/var/log/httpd'], 'user': u'root'}, 'keystone_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-20.1', 'command': [u'/bin/bash', u'-c', u'chown -R keystone:keystone /var/log/keystone'], 'start_order': 1, 'volumes': [u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd'], 'user': u'root'}}, 'key': u'step_2'}) => {"changed": false, "item": {"key": "step_2", "value": {"aodh_init_log": {"command": ["/bin/bash", "-c", "chown -R aodh:aodh /var/log/aodh"], "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-09-20.1", "user": "root", "volumes": ["/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd"]}, "cinder_api_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-20.1", "privileged": false, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_scheduler_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-09-20.1", "privileged": false, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "clustercheck": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1", "net": "host", "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/clustercheck/:/var/lib/kolla/config_files/src:ro", "/var/lib/mysql:/var/lib/mysql"]}, "create_dnsmasq_wrapper": {"command": ["/docker_puppet_apply.sh", "4", "file", "include ::tripleo::profile::base::neutron::dhcp_agent_wrappers"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-09-20.1", "net": "host", "pid": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", 
"/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron"]}, "create_keepalived_wrapper": {"command": ["/docker_puppet_apply.sh", "4", "file", "include ::tripleo::profile::base::neutron::l3_agent_wrappers"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-09-20.1", "net": "host", "pid": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron"]}, "glance_init_logs": {"command": ["/bin/bash", "-c", "chown -R glance:glance /var/log/glance"], "image": "192.168.24.1:8787/rhosp14/openstack-glance-api:2018-09-20.1", "privileged": false, "user": "root", "volumes": ["/var/log/containers/glance:/var/log/glance"]}, "gnocchi_init_lib": {"command": ["/bin/bash", "-c", "chown -R gnocchi:gnocchi /var/lib/gnocchi"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-20.1", "user": "root", "volumes": ["/var/lib/gnocchi:/var/lib/gnocchi"]}, "gnocchi_init_log": {"command": ["/bin/bash", "-c", "chown -R gnocchi:gnocchi /var/log/gnocchi"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-20.1", "user": "root", "volumes": ["/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd"]}, "haproxy_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,tripleo::firewall::rule,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ip,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation", "include ::tripleo::profile::base::pacemaker; include ::tripleo::profile::pacemaker::haproxy_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1537531337"], "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-20.1", "net": "host", "privileged": true, "start_order": 3, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro", "/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro", "/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro", 
"/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro", "/etc/sysconfig:/etc/sysconfig:rw", "/usr/libexec/iptables:/usr/libexec/iptables:ro", "/usr/libexec/initscripts/legacy-actions:/usr/libexec/initscripts/legacy-actions:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "haproxy_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "haproxy", "if /usr/sbin/pcs resource show haproxy-bundle; then /usr/sbin/pcs resource restart --wait=600 haproxy-bundle; echo \"haproxy-bundle restart invoked\"; fi"], "config_volume": "haproxy", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-09-20.1", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/haproxy/:/var/lib/kolla/config_files/src:ro"]}, "heat_init_log": {"command": ["/bin/bash", "-c", "chown -R heat:heat /var/log/heat"], "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-09-20.1", "user": "root", "volumes": ["/var/log/containers/heat:/var/log/heat"]}, "horizon_fix_perms": {"command": ["/bin/bash", "-c", "touch /var/log/horizon/horizon.log && chown -R apache:apache /var/log/horizon && chmod -R a+rx /etc/openstack-dashboard"], "image": "192.168.24.1:8787/rhosp14/openstack-horizon:2018-09-20.1", "user": "root", "volumes": ["/var/log/containers/horizon:/var/log/horizon", "/var/log/containers/httpd/horizon:/var/log/httpd", "/var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard:/etc/openstack-dashboard"]}, "keystone_init_log": {"command": ["/bin/bash", "-c", "chown -R keystone:keystone /var/log/keystone"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-20.1", "start_order": 1, "user": "root", "volumes": ["/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd"]}, "mysql_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,galera_ready,mysql_database,mysql_grant,mysql_user", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::mysql_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1537531337"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", 
"/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/mysql:/var/lib/mysql:rw"]}, "mysql_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "mysql", "if /usr/sbin/pcs resource show galera-bundle; then /usr/sbin/pcs resource restart --wait=600 galera-bundle; echo \"galera-bundle restart invoked\"; fi"], "config_volume": "mysql", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-09-20.1", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro"]}, "neutron_init_logs": {"command": ["/bin/bash", "-c", "chown -R neutron:neutron /var/log/neutron"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-20.1", "privileged": false, "user": "root", "volumes": ["/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd"]}, "nova_api_init_logs": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1", "privileged": false, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd"]}, "nova_metadata_init_log": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1", "privileged": false, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova"]}, "nova_placement_init_log": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-09-20.1", "start_order": 1, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-placement:/var/log/httpd"]}, "panko_init_log": {"command": ["/bin/bash", "-c", "chown -R panko:panko /var/log/panko"], "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-09-20.1", "user": "root", "volumes": ["/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd"]}, "rabbitmq_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,rabbitmq_policy,rabbitmq_user,rabbitmq_ready", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::rabbitmq_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1537531337"], "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-20.1", "net": "host", "start_order": 1, "user": "root", "volumes": 
["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/bin/true:/bin/epmd"]}, "rabbitmq_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "rabbitmq", "if /usr/sbin/pcs resource show rabbitmq-bundle; then /usr/sbin/pcs resource restart --wait=600 rabbitmq-bundle; echo \"rabbitmq-bundle restart invoked\"; fi"], "config_volume": "rabbitmq", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-09-20.1", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro"]}, "redis_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::redis_bundle", "--debug"], "config_volume": "redis_init_bundle", "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1537531337"], "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-09-20.1", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "redis_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "redis", "if /usr/sbin/pcs resource show redis-bundle; then /usr/sbin/pcs resource restart --wait=600 redis-bundle; echo \"redis-bundle restart invoked\"; fi"], "config_volume": "redis", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-09-20.1", "net": "host", "start_order": 1, "user": "root", "volumes": 
["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/redis/:/var/lib/kolla/config_files/src:ro"]}}}, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item={'value': {'logrotate_crond': {'image': u'192.168.24.1:8787/rhosp14/openstack-cron:2018-09-20.1', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers:/var/log/containers'], 'net': u'none', 'privileged': True, 'restart': u'always'}}, 'key': u'step_4'}) => {"changed": false, "item": {"key": "step_4", "value": {"logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-09-20.1", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}}}, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => (item={'value': {}, 'key': u'step_6'}) => {"changed": false, "item": {"key": "step_6", "value": {}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'cinder_volume_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-20.1', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1537531337'], 'command': [u'/docker_puppet_apply.sh', u'5', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::volume_bundle', u'--debug --verbose'], 
'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False}, 'gnocchi_api': {'start_order': 1, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/gnocchi:/var/lib/gnocchi', u'/var/lib/kolla/config_files/gnocchi_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/var/log/containers/httpd/gnocchi-api:/var/log/httpd', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'gnocchi_statsd': {'start_order': 1, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-statsd:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/gnocchi_statsd.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/gnocchi:/var/lib/gnocchi'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'cinder_backup_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-20.1', 'config_volume': u'cinder', 'command': [u'/usr/bin/bootstrap_host_exec', u'cinder_backup', u'if /usr/sbin/pcs resource show openstack-cinder-backup; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-backup; echo "openstack-cinder-backup restart invoked"; fi'], 'user': u'root', 'volumes': 
[u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'gnocchi_metricd': {'start_order': 1, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-metricd:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/gnocchi_metricd.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/gnocchi:/var/lib/gnocchi'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_api_discover_hosts': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1537531337'], 'command': u'/usr/bin/bootstrap_host_exec nova_api /nova_api_discover_hosts.sh', 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/docker-config-scripts/nova_api_discover_hosts.sh:/nova_api_discover_hosts.sh:ro'], 'net': u'host', 'detach': False}, 'ceilometer_gnocchi_upgrade': {'start_order': 99, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-20.1', 'command': [u'/usr/bin/bootstrap_host_exec', u'ceilometer_agent_central', u"su ceilometer -s /bin/bash -c 'for n in {1..10}; do 
/usr/bin/ceilometer-upgrade --skip-metering-database && exit 0 || sleep 30; done; exit 1'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/ceilometer/etc/ceilometer/:/etc/ceilometer/:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'detach': False, 'privileged': False}, 'cinder_volume_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-20.1', 'config_volume': u'cinder', 'command': [u'/usr/bin/bootstrap_host_exec', u'cinder_volume', u'if /usr/sbin/pcs resource show openstack-cinder-volume; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-volume; echo "openstack-cinder-volume restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'cinder_backup_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-20.1', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1537531337'], 'command': [u'/docker_puppet_apply.sh', u'5', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::backup_bundle', u'--debug --verbose'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False}, 'gnocchi_db_sync': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', 
u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/gnocchi_db_sync.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/lib/gnocchi:/var/lib/gnocchi', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/var/log/containers/httpd/gnocchi-api:/var/log/httpd', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro'], 'net': u'host', 'detach': False, 'privileged': False}}, 'key': u'step_5'}) => {"changed": false, "item": {"key": "step_5", "value": {"ceilometer_gnocchi_upgrade": {"command": ["/usr/bin/bootstrap_host_exec", "ceilometer_agent_central", "su ceilometer -s /bin/bash -c 'for n in {1..10}; do /usr/bin/ceilometer-upgrade --skip-metering-database && exit 0 || sleep 30; done; exit 1'"], "detach": false, "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-20.1", "net": "host", "privileged": false, "start_order": 99, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/ceilometer/etc/ceilometer/:/etc/ceilometer/:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_backup_init_bundle": {"command": ["/docker_puppet_apply.sh", "5", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::backup_bundle", "--debug --verbose"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1537531337"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-20.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "cinder_backup_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_backup", "if /usr/sbin/pcs resource show openstack-cinder-backup; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-backup; echo 
\"openstack-cinder-backup restart invoked\"; fi"], "config_volume": "cinder", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-09-20.1", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro"]}, "cinder_volume_init_bundle": {"command": ["/docker_puppet_apply.sh", "5", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::volume_bundle", "--debug --verbose"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1537531337"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-20.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "cinder_volume_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_volume", "if /usr/sbin/pcs resource show openstack-cinder-volume; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-volume; echo \"openstack-cinder-volume restart invoked\"; fi"], "config_volume": "cinder", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-09-20.1", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro"]}, "gnocchi_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", 
"start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/gnocchi:/var/lib/gnocchi", "/var/lib/kolla/config_files/gnocchi_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "", ""]}, "gnocchi_db_sync": {"detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-09-20.1", "net": "host", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_db_sync.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/lib/gnocchi:/var/lib/gnocchi", "/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro"]}, "gnocchi_metricd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-metricd:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_metricd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/gnocchi:/var/lib/gnocchi"]}, "gnocchi_statsd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-statsd:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", 
"/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_statsd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/gnocchi:/var/lib/gnocchi"]}, "nova_api_discover_hosts": {"command": "/usr/bin/bootstrap_host_exec nova_api /nova_api_discover_hosts.sh", "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1537531337"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/docker-config-scripts/nova_api_discover_hosts.sh:/nova_api_discover_hosts.sh:ro"]}}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'swift_container_updater': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_updater.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'aodh_evaluator': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-evaluator:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_evaluator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_scheduler': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-scheduler:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro', u'/run:/run'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_object_server': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'cinder_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_proxy': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/run:/run', u'/srv/node:/srv/node', u'/dev:/dev'], 'net': u'host', 'restart': u'always'}, 'neutron_dhcp': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-09-20.1', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron', u'/run/netns:/run/netns:shared', u'/var/lib/openstack:/var/lib/openstack', u'/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', u'/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'heat_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/log/containers/heat:/var/log/heat', u'/var/log/containers/httpd/heat-api:/var/log/httpd', u'/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_object_auditor': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_auditor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'neutron_metadata_agent': {'start_order': 10, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-metadata-agent:2018-09-20.1', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/var/lib/neutron:/var/lib/neutron'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'ceilometer_agent_central': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/ceilometer_agent_central.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'privileged': False, 'restart': 
u'always'}, 'keystone_refresh': {'action': u'exec', 'start_order': 1, 'command': [u'keystone', u'pkill', u'--signal', u'USR1', u'httpd'], 'user': u'root'}, 'swift_account_replicator': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_replicator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'aodh_notifier': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-notifier:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_notifier.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_api_cron': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_consoleauth': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-consoleauth:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', 
u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_consoleauth.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'glance_api': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-glance-api:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/glance:/var/log/glance', u'/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/glance:/var/lib/glance:slave'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_account_reaper': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_reaper.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'ceilometer_agent_notification': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', 
u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/ceilometer_agent_notification.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro', u'/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src-panko:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_vnc_proxy': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-novncproxy:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_rsync': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_rsync.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_api': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': 
True, 'restart': u'always'}, 'aodh_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh', u'/var/log/containers/httpd/aodh-api:/var/log/httpd', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_metadata': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'nova', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'heat_engine': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_container_server': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'swift_object_replicator': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_replicator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'neutron_l3_agent': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-09-20.1', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_l3_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron', u'/run/netns:/run/netns:shared', u'/var/lib/openstack:/var/lib/openstack', u'/var/lib/neutron/keepalived_wrapper:/usr/local/bin/keepalived:ro', u'/var/lib/neutron/l3_haproxy_wrapper:/usr/local/bin/haproxy:ro', u'/var/lib/neutron/dibbler_wrapper:/usr/local/bin/dibbler_client:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'cinder_scheduler': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/cinder:/var/log/cinder'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_conductor': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-conductor:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_account_server': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'sahara_api': {'image': u'192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/sahara-api.json:/var/lib/kolla/config_files/config.json', 
u'/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/var/lib/sahara:/var/lib/sahara', u'/var/log/containers/sahara:/var/log/sahara'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'sahara_engine': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-sahara-engine:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/sahara-engine.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro', u'/var/lib/sahara:/var/lib/sahara', u'/var/log/containers/sahara:/var/log/sahara'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'neutron_ovs_agent': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-09-20.1', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'cinder_api_cron': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/cinder:/var/log/cinder', 
u'/var/log/containers/httpd/cinder-api:/var/log/httpd'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_account_auditor': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_auditor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'swift_container_replicator': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_replicator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'swift_object_updater': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_updater.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'swift_object_expirer': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_expirer.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'heat_api_cron': {'image': u'192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/log/containers/httpd/heat-api:/var/log/httpd', u'/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_container_auditor': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_auditor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'panko_api': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-panko-api:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/panko:/var/log/panko', u'/var/log/containers/httpd/panko-api:/var/log/httpd', 
u'/var/lib/kolla/config_files/panko_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'aodh_listener': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-listener:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_listener.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'neutron_api': {'start_order': 0, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/log/containers/httpd/neutron-api:/var/log/httpd', u'/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'heat_api_cfn': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-09-20.1', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/log/containers/httpd/heat-api-cfn:/var/log/httpd', u'/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat_api_cfn/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'logrotate_crond': {'image': 
u'192.168.24.1:8787/rhosp14/openstack-cron:2018-09-20.1', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers:/var/log/containers'], 'net': u'none', 'privileged': True, 'restart': u'always'}}, 'key': u'step_4'}) => {"changed": false, "item": {"key": "step_4", "value": {"aodh_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd", "", ""]}, "aodh_evaluator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-evaluator:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_evaluator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "aodh_listener": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-listener:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_listener.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "aodh_notifier": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-notifier:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_notifier.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "ceilometer_agent_central": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_central.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "ceilometer_agent_notification": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_notification.json:/var/lib/kolla/config_files/config.json:ro", 
"/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src-panko:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd", "", ""]}, "cinder_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_scheduler": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder"]}, "glance_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-glance-api:2018-09-20.1", "net": "host", 
"privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/glance:/var/log/glance", "/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/glance:/var/lib/glance:slave"]}, "heat_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api:/var/log/httpd", "/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro", "", ""]}, "heat_api_cfn": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api-cfn:/var/log/httpd", "/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api_cfn/:/var/lib/kolla/config_files/src:ro", "", ""]}, "heat_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-heat-api:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api:/var/log/httpd", "/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro"]}, "heat_engine": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat/:/var/lib/kolla/config_files/src:ro"]}, "keystone_refresh": {"action": "exec", "command": ["keystone", "pkill", "--signal", "USR1", "httpd"], "start_order": 1, "user": "root"}, "logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-09-20.1", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}, "neutron_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server-opendaylight:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "start_order": 0, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd", 
"/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro"]}, "neutron_dhcp": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-09-20.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron", "/run/netns:/run/netns:shared", "/var/lib/openstack:/var/lib/openstack", "/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro", "/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro"]}, "neutron_l3_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-09-20.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_l3_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron", "/run/netns:/run/netns:shared", "/var/lib/openstack:/var/lib/openstack", "/var/lib/neutron/keepalived_wrapper:/usr/local/bin/keepalived:ro", "/var/lib/neutron/l3_haproxy_wrapper:/usr/local/bin/haproxy:ro", "/var/lib/neutron/dibbler_wrapper:/usr/local/bin/dibbler_client:ro"]}, "neutron_metadata_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-metadata-agent:2018-09-20.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/var/lib/neutron:/var/lib/neutron"]}, "neutron_ovs_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-09-20.1", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch"]}, "nova_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro", "", ""]}, "nova_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", 
"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_conductor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-conductor:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_consoleauth": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-consoleauth:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_consoleauth.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_metadata": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-09-20.1", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "user": "nova", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_scheduler": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": 
{"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-scheduler:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro", "/run:/run"]}, "nova_vnc_proxy": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-novncproxy:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "panko_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd", "/var/lib/kolla/config_files/panko_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src:ro", "", ""]}, "sahara_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/sahara-api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "sahara_engine": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-sahara-engine:2018-09-20.1", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/sahara-engine.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "swift_account_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-20.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_reaper": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-20.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_reaper.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, 
"swift_account_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-20.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-09-20.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-09-20.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-09-20.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-09-20.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_updater": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-09-20.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_updater.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-20.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, 
"swift_object_expirer": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-20.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_expirer.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-20.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-20.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_updater": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-20.1", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_updater.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_proxy": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-09-20.1", "net": "host", "restart": "always", "start_order": 2, "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/run:/run", "/srv/node:/srv/node", "/dev:/dev"]}, "swift_rsync": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-09-20.1", "net": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_rsync.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev"]}}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {}, 'key': u'step_6'}) => {"changed": false, "item": {"key": "step_6", "value": {}}, "skip_reason": "Conditional result was False"} > >TASK [Create /var/lib/kolla/config_files directory] **************************** >Friday 21 September 2018 08:42:51 -0400 (0:00:00.640) 0:26:14.063 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Write kolla config json files] ******************************************* >Friday 21 September 2018 08:42:51 -0400 (0:00:00.119) 0:26:14.183 ****** >skipping: [ceph-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -s -n'}, 'key': u'/var/lib/kolla/config_files/logrotate-crond.json'}) 
=> {"changed": false, "item": {"key": "/var/lib/kolla/config_files/logrotate-crond.json", "value": {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -s -n'}, 'key': u'/var/lib/kolla/config_files/logrotate-crond.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/logrotate-crond.json", "value": {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/sbin/iscsid -f'}, 'key': u'/var/lib/kolla/config_files/iscsid.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/iscsid.json", "value": {"command": "/usr/sbin/iscsid -f", "config_files": [{"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/sbin/libvirtd', 'permissions': [{'owner': u'nova:nova', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/nova_libvirt.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_libvirt.json", "value": {"command": "/usr/sbin/libvirtd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "nova:nova", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ssh/', 'owner': u'root', 'perm': u'0600', 'source': u'/host-ssh/ssh_host_*_key'}], 'command': u'/usr/sbin/sshd -D -p 2022'}, 'key': u'/var/lib/kolla/config_files/nova-migration-target.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova-migration-target.json", "value": {"command": "/usr/sbin/sshd -D -p 2022", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ssh/", "owner": "root", "perm": "0600", "source": "/host-ssh/ssh_host_*_key"}]}}, "skip_reason": "Conditional result was False"} 
>skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/virtlogd --config /etc/libvirt/virtlogd.conf'}, 'key': u'/var/lib/kolla/config_files/nova_virtlogd.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_virtlogd.json", "value": {"command": "/usr/sbin/virtlogd --config /etc/libvirt/virtlogd.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/neutron_ovs_agent_launcher.sh', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/neutron_ovs_agent.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_ovs_agent.json", "value": {"command": "/neutron_ovs_agent_launcher.sh", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/nova-compute ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}, {'owner': u'nova:nova', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/nova_compute.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_compute.json", "value": {"command": "/usr/bin/nova-compute ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}, {"owner": "nova:nova", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /var/log/ceilometer/compute.log'}, 'key': u'/var/lib/kolla/config_files/ceilometer_agent_compute.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/ceilometer_agent_compute.json", "value": {"command": "/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /var/log/ceilometer/compute.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": 
"Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -s -n'}, 'key': u'/var/lib/kolla/config_files/logrotate-crond.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/logrotate-crond.json", "value": {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': u'/var/lib/kolla/config_files/keystone.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/keystone.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}, {'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/bin/cinder-backup --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/lib/cinder', 'recurse': True}, {'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/cinder_backup.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/cinder_backup.json", "value": {"command": "/usr/bin/cinder-backup --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/lib/cinder", "recurse": true}, {"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': u'/var/lib/kolla/config_files/swift_proxy_tls_proxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_proxy_tls_proxy.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': 
u'/usr/bin/swift-account-auditor /etc/swift/account-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_account_auditor.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_account_auditor.json", "value": {"command": "/usr/bin/swift-account-auditor /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-replicator /etc/swift/account-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_account_replicator.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_account_replicator.json", "value": {"command": "/usr/bin/swift-account-replicator /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/aodh-notifier', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/aodh_notifier.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/aodh_notifier.json", "value": {"command": "/usr/bin/aodh-notifier", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-scheduler ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/nova_scheduler.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_scheduler.json", "value": {"command": "/usr/bin/nova-scheduler ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/heat_api_cron.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/heat_api_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 
'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-dhcp-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/dhcp_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-dhcp-agent --log-file=/var/log/neutron/dhcp-agent.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}, {'owner': u'neutron:neutron', 'path': u'/var/lib/neutron', 'recurse': True}, {'owner': u'neutron:neutron', 'path': u'/etc/pki/tls/certs/neutron.crt'}, {'owner': u'neutron:neutron', 'path': u'/etc/pki/tls/private/neutron.key'}]}, 'key': u'/var/lib/kolla/config_files/neutron_dhcp.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_dhcp.json", "value": {"command": "/usr/bin/neutron-dhcp-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/dhcp_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-dhcp-agent --log-file=/var/log/neutron/dhcp-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/etc/pki/tls/certs/neutron.crt"}, {"owner": "neutron:neutron", "path": "/etc/pki/tls/private/neutron.key"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg', 'permissions': [{'owner': u'haproxy:haproxy', 'path': u'/var/lib/haproxy', 'recurse': True}, {'owner': u'haproxy:haproxy', 'path': u'/etc/pki/tls/certs/haproxy/*', 'optional': True, 'perm': u'0600'}, {'owner': u'haproxy:haproxy', 'path': u'/etc/pki/tls/private/haproxy/*', 'optional': True, 'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/haproxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/haproxy.json", "value": {"command": "/usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg", "config_files": [{"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "haproxy:haproxy", "path": "/var/lib/haproxy", "recurse": true}, {"optional": true, "owner": "haproxy:haproxy", "path": "/etc/pki/tls/certs/haproxy/*", "perm": "0600"}, {"optional": true, "owner": "haproxy:haproxy", "path": "/etc/pki/tls/private/haproxy/*", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': 
u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/nova_api_cron.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_api_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/bootstrap_host_exec gnocchi_api /usr/bin/gnocchi-upgrade --sacks-number=128', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/gnocchi_db_sync.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/gnocchi_db_sync.json", "value": {"command": "/usr/bin/bootstrap_host_exec gnocchi_api /usr/bin/gnocchi-upgrade --sacks-number=128", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-reaper /etc/swift/account-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_account_reaper.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_account_reaper.json", "value": {"command": "/usr/bin/swift-account-reaper /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/sahara-engine --config-file /etc/sahara/sahara.conf', 'permissions': [{'owner': u'sahara:sahara', 'path': u'/var/lib/sahara', 'recurse': True}, {'owner': u'sahara:sahara', 'path': u'/var/log/sahara', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/sahara-engine.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/sahara-engine.json", "value": {"command": "/usr/bin/sahara-engine --config-file /etc/sahara/sahara.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "sahara:sahara", "path": "/var/lib/sahara", "recurse": true}, {"owner": "sahara:sahara", "path": "/var/log/sahara", "recurse": 
true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/libqb/force-filesystem-sockets', 'owner': u'root', 'perm': u'0644', 'source': u'/dev/null'}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/pacemaker_remoted', 'permissions': [{'owner': u'redis:redis', 'path': u'/var/run/redis', 'recurse': True}, {'owner': u'redis:redis', 'path': u'/var/lib/redis', 'recurse': True}, {'owner': u'redis:redis', 'path': u'/var/log/redis', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/redis.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/redis.json", "value": {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "redis:redis", "path": "/var/run/redis", "recurse": true}, {"owner": "redis:redis", "path": "/var/lib/redis", "recurse": true}, {"owner": "redis:redis", "path": "/var/log/redis", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-novncproxy --web /usr/share/novnc/ ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}, {'owner': u'root:nova', 'path': u'/etc/pki/tls/private/novnc_proxy.key'}]}, 'key': u'/var/lib/kolla/config_files/nova_vnc_proxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_vnc_proxy.json", "value": {"command": "/usr/bin/nova-novncproxy --web /usr/share/novnc/ ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}, {"owner": "root:nova", "path": "/etc/pki/tls/private/novnc_proxy.key"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/glance-api --config-file /usr/share/glance/glance-api-dist.conf --config-file /etc/glance/glance-api.conf', 'permissions': [{'owner': u'glance:glance', 'path': u'/var/lib/glance', 'recurse': True}, {'owner': u'glance:glance', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/glance_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/glance_api.json", "value": {"command": "/usr/bin/glance-api --config-file /usr/share/glance/glance-api-dist.conf --config-file /etc/glance/glance-api.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, 
{"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "glance:glance", "path": "/var/lib/glance", "recurse": true}, {"owner": "glance:glance", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-auditor /etc/swift/container-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_container_auditor.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_container_auditor.json", "value": {"command": "/usr/bin/swift-container-auditor /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-panko/*', 'preserve_properties': True}], 'command': u'/usr/bin/ceilometer-agent-notification --logfile /var/log/ceilometer/agent-notification.log', 'permissions': [{'owner': u'root:ceilometer', 'path': u'/etc/panko', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/ceilometer_agent_notification.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/ceilometer_agent_notification.json", "value": {"command": "/usr/bin/ceilometer-agent-notification --logfile /var/log/ceilometer/agent-notification.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-panko/*"}], "permissions": [{"owner": "root:ceilometer", "path": "/etc/panko", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-expirer /etc/swift/object-expirer.conf'}, 'key': u'/var/lib/kolla/config_files/swift_object_expirer.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_object_expirer.json", "value": {"command": "/usr/bin/swift-object-expirer /etc/swift/object-expirer.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/ceilometer-polling --polling-namespaces central --logfile /var/log/ceilometer/central.log'}, 'key': u'/var/lib/kolla/config_files/ceilometer_agent_central.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/ceilometer_agent_central.json", "value": {"command": "/usr/bin/ceilometer-polling --polling-namespaces central --logfile /var/log/ceilometer/central.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": 
"/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/heat_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/heat_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/rsync --daemon --no-detach --config=/etc/rsyncd.conf'}, 'key': u'/var/lib/kolla/config_files/swift_rsync.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_rsync.json", "value": {"command": "/usr/bin/rsync --daemon --no-detach --config=/etc/rsyncd.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-server /etc/swift/account-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_account_server.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_account_server.json", "value": {"command": "/usr/bin/swift-account-server /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/cinder_api_cron.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/cinder_api_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-proxy-server /etc/swift/proxy-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_proxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_proxy.json", "value": {"command": "/usr/bin/swift-proxy-server /etc/swift/proxy-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} 
>skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-updater /etc/swift/container-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_container_updater.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_container_updater.json", "value": {"command": "/usr/bin/swift-container-updater /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/xinetd -dontfork'}, 'key': u'/var/lib/kolla/config_files/clustercheck.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/clustercheck.json", "value": {"command": "/usr/sbin/xinetd -dontfork", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/libqb/force-filesystem-sockets', 'owner': u'root', 'perm': u'0644', 'source': u'/dev/null'}, {'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/sbin/pacemaker_remoted', 'permissions': [{'owner': u'mysql:mysql', 'path': u'/var/log/mysql', 'recurse': True}, {'owner': u'mysql:mysql', 'path': u'/etc/pki/tls/certs/mysql.crt', 'optional': True, 'perm': u'0600'}, {'owner': u'mysql:mysql', 'path': u'/etc/pki/tls/private/mysql.key', 'optional': True, 'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/mysql.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/mysql.json", "value": {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "mysql:mysql", "path": "/var/log/mysql", "recurse": true}, {"optional": true, "owner": "mysql:mysql", "path": "/etc/pki/tls/certs/mysql.crt", "perm": "0600"}, {"optional": true, "owner": "mysql:mysql", "path": "/etc/pki/tls/private/mysql.key", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/nova_placement.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_placement.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": 
"nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/sahara-api --config-file /etc/sahara/sahara.conf', 'permissions': [{'owner': u'sahara:sahara', 'path': u'/var/lib/sahara', 'recurse': True}, {'owner': u'sahara:sahara', 'path': u'/var/log/sahara', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/sahara-api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/sahara-api.json", "value": {"command": "/usr/bin/sahara-api --config-file /etc/sahara/sahara.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "sahara:sahara", "path": "/var/lib/sahara", "recurse": true}, {"owner": "sahara:sahara", "path": "/var/log/sahara", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/aodh_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/aodh_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'keystone:keystone', 'path': u'/var/log/keystone', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/keystone_cron.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/keystone_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "keystone:keystone", "path": "/var/log/keystone", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': u'/var/lib/kolla/config_files/neutron_server_tls_proxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_server_tls_proxy.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-replicator /etc/swift/object-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_object_replicator.json'}) => {"changed": false, "item": {"key": 
"/var/lib/kolla/config_files/swift_object_replicator.json", "value": {"command": "/usr/bin/swift-object-replicator /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-conductor ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/nova_conductor.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_conductor.json", "value": {"command": "/usr/bin/nova-conductor ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/heat_api_cfn.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/heat_api_cfn.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-api-metadata ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/nova_metadata.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_metadata.json", "value": {"command": "/usr/bin/nova-api-metadata ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/neutron_ovs_agent_launcher.sh', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/neutron_ovs_agent.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_ovs_agent.json", "value": {"command": "/neutron_ovs_agent_launcher.sh", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/libqb/force-filesystem-sockets', 'owner': u'root', 'perm': u'0644', 'source': u'/dev/null'}, {'dest': 
u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/sbin/pacemaker_remoted', 'permissions': [{'owner': u'rabbitmq:rabbitmq', 'path': u'/var/lib/rabbitmq', 'recurse': True}, {'owner': u'rabbitmq:rabbitmq', 'path': u'/var/log/rabbitmq', 'recurse': True}, {'owner': u'rabbitmq:rabbitmq', 'path': u'/etc/pki/tls/certs/rabbitmq.crt', 'optional': True, 'perm': u'0600'}, {'owner': u'rabbitmq:rabbitmq', 'path': u'/etc/pki/tls/private/rabbitmq.key', 'optional': True, 'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/rabbitmq.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/rabbitmq.json", "value": {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "rabbitmq:rabbitmq", "path": "/var/lib/rabbitmq", "recurse": true}, {"owner": "rabbitmq:rabbitmq", "path": "/var/log/rabbitmq", "recurse": true}, {"optional": true, "owner": "rabbitmq:rabbitmq", "path": "/etc/pki/tls/certs/rabbitmq.crt", "perm": "0600"}, {"optional": true, "owner": "rabbitmq:rabbitmq", "path": "/etc/pki/tls/private/rabbitmq.key", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-consoleauth ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/nova_consoleauth.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_consoleauth.json", "value": {"command": "/usr/bin/nova-consoleauth ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-updater /etc/swift/object-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_object_updater.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_object_updater.json", "value": {"command": "/usr/bin/swift-object-updater /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini --config-dir /etc/neutron/conf.d/common --config-dir 
/etc/neutron/conf.d/neutron-server --log-file=/var/log/neutron/server.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/neutron_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_api.json", "value": {"command": "/usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-server --log-file=/var/log/neutron/server.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/cinder-scheduler --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/cinder_scheduler.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/cinder_scheduler.json", "value": {"command": "/usr/bin/cinder-scheduler --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/gnocchi-metricd', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/gnocchi_metricd.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/gnocchi_metricd.json", "value": {"command": "/usr/bin/gnocchi-metricd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-metadata-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/metadata_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-metadata-agent 
--log-file=/var/log/neutron/metadata-agent.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}, {'owner': u'neutron:neutron', 'path': u'/var/lib/neutron', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/neutron_metadata_agent.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_metadata_agent.json", "value": {"command": "/usr/bin/neutron-metadata-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/metadata_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-metadata-agent --log-file=/var/log/neutron/metadata-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-replicator /etc/swift/container-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_container_replicator.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_container_replicator.json", "value": {"command": "/usr/bin/swift-container-replicator /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/heat-engine --config-file /usr/share/heat/heat-dist.conf --config-file /etc/heat/heat.conf ', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/heat_engine.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/heat_engine.json", "value": {"command": "/usr/bin/heat-engine --config-file /usr/share/heat/heat-dist.conf --config-file /etc/heat/heat.conf ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/nova_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 
'preserve_properties': True}], 'command': u'/usr/bin/swift-object-server /etc/swift/object-server.conf', 'permissions': [{'owner': u'swift:swift', 'path': u'/var/cache/swift', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/swift_object_server.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_object_server.json", "value": {"command": "/usr/bin/swift-object-server /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "swift:swift", "path": "/var/cache/swift", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'stunnel /etc/stunnel/stunnel.conf', 'permissions': [{'owner': u'root:root', 'path': u'/etc/pki/tls/certs/redis.crt', 'optional': True, 'perm': u'0600'}, {'owner': u'root:root', 'path': u'/etc/pki/tls/private/redis.key', 'optional': True, 'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/redis_tls_proxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/redis_tls_proxy.json", "value": {"command": "stunnel /etc/stunnel/stunnel.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"optional": true, "owner": "root:root", "path": "/etc/pki/tls/certs/redis.crt", "perm": "0600"}, {"optional": true, "owner": "root:root", "path": "/etc/pki/tls/private/redis.key", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/gnocchi_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/gnocchi_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': 
u'/var/lib/kolla/config_files/cinder_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/cinder_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}, {'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/cinder_volume.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/cinder_volume.json", "value": {"command": "/usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'panko:panko', 'path': u'/var/log/panko', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/panko_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/panko_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "panko:panko", "path": "/var/log/panko", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-auditor /etc/swift/object-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_object_auditor.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_object_auditor.json", "value": {"command": "/usr/bin/swift-object-auditor /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-l3-agent --config-file /usr/share/neutron/neutron-dist.conf 
--config-dir /usr/share/neutron/l3_agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/l3_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-l3-agent --log-file=/var/log/neutron/l3-agent.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}, {'owner': u'neutron:neutron', 'path': u'/var/lib/neutron', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/neutron_l3_agent.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_l3_agent.json", "value": {"command": "/usr/bin/neutron-l3-agent --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/l3_agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/l3_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-l3-agent --log-file=/var/log/neutron/l3-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/aodh-listener', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/aodh_listener.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/aodh_listener.json", "value": {"command": "/usr/bin/aodh-listener", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-server /etc/swift/container-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_container_server.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_container_server.json", "value": {"command": "/usr/bin/swift-container-server /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/aodh-evaluator', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/aodh_evaluator.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/aodh_evaluator.json", "value": {"command": "/usr/bin/aodh-evaluator", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': 
u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': u'/var/lib/kolla/config_files/glance_api_tls_proxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/glance_api_tls_proxy.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/sbin/iscsid -f'}, 'key': u'/var/lib/kolla/config_files/iscsid.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/iscsid.json", "value": {"command": "/usr/sbin/iscsid -f", "config_files": [{"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/gnocchi-statsd', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/gnocchi_statsd.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/gnocchi_statsd.json", "value": {"command": "/usr/bin/gnocchi-statsd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'apache:apache', 'path': u'/var/log/horizon/', 'recurse': True}, {'owner': u'apache:apache', 'path': u'/etc/openstack-dashboard/', 'recurse': True}, {'owner': u'apache:apache', 'path': u'/usr/share/openstack-dashboard/openstack_dashboard/local/', 'recurse': False}, {'owner': u'apache:apache', 'path': u'/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.d/', 'recurse': False}]}, 'key': u'/var/lib/kolla/config_files/horizon.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/horizon.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "apache:apache", "path": "/var/log/horizon/", "recurse": true}, {"owner": "apache:apache", "path": "/etc/openstack-dashboard/", "recurse": true}, {"owner": "apache:apache", "path": "/usr/share/openstack-dashboard/openstack_dashboard/local/", "recurse": false}, {"owner": 
"apache:apache", "path": "/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.d/", "recurse": false}]}}, "skip_reason": "Conditional result was False"} > >TASK [Clean /var/lib/docker-puppet/docker-puppet-tasks*.json files] ************ >Friday 21 September 2018 08:42:52 -0400 (0:00:00.842) 0:26:15.025 ****** > [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > >TASK [Write docker-puppet-tasks json files] ************************************ >Friday 21 September 2018 08:42:52 -0400 (0:00:00.109) 0:26:15.135 ****** >skipping: [controller-0] => (item={'value': [{'puppet_tags': u'keystone_config,keystone_domain_config,keystone_endpoint,keystone_identity_provider,keystone_paste_ini,keystone_role,keystone_service,keystone_tenant,keystone_user,keystone_user_role,keystone_domain', 'config_volume': u'keystone_init_tasks', 'step_config': u'include ::tripleo::profile::base::keystone', 'config_image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-20.1'}], 'key': u'step_3'}) => {"changed": false, "item": {"key": "step_3", "value": [{"config_image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-09-20.1", "config_volume": "keystone_init_tasks", "puppet_tags": "keystone_config,keystone_domain_config,keystone_endpoint,keystone_identity_provider,keystone_paste_ini,keystone_role,keystone_service,keystone_tenant,keystone_user,keystone_user_role,keystone_domain", "step_config": "include ::tripleo::profile::base::keystone"}]}, "skip_reason": "Conditional result was False"} >skipping: [controller-0] => (item={'value': [{'puppet_tags': u'cinder_config,cinder_type,file,concat,file_line', 'config_volume': u'cinder_init_tasks', 'step_config': u'include ::tripleo::profile::base::cinder::api', 'config_image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-20.1', 'volumes': [u'/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro']}], 'key': u'step_4'}) => {"changed": false, "item": {"key": "step_4", "value": [{"config_image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-09-20.1", "config_volume": "cinder_init_tasks", "puppet_tags": "cinder_config,cinder_type,file,concat,file_line", "step_config": "include ::tripleo::profile::base::cinder::api", "volumes": ["/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro"]}]}, "skip_reason": "Conditional result was False"} > >TASK [Set host puppet debugging fact string] *********************************** >Friday 21 September 2018 08:42:52 -0400 (0:00:00.115) 0:26:15.250 ****** >skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} > >TASK [Write the config_step hieradata] ***************************************** >Friday 21 September 2018 08:42:52 -0400 (0:00:00.106) 0:26:15.356 ****** >changed: [controller-0] => {"changed": true, "checksum": "ee48fb03297eb703b1954c8852d0f67fab51dac1", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "e66511bcb9efc937174b88035d019e7b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": 
"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537533772.92-195717261995118/source", "state": "file", "uid": 0} >changed: [ceph-0] => {"changed": true, "checksum": "ee48fb03297eb703b1954c8852d0f67fab51dac1", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "e66511bcb9efc937174b88035d019e7b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537533772.98-41611449674741/source", "state": "file", "uid": 0} >changed: [compute-0] => {"changed": true, "checksum": "ee48fb03297eb703b1954c8852d0f67fab51dac1", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "e66511bcb9efc937174b88035d019e7b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1537533772.96-107373916762517/source", "state": "file", "uid": 0} > >TASK [Run puppet host configuration for step 4] ******************************** >Friday 21 September 2018 08:42:53 -0400 (0:00:00.708) 0:26:16.065 ****** >changed: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} >changed: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} >changed: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} > >TASK [Debug output for task which failed: Run puppet host configuration for step 4] *** >Friday 21 September 2018 08:43:14 -0400 (0:00:20.459) 0:26:36.525 ****** >ok: [controller-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 3.19 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_Controller4]/ensure: created", > "Notice: /Stage[main]/Snmp/File[snmpd.conf]/content: content changed '{md5}8307434bc8ed4e2a7df4928fb4232778' to '{md5}791a4f063445fffc2f2f4e02def34343'", > "Notice: /Stage[main]/Snmp/File[snmpd.sysconfig]/content: content changed '{md5}e914149a715dc82812a989314c026305' to '{md5}1483b6eecf3d4796dac2df692d603719'", > "Notice: /Stage[main]/Snmp/File[snmptrapd.conf]/content: content changed '{md5}913e2613413a45daa402d0fbdbaba676' to '{md5}0f92e52f70b5c64864657201eb9581bb'", > "Notice: /Stage[main]/Snmp/File[snmptrapd.sysconfig]/content: content changed '{md5}4496fd5e0e88e764e7beb1ae8f0dda6a' to '{md5}01f68b1480c1ec4e3cc125434dd612a0'", > "Notice: /Stage[main]/Snmp/Service[snmptrapd]: Triggered 'refresh' from 2 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Snmp/Snmp::Snmpv3_user[ro_snmp_user]/Exec[create-snmpv3-user-ro_snmp_user]/returns: executed successfully", > "Notice: /Stage[main]/Snmp/Service[snmpd]/ensure: ensure changed 'stopped' to 'running'", > "Notice: Applied catalog in 9.15 seconds", > "Changes:", > " Total: 8", > 
"Events:", > " Success: 8", > "Resources:", > " Corrective change: 1", > " Restarted: 1", > " Total: 225", > " Out of sync: 8", > " Changed: 8", > "Time:", > " Concat file: 0.00", > " File line: 0.00", > " Schedule: 0.00", > " Anchor: 0.00", > " Cron: 0.00", > " User: 0.00", > " Package manifest: 0.00", > " Sysctl: 0.00", > " Sysctl runtime: 0.00", > " Augeas: 0.02", > " Firewall: 0.02", > " File: 0.24", > " Pcmk property: 0.37", > " Pcmk resource default: 0.39", > " Package: 0.42", > " Service: 0.57", > " Total: 11.70", > " Last run: 1537533793", > " Config retrieval: 3.77", > " Exec: 5.89", > " Filebucket: 0.00", > " Concat fragment: 0.00", > "Version:", > " Config: 1537533780", > " Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::String instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/snmp/manifests/params.pp\", 310]:[\"/var/lib/tripleo-config/puppet_step_config.pp\", 40]", > "Warning: tag is a metaparam; this value will inherit to all contained resources in the tripleo::firewall::rule definition", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 148]:" > ] >} >ok: [compute-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for compute-0.localdomain in environment production in 2.58 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_Compute4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Compute::Libvirt_guests/File[/etc/systemd/system/virt-guest-shutdown.target.wants]/ensure: created", > "Notice: /Stage[main]/Snmp/File[snmpd.conf]/content: content changed '{md5}8307434bc8ed4e2a7df4928fb4232778' to '{md5}e53024c9f88d61f121c931b888fad8aa'", > "Notice: /Stage[main]/Snmp/File[snmpd.sysconfig]/content: content changed '{md5}e914149a715dc82812a989314c026305' to '{md5}1483b6eecf3d4796dac2df692d603719'", > "Notice: /Stage[main]/Snmp/File[snmptrapd.conf]/content: content changed '{md5}913e2613413a45daa402d0fbdbaba676' to '{md5}0f92e52f70b5c64864657201eb9581bb'", > "Notice: /Stage[main]/Snmp/File[snmptrapd.sysconfig]/content: content changed '{md5}4496fd5e0e88e764e7beb1ae8f0dda6a' to '{md5}01f68b1480c1ec4e3cc125434dd612a0'", > "Notice: /Stage[main]/Snmp/Service[snmptrapd]: Triggered 'refresh' from 2 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Compute::Libvirt_guests/Systemd::Unit_file[paunch-container-shutdown.service]/File[/etc/systemd/system/virt-guest-shutdown.target.wants/paunch-container-shutdown.service]/ensure: created", > "Notice: /Stage[main]/Systemd::Systemctl::Daemon_reload/Exec[systemctl-daemon-reload]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Nova::Compute::Libvirt_guests/File_line[/etc/sysconfig/libvirt-guests ON_BOOT]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt_guests/File_line[/etc/sysconfig/libvirt-guests ON_SHUTDOWN]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt_guests/File_line[/etc/sysconfig/libvirt-guests SHUTDOWN_TIMEOUT]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt_guests/Nova::Generic_service[libvirt-guests]/Service[nova-libvirt-guests]: Triggered 'refresh' from 3 events", > "Notice: /Stage[main]/Nova::Deps/Anchor[nova::service::end]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Snmp/Snmp::Snmpv3_user[ro_snmp_user]/Exec[create-snmpv3-user-ro_snmp_user]/returns: executed successfully", > "Notice: /Stage[main]/Snmp/Service[snmpd]/ensure: ensure changed 'stopped' to 'running'", > "Notice: Applied catalog in 6.94 seconds", > "Changes:", > " Total: 13", > "Events:", > " Success: 13", > "Resources:", > " Corrective change: 1", > " Changed: 13", > " Out of sync: 13", > " Total: 173", > " Restarted: 4", > "Time:", > " Filebucket: 0.00", > " Concat fragment: 0.00", > " Concat file: 0.00", > " Schedule: 0.00", > " Cron: 0.00", > " Package manifest: 0.00", > " Anchor: 0.00", > " Sysctl runtime: 0.00", > " Sysctl: 0.01", > " File line: 0.01", > " Firewall: 0.01", > " Augeas: 0.02", > " File: 0.21", > " Package: 0.32", > " Service: 0.48", > " Last run: 1537533790", > " Config retrieval: 
2.97", > " Exec: 5.24", > " Total: 9.26", > "Version:", > " Config: 1537533781", > " Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: ModuleLoader: module 'nova' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::String instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/snmp/manifests/params.pp\", 310]:[\"/var/lib/tripleo-config/puppet_step_config.pp\", 39]", > "Warning: tag is a metaparam; this value will inherit to all contained resources in the tripleo::firewall::rule definition", > "Warning: Unknown variable: 'service_ensure'. at /etc/puppet/modules/nova/manifests/generic_service.pp:68:20", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 148]:" > ] >} >ok: [ceph-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for ceph-0.localdomain in environment production in 2.47 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_CephStorage4]/ensure: created", > "Notice: /Stage[main]/Snmp/File[snmpd.conf]/content: content changed '{md5}8307434bc8ed4e2a7df4928fb4232778' to '{md5}b83ceb79c465111180fdb40c20912bae'", > "Notice: /Stage[main]/Snmp/File[snmpd.sysconfig]/content: content changed '{md5}e914149a715dc82812a989314c026305' to '{md5}1483b6eecf3d4796dac2df692d603719'", > "Notice: /Stage[main]/Snmp/File[snmptrapd.conf]/content: content changed '{md5}913e2613413a45daa402d0fbdbaba676' to '{md5}0f92e52f70b5c64864657201eb9581bb'", > "Notice: /Stage[main]/Snmp/File[snmptrapd.sysconfig]/content: content changed '{md5}4496fd5e0e88e764e7beb1ae8f0dda6a' to '{md5}01f68b1480c1ec4e3cc125434dd612a0'", > "Notice: /Stage[main]/Snmp/Service[snmptrapd]: Triggered 'refresh' from 2 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Snmp/Snmp::Snmpv3_user[ro_snmp_user]/Exec[create-snmpv3-user-ro_snmp_user]/returns: executed successfully", > "Notice: /Stage[main]/Snmp/Service[snmpd]/ensure: ensure changed 'stopped' to 'running'", > "Notice: Applied catalog in 7.03 seconds", > "Changes:", > " Total: 8", > "Events:", > " Success: 8", > "Resources:", > " Corrective change: 1", > " Restarted: 1", > " Total: 143", > " Out of sync: 8", > " Changed: 8", > "Time:", > " Concat file: 0.00", > " Anchor: 0.00", > " Schedule: 0.00", > " Cron: 0.00", > " Package manifest: 0.00", > " Sysctl runtime: 0.00", > " Sysctl: 0.01", > " Firewall: 0.01", > " Augeas: 0.02", > " File: 0.16", > " Package: 0.29", > " Service: 0.57", > " Last run: 1537533790", > " Config retrieval: 2.88", > " Exec: 5.23", > " Concat fragment: 0.00", > " Filebucket: 0.00", > " Total: 9.16", > "Version:", > " Config: 1537533780", > " Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. 
>TASK [Run docker-puppet tasks (generate config) during step 4] *****************
>Friday 21 September 2018 08:43:14 -0400 (0:00:00.106) 0:26:36.733 ******
>skipping: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>
>TASK [Debug output for task which failed: Run docker-puppet tasks (generate config) during step 4] ***
>Friday 21 September 2018 08:43:14 -0400 (0:00:00.106) 0:26:36.839 ******
>skipping: [controller-0] => {}
>skipping: [compute-0] => {}
>skipping: [ceph-0] => {}
>
>TASK [Start containers for step 4] *********************************************
>Friday 21 September 2018 08:43:14 -0400 (0:00:00.102) 0:26:36.942 ******
>ok: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>ok: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>
>ok: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
>
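"Start containers for step 4" also runs under no_log: true, so the only visibility into it is the debug task that follows. Its stdout interleaves two things: docker pulls from the undercloud registry (192.168.24.1:8787), where "Already exists" marks base layers shared with previously pulled images, and bare 64-character container IDs printed as each step-4 container is created. One of those pulls can be reproduced by hand against the same registry; a sketch, with the image reference taken from the log (the deployment itself drives Docker through its own tooling rather than this code):

    import subprocess

    # Pull one of the images the step-4 container startup pulled; requires
    # Docker and network access to the undercloud registry in this log.
    image = "192.168.24.1:8787/rhosp14/openstack-aodh-evaluator:2018-09-20.1"
    subprocess.run(["docker", "pull", image], check=True)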
", > "2018-09-20.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-aodh-evaluator", > "378837c0e24a: Already exists", > "e17262bc2341: Already exists", > "86a0e618a180: Already exists", > "dfa58d50e0a3: Already exists", > "1877222ec238: Already exists", > "0b3274d39c69: Pulling fs layer", > "0b3274d39c69: Download complete", > "0b3274d39c69: Pull complete", > "Digest: sha256:9c1318d58b430b4ac20f6dedd3ba50af301c643b5009c6de54ce3312b60559e2", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-aodh-evaluator:2018-09-20.1", > "", > "stderr: ", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-aodh-listener ... ", > "2018-09-20.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-aodh-listener", > "6f3ca69cc690: Pulling fs layer", > "6f3ca69cc690: Download complete", > "6f3ca69cc690: Pull complete", > "Digest: sha256:1b2dbf44e247184783365e6f57c2cf4937c7bed876e7afccbd83bd6788082fa7", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-aodh-listener:2018-09-20.1", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-aodh-notifier ... ", > "2018-09-20.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-aodh-notifier", > "2f77f1ad8b0b: Pulling fs layer", > "2f77f1ad8b0b: Verifying Checksum", > "2f77f1ad8b0b: Download complete", > "2f77f1ad8b0b: Pull complete", > "Digest: sha256:56bbb6fadd1ed919d5879c1807d945204f734621af290578eeaefa772697d849", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-aodh-notifier:2018-09-20.1", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-neutron-metadata-agent ... ", > "2018-09-20.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-neutron-metadata-agent", > "763394b9c1e7: Already exists", > "06529574484a: Pulling fs layer", > "06529574484a: Verifying Checksum", > "06529574484a: Download complete", > "06529574484a: Pull complete", > "Digest: sha256:6e5c0f73ecdbcd16e5b3e1c1ff15ca683099d6475f98d3b8ebb17f0d906904ff", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-neutron-metadata-agent:2018-09-20.1", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent ... ", > "2018-09-20.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent", > "173aae437f5e: Pulling fs layer", > "173aae437f5e: Download complete", > "173aae437f5e: Pull complete", > "Digest: sha256:91bd7c19c7670afd97c29d6ff28d701e00df9b6d27c1e35732fb9cfc4713c1ab", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-09-20.1", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-nova-conductor ... ", > "2018-09-20.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-nova-conductor", > "2d54cceaa5bd: Already exists", > "5343244ebeaf: Pulling fs layer", > "5343244ebeaf: Verifying Checksum", > "5343244ebeaf: Download complete", > "5343244ebeaf: Pull complete", > "Digest: sha256:9822fb7b904505c9b578718a7aa3d9b546978e46004479fcb1d5713c1cb57602", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-nova-conductor:2018-09-20.1", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-nova-consoleauth ... 
", > "2018-09-20.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-nova-consoleauth", > "58828e2f10b1: Pulling fs layer", > "58828e2f10b1: Verifying Checksum", > "58828e2f10b1: Download complete", > "58828e2f10b1: Pull complete", > "Digest: sha256:48308d45c57c5a9d6e815b35f9bedf00b0599c02228ceb65f51f4c28655541cb", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-nova-consoleauth:2018-09-20.1", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-nova-novncproxy ... ", > "2018-09-20.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-nova-novncproxy", > "1194a2afd360: Pulling fs layer", > "1194a2afd360: Verifying Checksum", > "1194a2afd360: Download complete", > "1194a2afd360: Pull complete", > "Digest: sha256:990e26b567ed83a8ec96a420d851f5a3706cb8e9b6c62c28d24ccf984916346b", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-nova-novncproxy:2018-09-20.1", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-nova-scheduler ... ", > "2018-09-20.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-nova-scheduler", > "c6642dba55d4: Pulling fs layer", > "c6642dba55d4: Verifying Checksum", > "c6642dba55d4: Download complete", > "c6642dba55d4: Pull complete", > "Digest: sha256:1e0feeca1ad485c9de83197bf6c4789712b774332c25db11fab734cdb51150b1", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-nova-scheduler:2018-09-20.1", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-sahara-engine ... ", > "2018-09-20.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-sahara-engine", > "e9b7a8f97fff: Already exists", > "df00bc56ae2a: Pulling fs layer", > "df00bc56ae2a: Verifying Checksum", > "df00bc56ae2a: Download complete", > "df00bc56ae2a: Pull complete", > "Digest: sha256:47e710f4e59566be575b86bf73f5d584262f433e1c0ed6ff969479611c5f0a5c", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-sahara-engine:2018-09-20.1", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-swift-container ... 
", > "2018-09-20.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-swift-container", > "d006a62af35a: Already exists", > "3ce0ede291d8: Pulling fs layer", > "3ce0ede291d8: Verifying Checksum", > "3ce0ede291d8: Download complete", > "3ce0ede291d8: Pull complete", > "Digest: sha256:9a5770f068ee818c9fd366a32a9fa08dad74b0e0138417b1e10549ca0b4f6613", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-swift-container:2018-09-20.1", > "stdout: b391d30766c2c341e630b196b388a84786fa4c75ff509b771582db6435ef2d5b", > "stdout: 14ab01939ef5262ffbdba6ffb95aac1ebaa7fdb6e200d931ec594d2b1c580901", > "stdout: f12073a39ec5bb737927e95886ddb8187b46063c9cc73868e591f2547e51ff07", > "stdout: 66fc498b9877f87e5b63ae38cbef423c3fce9382800a611d82622c9202c9de79", > "stdout: 399a58a4bb9411785394f2d99cb2a6478e25a1558e00825b744b34a097d46f12", > "stdout: 4f955e4ea33b262acf271d9dcc0dc223b860d5f3c1466945bfe915fcb6f2f610", > "stdout: 00e43d7bcb145d2f8a681fbe25ccb44c9aeb54367c3a2d2bc17455443896310c", > "stdout: 3edb1e1423add71fe9dc110028cd3651658379e8b0597a53e5953e1db258ca79", > "stdout: 23cbb1e741f3e8626577843e03ad7335dbbbe233f91cae53b2977289dec3a510", > "stdout: 8655490de3048b6d6829a1a146e05f6f2dffc0f80d91543c777ebb94d0041244", > "stdout: baec1e91be6b6e5becc9e33454c7321737ca5b6f3b3c8a3c64943c0cb2bc2b59", > "stdout: 533c71c57f25da6740ada5c0ebd6e60c91afc375c3c239656ac3ff6abca5c460", > "stdout: ef21d9c14a05350b6ab1bf5807053d1637c9c467dd460d191eeb80030125383f", > "stdout: a3ab17c6fbdf2177eb8719f36bd762555b37d8e8cea05a58f08087c74763a82a", > "stdout: 6dbb66bded45af1ab95087dc43a4eb78a04dea0d773a1b37e7304d4d89f8d30c", > "stdout: d5cdaabe624fcc2ba370ba9caac48737135f1689b96bce56fa9a268cc6aa953f", > "stdout: 0c1bab66b346c142059f3bea4cd2085a844fd442a2bf3e0279acdd4ecc461127", > "stdout: 6902f0d8a8451c7e8860816cf0490b2d2e4bc7468142880fa838c45f1b66132b", > "stdout: aef32f2f7158b92ee512f5b1279c2176fb99980960d53928039c28bcd36f42fe", > "stdout: fbe27fc0de3ec9586872c8e6738e1d6cc043b01e4153690f6416fe6558752b5a", > "stdout: a6c7525627b8787012748c2d99c1df59402686b72989b6561e9fada37a31d31a", > "stdout: 138358f903954b3ebad48c475c30bc63c0325361f064db4bf35bcd23024d971d", > "stdout: 2bf4a9a880b88a0ab67e2e823aaea9339ff114be5d366067ed96443078069e45", > "stdout: 54cc75cb1bd642274c5b9457a5c60014d44fc44838997a7165103f4c2fd2626f", > "stdout: b74694759b027b6efe54623fa5626d72459cb556d9bf8a3977a608993c447416", > "stdout: 4fbc6999a5cc550c98ffc6dd2fa959634fc37fb4c2b72f12e43acbc1a0d4c058", > "stdout: 5a23966ad9ba6027f2038cf4725cd96f769495eb3f12be23a123ac7ce685fb1c", > "stdout: ec2362455e29d9483f4a971ee064c80f2e28c161281a85087f8325efdc0b1265", > "stdout: b2e669b3c8dd6b07560eb311467d93462d84494d0933084f54f1900ef287cd08", > "stdout: e13a3a3c6508e174cb540dbced98763c18e30ca29a464b6f487f4879f39c5b6c", > "stdout: ebd2269b41065e301101c595b544fd0dbd268f9b9b0ff80bdff766436ac3e51b", > "stdout: c6ad98bc3e0de658bb9e7518c9ab7e222578548df36da54ca9f748f99d8e3b39", > "stdout: 17e97d6d47287d7bcfe86748d9de09628ef781c851fcce655ad038442ac5060d", > "stdout: 1c41dcc709650a68e134820a2e355f3de38bd4d7e9e65e5509c4574e14f8f532", > "stdout: a116b8f49a596b2fc6f3b976c771705e6464e8d878b2754d4dd2cd13d8ce346b", > "stdout: 077248950f230b055c563e8875092e1b41bc93304c3654555a83e825a4fa7c7f", > "stdout: ", > "stdout: 7748fa271614e814f24b0c1a7af7526e311a8ecae1d61e194cff1669ff84a06a", > "stdout: 63359127ebfa495beeddf831795acb0f2a7ae777f51ea7568d36cf0079e2861f", > "stdout: 1d326cd4d00e0832b0e906e1cb3bea78086e9c232762c63f5bee97964ce07538", > "stdout: 
904d06703b27714632490507b3543a5a4ee3390647a27143a88e6792751b7470", > "stdout: 005efa2424bb5273daeec8edd367e20acf7af43038bc4f75a71724b872cf8919", > "stdout: 6c8c46ec98d2307eb76d0a8f5b3b71630d0bcd3e8dcbb232d30fbf18b663dd6e", > "stdout: 56c96753a21e3158d5d7f96d601c41101d7a66adb1492239054e4d817847d768", > "stdout: df628ab9866b8b0952b7ce71639ae87905b47e2152e2d82eb63d8b3d0491ee64", > "stdout: 70789cd665f4184886de44f65589cc045c51fbd5626612243736468a0ff4df50" > ] >} >ok: [compute-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-ceilometer-compute ... ", > "2018-09-20.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-ceilometer-compute", > "378837c0e24a: Already exists", > "e17262bc2341: Already exists", > "86a0e618a180: Already exists", > "dfa58d50e0a3: Already exists", > "39327dc96373: Already exists", > "b134b7ef4b99: Pulling fs layer", > "b134b7ef4b99: Verifying Checksum", > "b134b7ef4b99: Download complete", > "b134b7ef4b99: Pull complete", > "Digest: sha256:3c6c616e552f28622a131bb0140d7bd37fede0832d4d7a0559642fe2c6d94ec5", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-ceilometer-compute:2018-09-20.1", > "", > "stderr: ", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent ... ", > "2018-09-20.1: Pulling from 192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent", > "763394b9c1e7: Already exists", > "173aae437f5e: Pulling fs layer", > "173aae437f5e: Download complete", > "173aae437f5e: Pull complete", > "Digest: sha256:91bd7c19c7670afd97c29d6ff28d701e00df9b6d27c1e35732fb9cfc4713c1ab", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-09-20.1", > "stdout: 4a5a693210bee265fb784a92b62b0323e3e5641a6a7d9e4902c00c9e0fa75e56", > "stdout: 138636e939b1e52893a0132cdac3036e41039cf15019a344ce354d2820e55e76", > "stdout: e02d67eb978f4406fbc4b71697081904790f858bbc3bfc7ae93d4726ae813762", > "stdout: Secret 8fedf068-bd95-11e8-ba69-5254006eda59 created", > "Secret value set", > "stdout: d53d04df4efa275afe9b8e11b71bdd3cbc0d0030927899734d1b997bfae1ac42", > "stdout: d8c488ff4272de17704d796605d38867895f646217bdb12e516beada2db09f44" > ] >} >ok: [ceph-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "stdout: ce811316fb3f82468d810fa1199e2d430376ee3e6307b844a4bd270b45b26442", > "", > "stderr: " > ] >} > >TASK [Check if /var/lib/docker-puppet/docker-puppet-tasks4.json exists] ******** >Friday 21 September 2018 08:43:44 -0400 (0:00:00.217) 0:27:06.634 ****** >ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >ok: [controller-0] => {"changed": false, "stat": {"atime": 1537532467.8883862, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "us-ascii", "checksum": "fc092b06b8d6fdc6d18320ab604ab9a6ebc1e1ae", "ctime": 1537532467.8933864, "dev": 64514, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 41943523, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "text/plain", "mode": "0600", "mtime": 1537532467.7203863, "nlink": 1, "path": "/var/lib/docker-puppet/docker-puppet-tasks4.json", "pw_name": "root", 
"readable": true, "rgrp": false, "roth": false, "rusr": true, "size": 321, "uid": 0, "version": "18446744073615036390", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}} > >TASK [Run docker-puppet tasks (bootstrap tasks) for step 4] ******************** >Friday 21 September 2018 08:43:44 -0400 (0:00:00.456) Connection is already closed. >0:27:07.090 ****** >skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >ok: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >