Bug 1180694 - rubygem-staypuft: neutron deployment with ceph as backend driver gets paused with error: Could not start Service[glance-api]: Execution of '/usr/bin/systemctl start openstack-glance-api' returned 1: Job for openstack-glance-api.service failed.
Summary: rubygem-staypuft: neutron deployment with ceph as backend driver gets paused...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: rubygem-staypuft
Version: unspecified
Hardware: Unspecified
OS: Unspecified
Severity: high
Priority: high
Target Milestone: ga
: Installer
Assignee: Brad P. Crochet
QA Contact: Alexander Chuzhoy
URL:
Whiteboard:
Depends On:
Blocks: 1177026 1181812
 
Reported: 2015-01-09 17:36 UTC by Alexander Chuzhoy
Modified: 2015-02-09 15:19 UTC (History)
5 users

Fixed In Version: ruby193-rubygem-staypuft-0.5.0-11.el7ost
Doc Type: Bug Fix
Doc Text:
Choosing a Ceph back end for glance caused the glance service to fail to start, and the deployment failed with the error: Could not start Service[glance-api]: Execution of '/usr/bin/systemctl start openstack-glance-api' returned 1: Job for openstack-glance-api.service failed. This happened because the ceph cluster was not yet up when glance was started. The fix disables the glance service during the initial run by setting the include_glance parameter to False, and the deployment now completes successfully.
Clone Of:
: 1181812 (view as bug list)
Environment:
Last Closed: 2015-02-09 15:19:08 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
foreman logs (17.61 MB, application/x-gzip)
2015-01-09 17:44 UTC, Alexander Chuzhoy
logs - controller (7.00 MB, application/x-gzip)
2015-01-09 17:45 UTC, Alexander Chuzhoy


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2015:0156 0 normal SHIPPED_LIVE Red Hat Enterprise Linux OpenStack Platform Installer Bug Fix Advisory 2015-02-09 20:13:39 UTC

Description Alexander Chuzhoy 2015-01-09 17:36:00 UTC
rubygem-staypuft:  neutron deployment with ceph as backend driver gets paused with error: Could not start Service[glance-api]: Execution of '/usr/bin/systemctl start openstack-glance-api' returned 1: Job for openstack-glance-api.service failed. 


Environment:
ruby193-rubygem-foreman_openstack_simplify-0.0.6-8.el7ost.noarch
openstack-foreman-installer-3.0.8-1.el7ost.noarch
ruby193-rubygem-staypuft-0.5.10-1.el7ost.noarch
rhel-osp-installer-client-0.5.5-1.el7ost.noarch
openstack-puppet-modules-2014.2.8-1.el7ost.noarch
rhel-osp-installer-0.5.5-1.el7ost.noarch


Steps to reproduce:
1. install rhel-osp-installer.
2. Create/start neutron deployment (ceph as driver backend) with 1 controller + 1 compute + 2 Ceph Storage Node (OSD).

Result:
The deployment gets paused with errors.
Checking the puppet reports I see the following errors reported on the controller:
1.
Could not start Service[glance-api]: Execution of '/usr/bin/systemctl start openstack-glance-api' returned 1: Job for openstack-glance-api.service failed. See 'systemctl status openstack-glance-api.service' and 'journalctl -xn' for details. Wrapped exception: Execution of '/usr/bin/systemctl start openstack-glance-api' returned 1: Job for openstack-glance-api.service failed. See 'systemctl status openstack-glance-api.service' and 'journalctl -xn' for details.
2.
Could not prefetch mongodb_replset provider 'mongo': Execution of '/usr/bin/mongo --quiet --eval printjson(rs.conf())' returned 1: 2015-01-09T11:57:47.731-0500 warning: Failed to connect to 127.0.0.1:27017, reason: errno:111 Connection refused 2015-01-09T11:57:47.732-0500 Error: couldn't connect to server 127.0.0.1:27017 (127.0.0.1), connection attempt failed at src/mongo/shell/mongo.js:146 exception: connect failed



Expected result:
Deployment with ceph backend driver should complete successfully.

Comment 1 Alexander Chuzhoy 2015-01-09 17:41:07 UTC
yaml from the controller:
---
classes:
  foreman::plugin::staypuft_client:
    staypuft_ssh_public_key: AAAAB3NzaC1yc2EAAAADAQABAAABAQD5k1PUc6AjuVLATboyzuhQqmS+XUbx9SPK4agShZ6xOgi6AbxI5GKWMKR3IBt14D32DUcc4TIFtm1mo8Fp8KwfBJI3fOARt0LnYyNhyoNblFT2epw7fw5jFwnuSYKggM4fczLS0Vz+a6/ko4TCU+avv3pz6IBqABGr/4cd6su209Zm5sRqYPdgKrDaKf4kahInxEyaL5tJGBePbXeAbo5foLO05sB4vH2p/h4LhOrbo3q7uJBQjPIvtVLkYBfZpEsZ9k/HFloFU2gglnY8Jy+wkFvWiVVH00OxXOJvuNHiXKqAPSMPC7QUPjgQYwCkp+O6wJQcDbpJFxVTLwYWR1aX
  foreman::puppet::agent::service:
    runmode: none
  quickstack::openstack_common: 
  quickstack::pacemaker::ceilometer:
    ceilometer_metering_secret: c9206fce6db087eb75b4f497e2cdcab3
    db_port: '27017'
    memcached_port: '11211'
    verbose: 'true'
  quickstack::pacemaker::cinder:
    backend_eqlx: 'false'
    backend_eqlx_name:
    - eqlx
    backend_glusterfs: false
    backend_glusterfs_name: glusterfs
    backend_iscsi: 'false'
    backend_iscsi_name: iscsi
    backend_nfs: 'false'
    backend_nfs_name: nfs
    backend_rbd: 'true'
    backend_rbd_name: rbd
    create_volume_types: true
    db_name: cinder
    db_ssl: false
    db_ssl_ca: ''
    db_user: cinder
    debug: false
    enabled: true
    eqlx_chap_login: []
    eqlx_chap_password: []
    eqlx_group_name: []
    eqlx_pool: []
    eqlx_use_chap: []
    glusterfs_shares: []
    log_facility: LOG_USER
    multiple_backends: 'false'
    nfs_mount_options: nosharecache
    nfs_shares:
    - ''
    qpid_heartbeat: '60'
    rbd_ceph_conf: /etc/ceph/ceph.conf
    rbd_flatten_volume_from_snapshot: 'false'
    rbd_max_clone_depth: '5'
    rbd_pool: volumes
    rbd_secret_uuid: 888b90d3-c675-4cea-9f30-889713a7128a
    rbd_user: volumes
    rpc_backend: cinder.openstack.common.rpc.impl_kombu
    san_ip: []
    san_login: []
    san_password: []
    san_thin_provision: []
    use_syslog: false
    verbose: 'true'
    volume: true
  quickstack::pacemaker::common:
    fence_ipmilan_address: ''
    fence_ipmilan_expose_lanplus: ''
    fence_ipmilan_hostlist: ''
    fence_ipmilan_host_to_address: []
    fence_ipmilan_interval: 60s
    fence_ipmilan_lanplus_options: ''
    fence_ipmilan_password: ''
    fence_ipmilan_username: ''
    fence_xvm_key_file_password: ''
    fence_xvm_manage_key_file: 'false'
    fence_xvm_port: ''
    fencing_type: disabled
    pacemaker_cluster_name: openstack
  quickstack::pacemaker::galera:
    galera_monitor_password: monitor_pass
    galera_monitor_username: monitor_user
    max_connections: '1024'
    mysql_root_password: c443b0024dbb909ae7330701ced94586
    open_files_limit: '-1'
    wsrep_cluster_members:
    - 192.168.0.7
    wsrep_cluster_name: galera_cluster
    wsrep_ssl: true
    wsrep_ssl_cert: /etc/pki/galera/galera.crt
    wsrep_ssl_key: /etc/pki/galera/galera.key
    wsrep_sst_method: rsync
    wsrep_sst_password: sst_pass
    wsrep_sst_username: sst_user
  quickstack::pacemaker::glance:
    backend: rbd
    db_name: glance
    db_ssl: false
    db_ssl_ca: ''
    db_user: glance
    debug: false
    filesystem_store_datadir: /var/lib/glance/images/
    log_facility: LOG_USER
    pcmk_fs_device: ''
    pcmk_fs_dir: /var/lib/glance/images
    pcmk_fs_manage: 'false'
    pcmk_fs_options: ''
    pcmk_fs_type: ''
    pcmk_swift_is_local: true
    rbd_store_pool: images
    rbd_store_user: images
    sql_idle_timeout: '3600'
    swift_store_auth_address: http://127.0.0.1:5000/v2.0/
    swift_store_key: ''
    swift_store_user: ''
    use_syslog: false
    verbose: 'true'
  quickstack::pacemaker::heat:
    db_name: heat
    db_ssl: false
    db_ssl_ca: ''
    db_user: heat
    debug: false
    log_facility: LOG_USER
    qpid_heartbeat: '60'
    use_syslog: false
    verbose: 'true'
  quickstack::pacemaker::horizon:
    horizon_ca: /etc/ipa/ca.crt
    horizon_cert: /etc/pki/tls/certs/PUB_HOST-horizon.crt
    horizon_key: /etc/pki/tls/private/PUB_HOST-horizon.key
    keystone_default_role: _member_
    memcached_port: '11211'
    secret_key: 7faf63edc36f09a01a33a5074c1d6c56
    verbose: 'true'
  quickstack::pacemaker::keystone:
    admin_email: admin
    admin_password: a9b2ef4b06c883a4c206aba72ea9d9c9
    admin_tenant: admin
    admin_token: 6d6f23bae88408f8f8e98dc4023a913f
    ceilometer: 'false'
    cinder: 'true'
    db_name: keystone
    db_ssl: 'false'
    db_ssl_ca: ''
    db_type: mysql
    db_user: keystone
    debug: 'false'
    enabled: 'true'
    glance: 'true'
    heat: 'true'
    heat_cfn: 'false'
    idle_timeout: '200'
    keystonerc: 'true'
    log_facility: LOG_USER
    nova: 'true'
    public_protocol: http
    region: RegionOne
    swift: 'false'
    token_driver: keystone.token.backends.sql.Token
    token_format: PKI
    use_syslog: 'false'
    verbose: 'true'
  quickstack::pacemaker::load_balancer: 
  quickstack::pacemaker::memcached: 
  quickstack::pacemaker::neutron:
    allow_overlapping_ips: true
    cisco_nexus_plugin: neutron.plugins.cisco.nexus.cisco_nexus_plugin_v2.NexusPlugin
    cisco_vswitch_plugin: neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
    core_plugin: neutron.plugins.ml2.plugin.Ml2Plugin
    enabled: true
    enable_tunneling: 'true'
    external_network_bridge: ''
    ml2_flat_networks:
    - ! '*'
    ml2_mechanism_drivers:
    - openvswitch
    - l2population
    ml2_network_vlan_ranges:
    - physnet-external
    ml2_security_group: 'True'
    ml2_tenant_network_types:
    - vxlan
    ml2_tunnel_id_ranges:
    - 10:1000
    ml2_type_drivers:
    - local
    - flat
    - vlan
    - gre
    - vxlan
    ml2_vxlan_group: 224.0.0.1
    n1kv_plugin_additional_params:
      default_policy_profile: default-pp
      network_node_policy_profile: default-pp
      poll_duration: '10'
      http_pool_size: '4'
      http_timeout: '120'
      firewall_driver: neutron.agent.firewall.NoopFirewallDriver
      enable_sync_on_start: 'True'
    n1kv_vsm_ip: ''
    n1kv_vsm_password: ''
    network_device_mtu: ''
    neutron_conf_additional_params:
      default_quota: default
      quota_network: default
      quota_subnet: default
      quota_port: default
      quota_security_group: default
      quota_security_group_rule: default
      network_auto_schedule: default
    nexus_config: {}
    nova_conf_additional_params:
      quota_instances: default
      quota_cores: default
      quota_ram: default
      quota_floating_ips: default
      quota_fixed_ips: default
      quota_driver: default
    ovs_bridge_mappings:
    - physnet-external:br-ex
    ovs_bridge_uplinks:
    - br-ex:ens8
    ovs_tunnel_iface: ens7
    ovs_tunnel_network: ''
    ovs_tunnel_types:
    - vxlan
    ovs_vlan_ranges:
    - physnet-external
    ovs_vxlan_udp_port: '4789'
    security_group_api: neutron
    tenant_network_type: vlan
    tunnel_id_ranges: 1:1000
    verbose: 'true'
    veth_mtu: ''
  quickstack::pacemaker::nosql:
    nosql_port: '27017'
  quickstack::pacemaker::nova:
    auto_assign_floating_ip: 'true'
    db_name: nova
    db_user: nova
    default_floating_pool: nova
    force_dhcp_release: 'false'
    image_service: nova.image.glance.GlanceImageService
    memcached_port: '11211'
    multi_host: 'true'
    neutron_metadata_proxy_secret: 14fe37d35975eddf0a83d98a8336af25
    qpid_heartbeat: '60'
    rpc_backend: nova.openstack.common.rpc.impl_kombu
    scheduler_host_subset_size: '30'
    verbose: 'true'
  quickstack::pacemaker::params:
    amqp_group: amqp
    amqp_password: 8f27fb30168ed9de407f2bb78772897c
    amqp_port: '5672'
    amqp_provider: rabbitmq
    amqp_username: openstack
    amqp_vip: 192.168.0.36
    ceilometer_admin_vip: 192.168.0.2
    ceilometer_group: ceilometer
    ceilometer_private_vip: 192.168.0.3
    ceilometer_public_vip: 192.168.0.4
    ceilometer_user_password: c632d2da7f80dfc1aaa30451597e75ff
    ceph_cluster_network: 192.168.0.0/24
    ceph_fsid: d694dc15-df74-4482-8938-e0672fc3aded
    ceph_images_key: AQDM/K9UqNY8EhAASMIaJQpJO/9+qqxjohKh5Q==
    ceph_mon_host:
    - 192.168.0.7
    ceph_mon_initial_members:
    - maca25400702875
    ceph_osd_journal_size: ''
    ceph_osd_pool_size: ''
    ceph_public_network: 192.168.0.0/24
    ceph_volumes_key: AQDM/K9UuBymERAAuha7V1M8dYQSm264k/iTjA==
    cinder_admin_vip: 192.168.0.5
    cinder_db_password: 8812907909f917f8633736084d3a0129
    cinder_group: cinder
    cinder_private_vip: 192.168.0.6
    cinder_public_vip: 192.168.0.12
    cinder_user_password: 1a09150e986aa66f8f009c3228d4e30c
    cluster_control_ip: 192.168.0.7
    db_group: db
    db_vip: 192.168.0.13
    glance_admin_vip: 192.168.0.14
    glance_db_password: ec36c219dc5ba33243979e572de66849
    glance_group: glance
    glance_private_vip: 192.168.0.15
    glance_public_vip: 192.168.0.16
    glance_user_password: e42c8dc01dcc0af8921d0ebed6a71240
    heat_admin_vip: 192.168.0.17
    heat_auth_encryption_key: 19b6fd8128046ea84063f35fc10333c2
    heat_cfn_admin_vip: 192.168.0.20
    heat_cfn_enabled: 'true'
    heat_cfn_group: heat_cfn
    heat_cfn_private_vip: 192.168.0.21
    heat_cfn_public_vip: 192.168.0.22
    heat_cfn_user_password: 04059345156ef479b99ef98ed5c905a5
    heat_cloudwatch_enabled: 'true'
    heat_db_password: 43b275acc339089012140f6a6f2f86d6
    heat_group: heat
    heat_private_vip: 192.168.0.18
    heat_public_vip: 192.168.0.19
    heat_user_password: 478c2838e2a6b2e5b136faa9091345bf
    horizon_admin_vip: 192.168.0.23
    horizon_group: horizon
    horizon_private_vip: 192.168.0.24
    horizon_public_vip: 192.168.0.25
    include_amqp: 'true'
    include_ceilometer: 'true'
    include_cinder: 'true'
    include_glance: 'true'
    include_heat: 'true'
    include_horizon: 'true'
    include_keystone: 'true'
    include_mysql: 'true'
    include_neutron: 'true'
    include_nosql: 'true'
    include_nova: 'true'
    include_swift: 'false'
    keystone_admin_vip: 192.168.0.26
    keystone_db_password: 83d8ce81c58705cbd86e43c784324b68
    keystone_group: keystone
    keystone_private_vip: 192.168.0.27
    keystone_public_vip: 192.168.0.28
    keystone_user_password: 068b381719ebc6ce82748639cd89732f
    lb_backend_server_addrs:
    - 192.168.0.7
    lb_backend_server_names:
    - lb-backend-maca25400702875
    loadbalancer_group: loadbalancer
    loadbalancer_vip: 192.168.0.29
    neutron: 'true'
    neutron_admin_vip: 192.168.0.30
    neutron_db_password: 9b096b86e061819438c4549df6a6d6c4
    neutron_group: neutron
    neutron_metadata_proxy_secret: 14fe37d35975eddf0a83d98a8336af25
    neutron_private_vip: 192.168.0.31
    neutron_public_vip: 192.168.0.32
    neutron_user_password: aba1fe41562efb4f119dc16550bfc224
    nosql_group: nosql
    nosql_vip: ''
    nova_admin_vip: 192.168.0.33
    nova_db_password: bcd206daca1522639be5602526aebef7
    nova_group: nova
    nova_private_vip: 192.168.0.34
    nova_public_vip: 192.168.0.35
    nova_user_password: 65a38052d3fc11e26c2263e1c3f3d946
    pcmk_iface: ''
    pcmk_ip: 192.168.0.7
    pcmk_network: ''
    pcmk_server_addrs:
    - 192.168.0.7
    pcmk_server_names:
    - pcmk-maca25400702875
    private_iface: ''
    private_ip: 192.168.0.7
    private_network: ''
    swift_group: swift
    swift_public_vip: 192.168.0.37
    swift_user_password: ''
  quickstack::pacemaker::qpid:
    backend_port: '15672'
    config_file: /etc/qpidd.conf
    connection_backlog: '65535'
    haproxy_timeout: 120s
    log_to_file: UNSET
    manage_service: false
    max_connections: '65535'
    package_ensure: present
    package_name: qpid-cpp-server
    realm: QPID
    service_enable: true
    service_ensure: running
    service_name: qpidd
    worker_threads: '17'
  quickstack::pacemaker::swift:
    memcached_port: '11211'
    swift_internal_vip: ''
    swift_shared_secret: 7e5db7d84f5d53e9bbc42ff79a5858a2
    swift_storage_device: ''
    swift_storage_ips: []
parameters:
  puppetmaster: staypuft.example.com
  domainname: Default domain used for provisioning
  hostgroup: base_RedHat_7/neutron/Controller
  root_pw: $1$urYm+c/m$0jUjUwQmE.ITYj2isvW8K/
  puppet_ca: staypuft.example.com
  foreman_env: production
  owner_name: Admin User
  owner_email: root
  ip: 192.168.0.7
  mac: a2:54:00:70:28:75
  ntp-server: clock.redhat.com
  staypuft_ssh_public_key: AAAAB3NzaC1yc2EAAAADAQABAAABAQD5k1PUc6AjuVLATboyzuhQqmS+XUbx9SPK4agShZ6xOgi6AbxI5GKWMKR3IBt14D32DUcc4TIFtm1mo8Fp8KwfBJI3fOARt0LnYyNhyoNblFT2epw7fw5jFwnuSYKggM4fczLS0Vz+a6/ko4TCU+avv3pz6IBqABGr/4cd6su209Zm5sRqYPdgKrDaKf4kahInxEyaL5tJGBePbXeAbo5foLO05sB4vH2p/h4LhOrbo3q7uJBQjPIvtVLkYBfZpEsZ9k/HFloFU2gglnY8Jy+wkFvWiVVH00OxXOJvuNHiXKqAPSMPC7QUPjgQYwCkp+O6wJQcDbpJFxVTLwYWR1aX
  time-zone: America/New_York
  ui::ceph::fsid: d694dc15-df74-4482-8938-e0672fc3aded
  ui::ceph::images_key: AQDM/K9UqNY8EhAASMIaJQpJO/9+qqxjohKh5Q==
  ui::ceph::volumes_key: AQDM/K9UuBymERAAuha7V1M8dYQSm264k/iTjA==
  ui::cinder::backend_ceph: 'true'
  ui::cinder::backend_eqlx: 'false'
  ui::cinder::backend_lvm: 'false'
  ui::cinder::backend_nfs: 'false'
  ui::cinder::rbd_secret_uuid: 888b90d3-c675-4cea-9f30-889713a7128a
  ui::deployment::amqp_provider: rabbitmq
  ui::deployment::networking: neutron
  ui::deployment::platform: rhel7
  ui::glance::driver_backend: ceph
  ui::neutron::core_plugin: ml2
  ui::neutron::ml2_cisco_nexus: 'false'
  ui::neutron::ml2_l2population: 'true'
  ui::neutron::ml2_openvswitch: 'true'
  ui::neutron::network_segmentation: vxlan
  ui::nova::network_manager: FlatDHCPManager
  ui::passwords::admin: a9b2ef4b06c883a4c206aba72ea9d9c9
  ui::passwords::amqp: 8f27fb30168ed9de407f2bb78772897c
  ui::passwords::ceilometer_metering_secret: c9206fce6db087eb75b4f497e2cdcab3
  ui::passwords::ceilometer_user: c632d2da7f80dfc1aaa30451597e75ff
  ui::passwords::cinder_db: 8812907909f917f8633736084d3a0129
  ui::passwords::cinder_user: 1a09150e986aa66f8f009c3228d4e30c
  ui::passwords::glance_db: ec36c219dc5ba33243979e572de66849
  ui::passwords::glance_user: e42c8dc01dcc0af8921d0ebed6a71240
  ui::passwords::heat_auth_encrypt_key: 19b6fd8128046ea84063f35fc10333c2
  ui::passwords::heat_cfn_user: 04059345156ef479b99ef98ed5c905a5
  ui::passwords::heat_db: 43b275acc339089012140f6a6f2f86d6
  ui::passwords::heat_user: 478c2838e2a6b2e5b136faa9091345bf
  ui::passwords::horizon_secret_key: 7faf63edc36f09a01a33a5074c1d6c56
  ui::passwords::keystone_admin_token: 6d6f23bae88408f8f8e98dc4023a913f
  ui::passwords::keystone_db: 83d8ce81c58705cbd86e43c784324b68
  ui::passwords::keystone_user: 068b381719ebc6ce82748639cd89732f
  ui::passwords::mode: random
  ui::passwords::mysql_root: c443b0024dbb909ae7330701ced94586
  ui::passwords::neutron_db: 9b096b86e061819438c4549df6a6d6c4
  ui::passwords::neutron_metadata_proxy_secret: 14fe37d35975eddf0a83d98a8336af25
  ui::passwords::neutron_user: aba1fe41562efb4f119dc16550bfc224
  ui::passwords::nova_db: bcd206daca1522639be5602526aebef7
  ui::passwords::nova_user: 65a38052d3fc11e26c2263e1c3f3d946
  ui::passwords::swift_shared_secret: 7e5db7d84f5d53e9bbc42ff79a5858a2
  ui::passwords::swift_user: 6dd2d45b2034efa9577f85d594212ff4
environment: production

Comment 2 Alexander Chuzhoy 2015-01-09 17:44:22 UTC
Created attachment 978324 [details]
foreman logs

Comment 3 Alexander Chuzhoy 2015-01-09 17:45:42 UTC
Created attachment 978337 [details]
logs - controller

Comment 7 Mike Burns 2015-01-12 23:15:43 UTC
The root cause is believed to be the fact that the ceph cluster is not up when we try to start glance.

The suggested path forward is to partially fix this so that deployments succeed, with a couple of manual steps required after the deployment completes.

Code change: if the glance backend is set to ceph, disable glance by setting the include_glance parameter to False.

This will allow the deployment to succeed, though glance will be disabled.
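The code change can be sketched roughly as follows. This is a hypothetical illustration of the logic only, not the actual staypuft code; the variable names are assumptions:

```shell
# Hypothetical sketch of the fix's logic -- not the actual staypuft code.
# When the glance backend chosen in the UI is ceph, force include_glance
# to "false" for the initial puppet run, so glance is not started before
# the ceph cluster exists.
glance_backend="ceph"          # would come from ui::glance::driver_backend

if [ "$glance_backend" = "ceph" ]; then
    include_glance="false"
else
    include_glance="true"
fi

echo "include_glance=$include_glance"
```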

The manual steps for the user to follow are then:

1. Create the ceph cluster (ceph-deploy). This step is already manual.
2. Go back to the deployment in the installer UI, open Advanced Parameters, and under the Controller section set "include glance" to true.
3. Run puppet through the installer UI on the controller hosts again to configure and start glance.
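In terms of the controller yaml shown in comment 1, step 2 amounts to flipping the include_glance parameter back. This fragment is for illustration only; the surrounding parameters are unchanged:

```yaml
# Fragment of the controller host yaml (see comment 1) after step 2.
quickstack::pacemaker::params:
  include_glance: 'true'   # 'false' during the initial run while ceph is down
```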

Comment 8 Mike Burns 2015-01-13 14:43:28 UTC
https://github.com/theforeman/staypuft/pull/403

Comment 9 Mike Burns 2015-01-13 19:55:55 UTC
The part that is still broken will be moved to a separate bugzilla where the workaround will be documented.

Comment 11 Alexander Chuzhoy 2015-01-14 15:29:03 UTC
Verified:


Environment:
ruby193-rubygem-foreman_openstack_simplify-0.0.6-8.el7ost.noarch
rhel-osp-installer-client-0.5.5-1.el7ost.noarch
ruby193-rubygem-staypuft-0.5.11-1.el7ost.noarch
openstack-foreman-installer-3.0.9-1.el7ost.noarch
openstack-puppet-modules-2014.2.8-1.el7ost.noarch
rhel-osp-installer-0.5.5-1.el7ost.noarch



The deployment completes successfully with glance+cinder using ceph as the backend driver.

Comment 13 errata-xmlrpc 2015-02-09 15:19:08 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-0156.html

