Bug 1183815 - rubygem-staypuft: nonHA neutron with Local File as the driver backend for glance: pcs status complains with "fs-varlibglanceimages_monitor_0 on pcmk-maca25400702875 'not configured' "
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: rubygem-staypuft
Version: unspecified
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: z1
: Installer
Assignee: Scott Seago
QA Contact: Omri Hochman
URL:
Whiteboard: n1kv
Duplicates: 1176674 1187977
Depends On:
Blocks: 1177026
 
Reported: 2015-01-19 21:40 UTC by Alexander Chuzhoy
Modified: 2023-02-22 23:02 UTC
CC List: 15 users

Fixed In Version: ruby193-rubygem-staypuft-0.5.18-1.el7ost
Doc Type: Bug Fix
Doc Text:
Previously, the installer set a parameter incorrectly when users chose Glance Local File storage, which caused deployments to fail. With this fix, the parameter is set correctly and deployments complete successfully.
Clone Of:
Environment:
Last Closed: 2015-03-05 18:19:12 UTC
Target Upstream Version:
Embargoed:


Attachments
foreman logs (57.65 KB, application/x-gzip), 2015-01-19 21:46 UTC, Alexander Chuzhoy
logs - controller (8.65 MB, application/x-gzip), 2015-01-19 21:49 UTC, Alexander Chuzhoy


Links
Red Hat Knowledge Base (Solution) 1365813
Red Hat Product Errata RHBA-2015:0641 (SHIPPED_LIVE): Red Hat Enterprise Linux OpenStack Platform Installer Bug Fix Advisory, 2015-03-05 23:15:51 UTC

Description Alexander Chuzhoy 2015-01-19 21:40:28 UTC
rubygem-staypuft: nonHA neutron with Local File as the driver backend for glance: pcs status complains with "fs-varlibglanceimages_monitor_0 on pcmk-maca25400702875 'not configured' "


Environment:
rhel-osp-installer-client-0.5.5-2.el7ost.noarch
ruby193-rubygem-foreman_openstack_simplify-0.0.6-8.el7ost.noarch
openstack-puppet-modules-2014.2.8-1.el7ost.noarch
openstack-foreman-installer-3.0.10-2.el7ost.noarch
rhel-osp-installer-0.5.5-2.el7ost.noarch
ruby193-rubygem-staypuft-0.5.12-1.el7ost.noarch


Steps to reproduce:
1. Install rhel-osp-installer.
2. Deploy nonHA neutron with Local File as the driver backend for glance.
3. Wait until the deployment completes successfully.
4. Attempt to run "glance image-list"

Result:
Error finding address for http://192.168.0.16:9292/v1/images/detail?sort_key=name&sort_dir=asc&limit=20: HTTPConnectionPool(host='192.168.0.16', port=9292): Max retries exceeded with url: /v1/images/detail?sort_key=name&sort_dir=asc&limit=20 (Caused by <class 'httplib.BadStatusLine'>: '')

Running pcs status reports the following under failed actions:
 

fs-varlibglanceimages_monitor_0 on pcmk-maca25400702875 'not configured' (6): call=419, status=complete, last-rc-change='Mon Jan 19 16:18:44 2015', queued=40ms, exec=1ms



Expected result:

Should be able to run "glance image-list" successfully and subsequently launch an instance.
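
A quick sanity check for a healthy deployment with this backend (a sketch; an empty image list is the expected result on a fresh install):

glance image-list   # expected: an empty image list rather than the connection error above
pcs status          # expected: no 'not configured' failed action for the glance filesystem resource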

Comment 2 Alexander Chuzhoy 2015-01-19 21:46:21 UTC
Created attachment 981641 [details]
foreman logs

Comment 3 Alexander Chuzhoy 2015-01-19 21:49:20 UTC
Created attachment 981643 [details]
logs - controller

Comment 4 Alexander Chuzhoy 2015-01-19 21:53:17 UTC
The yaml for the controller:


---
classes:
  foreman::plugin::staypuft_client:
    staypuft_ssh_public_key: AAAAB3NzaC1yc2EAAAADAQABAAABAQDcxASXUMOf8nJ0siJSxQjC1W3O6kLBxN+CuRSjjZZTm8qUKehd23PRnm7bigRFQWFy6WoWqJRbBOzZTXbEsYu+dG48B2/tQBBe6mLae9Tmwyj8cvwM4Dzdv2+TBuifYSVH0xYNSUuOpuqyWgshvtsQXdaORK4mk5qJ8OFFC0z1oPpNVv23vRFxzwIg6judLW/FoiEUie+N33R7feq+6P42DnqxcyO5PXgTYYf4ZP2en/D4ddCiWHEdOvZ3P/7AOGdqyFmlRQoK/iHwnTR5Fx0gi7wOS7LYdhoLrYUGlT5zXtACvXskgHhrbfUqkJhT4xg7ECcZyQU+dDgNo2X5QjPh
  foreman::puppet::agent::service:
    runmode: service
  quickstack::openstack_common: 
  quickstack::pacemaker::ceilometer:
    ceilometer_metering_secret: e56f87a8349b9a27960502fa2be4ebea
    db_port: '27017'
    memcached_port: '11211'
    verbose: 'true'
  quickstack::pacemaker::cinder:
    backend_eqlx: 'false'
    backend_eqlx_name:
    - eqlx
    backend_glusterfs: false
    backend_glusterfs_name: glusterfs
    backend_iscsi: 'true'
    backend_iscsi_name: iscsi
    backend_nfs: 'false'
    backend_nfs_name: nfs
    backend_rbd: 'false'
    backend_rbd_name: rbd
    create_volume_types: true
    db_name: cinder
    db_ssl: false
    db_ssl_ca: ''
    db_user: cinder
    debug: false
    enabled: true
    eqlx_chap_login: []
    eqlx_chap_password: []
    eqlx_group_name: []
    eqlx_pool: []
    eqlx_use_chap: []
    glusterfs_shares: []
    log_facility: LOG_USER
    multiple_backends: 'false'
    nfs_mount_options: nosharecache
    nfs_shares:
    - ''
    qpid_heartbeat: '60'
    rbd_ceph_conf: /etc/ceph/ceph.conf
    rbd_flatten_volume_from_snapshot: 'false'
    rbd_max_clone_depth: '5'
    rbd_pool: volumes
    rbd_secret_uuid: 9cbe187a-e1d6-4a02-a2d0-80d1935bd649
    rbd_user: volumes
    rpc_backend: cinder.openstack.common.rpc.impl_kombu
    san_ip: []
    san_login: []
    san_password: []
    san_thin_provision: []
    use_syslog: false
    verbose: 'true'
    volume: true
  quickstack::pacemaker::common:
    fence_ipmilan_address: ''
    fence_ipmilan_expose_lanplus: ''
    fence_ipmilan_hostlist: ''
    fence_ipmilan_host_to_address: []
    fence_ipmilan_interval: 60s
    fence_ipmilan_lanplus_options: ''
    fence_ipmilan_password: ''
    fence_ipmilan_username: ''
    fence_xvm_key_file_password: ''
    fence_xvm_manage_key_file: 'false'
    fence_xvm_port: ''
    fencing_type: disabled
    pacemaker_cluster_name: openstack
  quickstack::pacemaker::galera:
    galera_monitor_password: monitor_pass
    galera_monitor_username: monitor_user
    max_connections: '1024'
    mysql_root_password: de81ed02754e2229450e32ced88edd26
    open_files_limit: '-1'
    wsrep_cluster_members:
    - 192.168.0.7
    wsrep_cluster_name: galera_cluster
    wsrep_ssl: true
    wsrep_ssl_cert: /etc/pki/galera/galera.crt
    wsrep_ssl_key: /etc/pki/galera/galera.key
    wsrep_sst_method: rsync
    wsrep_sst_password: sst_pass
    wsrep_sst_username: sst_user
  quickstack::pacemaker::glance:
    backend: file
    db_name: glance
    db_ssl: false
    db_ssl_ca: ''
    db_user: glance
    debug: false
    filesystem_store_datadir: /var/lib/glance/images/
    log_facility: LOG_USER
    pcmk_fs_device: ''
    pcmk_fs_dir: /var/lib/glance/images
    pcmk_fs_manage: 'true'
    pcmk_fs_options: ''
    pcmk_fs_type: ''
    pcmk_swift_is_local: true
    rbd_store_pool: images
    rbd_store_user: images
    sql_idle_timeout: '3600'
    swift_store_auth_address: http://127.0.0.1:5000/v2.0/
    swift_store_key: ''
    swift_store_user: ''
    use_syslog: false
    verbose: 'true'
  quickstack::pacemaker::heat:
    db_name: heat
    db_ssl: false
    db_ssl_ca: ''
    db_user: heat
    debug: false
    log_facility: LOG_USER
    qpid_heartbeat: '60'
    use_syslog: false
    verbose: 'true'
  quickstack::pacemaker::horizon:
    horizon_ca: /etc/ipa/ca.crt
    horizon_cert: /etc/pki/tls/certs/PUB_HOST-horizon.crt
    horizon_key: /etc/pki/tls/private/PUB_HOST-horizon.key
    keystone_default_role: _member_
    memcached_port: '11211'
    secret_key: bec6b406082d7966548529498401ddb4
    verbose: 'true'
  quickstack::pacemaker::keystone:
    admin_email: admin
    admin_password: 891990f262c6698b77c62ae769c371e6
    admin_tenant: admin
    admin_token: cb9262caef80557944b61a4029601699
    ceilometer: 'false'
    cinder: 'true'
    db_name: keystone
    db_ssl: 'false'
    db_ssl_ca: ''
    db_type: mysql
    db_user: keystone
    debug: 'false'
    enabled: 'true'
    glance: 'true'
    heat: 'true'
    heat_cfn: 'false'
    idle_timeout: '200'
    keystonerc: 'true'
    log_facility: LOG_USER
    nova: 'true'
    public_protocol: http
    region: RegionOne
    swift: 'false'
    token_driver: keystone.token.backends.sql.Token
    token_format: PKI
    use_syslog: 'false'
    verbose: 'true'
  quickstack::pacemaker::load_balancer: 
  quickstack::pacemaker::memcached: 
  quickstack::pacemaker::neutron:
    allow_overlapping_ips: true
    cisco_nexus_plugin: neutron.plugins.cisco.nexus.cisco_nexus_plugin_v2.NexusPlugin
    cisco_vswitch_plugin: neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
    core_plugin: neutron.plugins.ml2.plugin.Ml2Plugin
    enabled: true
    enable_tunneling: 'true'
    external_network_bridge: ''
    l3_ha: false
    ml2_flat_networks:
    - ! '*'
    ml2_mechanism_drivers:
    - openvswitch
    - l2population
    ml2_network_vlan_ranges:
    - physnet-external
    ml2_security_group: 'True'
    ml2_tenant_network_types:
    - vxlan
    ml2_tunnel_id_ranges:
    - 10:1000
    ml2_type_drivers:
    - local
    - flat
    - vlan
    - gre
    - vxlan
    ml2_vxlan_group: 224.0.0.1
    n1kv_plugin_additional_params:
      default_policy_profile: default-pp
      network_node_policy_profile: default-pp
      poll_duration: '10'
      http_pool_size: '4'
      http_timeout: '120'
      firewall_driver: neutron.agent.firewall.NoopFirewallDriver
      enable_sync_on_start: 'True'
    n1kv_vsm_ip: ''
    n1kv_vsm_password: ''
    network_device_mtu: ''
    neutron_conf_additional_params:
      default_quota: default
      quota_network: default
      quota_subnet: default
      quota_port: default
      quota_security_group: default
      quota_security_group_rule: default
      network_auto_schedule: default
    nexus_config: {}
    nova_conf_additional_params:
      quota_instances: default
      quota_cores: default
      quota_ram: default
      quota_floating_ips: default
      quota_fixed_ips: default
      quota_driver: default
    ovs_bridge_mappings:
    - physnet-external:br-ex
    ovs_bridge_uplinks:
    - br-ex:ens8
    ovs_tunnel_iface: ens7
    ovs_tunnel_network: ''
    ovs_tunnel_types:
    - vxlan
    ovs_vlan_ranges:
    - physnet-external
    ovs_vxlan_udp_port: '4789'
    security_group_api: neutron
    tenant_network_type: vlan
    tunnel_id_ranges: 1:1000
    verbose: 'true'
    veth_mtu: ''
  quickstack::pacemaker::nosql:
    nosql_port: '27017'
  quickstack::pacemaker::nova:
    auto_assign_floating_ip: 'true'
    db_name: nova
    db_user: nova
    default_floating_pool: nova
    force_dhcp_release: 'false'
    image_service: nova.image.glance.GlanceImageService
    memcached_port: '11211'
    multi_host: 'true'
    neutron_metadata_proxy_secret: 73143b93c4dd0b2ec1e875d354a94729
    qpid_heartbeat: '60'
    rpc_backend: nova.openstack.common.rpc.impl_kombu
    scheduler_host_subset_size: '30'
    verbose: 'true'
  quickstack::pacemaker::params:
    amqp_group: amqp
    amqp_password: ce0826186c9945ed34213b79177553e5
    amqp_port: '5672'
    amqp_provider: rabbitmq
    amqp_username: openstack
    amqp_vip: 192.168.0.36
    ceilometer_admin_vip: 192.168.0.2
    ceilometer_group: ceilometer
    ceilometer_private_vip: 192.168.0.3
    ceilometer_public_vip: 192.168.0.4
    ceilometer_user_password: 1c8018856065508286c625b2c530741a
    ceph_cluster_network: 192.168.0.0/24
    ceph_fsid: 92153eec-561b-498f-b431-ac727fbccbb1
    ceph_images_key: AQCYTL1UAG46FhAAseaoJCrkRIEtL1ZS7ppr0Q==
    ceph_mon_host:
    - 192.168.0.7
    ceph_mon_initial_members:
    - maca25400702875
    ceph_osd_journal_size: ''
    ceph_osd_pool_size: ''
    ceph_public_network: 192.168.0.0/24
    ceph_volumes_key: AQCYTL1U6EbZFRAAkmS7qkQ6/7/WqgMNrNDggA==
    cinder_admin_vip: 192.168.0.5
    cinder_db_password: 88e6aaf6630a2efbd1c49879d1d16db5
    cinder_group: cinder
    cinder_private_vip: 192.168.0.6
    cinder_public_vip: 192.168.0.12
    cinder_user_password: 8728567aacc70895a68cb72d24353eb9
    cluster_control_ip: 192.168.0.7
    db_group: db
    db_vip: 192.168.0.13
    glance_admin_vip: 192.168.0.14
    glance_db_password: b257fb625d9c45baba2491cbeabd7d9e
    glance_group: glance
    glance_private_vip: 192.168.0.15
    glance_public_vip: 192.168.0.16
    glance_user_password: 7925723dd589d4a7e7ea27f782caa118
    heat_admin_vip: 192.168.0.17
    heat_auth_encryption_key: eeeeadb2152f5621b091bd183f2d47b0
    heat_cfn_admin_vip: 192.168.0.20
    heat_cfn_enabled: 'true'
    heat_cfn_group: heat_cfn
    heat_cfn_private_vip: 192.168.0.21
    heat_cfn_public_vip: 192.168.0.22
    heat_cfn_user_password: 751d566743e2ca715adf4ad9d76224f9
    heat_cloudwatch_enabled: 'true'
    heat_db_password: 77eebebed3279d73c6e1673921d1f621
    heat_group: heat
    heat_private_vip: 192.168.0.18
    heat_public_vip: 192.168.0.19
    heat_user_password: 8e746c3c10cc81ecd0ca0a3e81b4dff6
    horizon_admin_vip: 192.168.0.23
    horizon_group: horizon
    horizon_private_vip: 192.168.0.24
    horizon_public_vip: 192.168.0.25
    include_amqp: 'true'
    include_ceilometer: 'true'
    include_cinder: 'true'
    include_glance: 'true'
    include_heat: 'true'
    include_horizon: 'true'
    include_keystone: 'true'
    include_mysql: 'true'
    include_neutron: 'true'
    include_nosql: 'true'
    include_nova: 'true'
    include_swift: 'false'
    keystone_admin_vip: 192.168.0.26
    keystone_db_password: bd0b218eae5b6222d3691ef0e2186212
    keystone_group: keystone
    keystone_private_vip: 192.168.0.27
    keystone_public_vip: 192.168.0.28
    keystone_user_password: ee2b50e2ef9e40fb99f6a1b40a923179
    lb_backend_server_addrs:
    - 192.168.0.7
    lb_backend_server_names:
    - lb-backend-maca25400702875
    loadbalancer_group: loadbalancer
    loadbalancer_vip: 192.168.0.29
    neutron: 'true'
    neutron_admin_vip: 192.168.0.30
    neutron_db_password: 77070292cd0c3eb0b463ab82dbd44d1c
    neutron_group: neutron
    neutron_metadata_proxy_secret: 73143b93c4dd0b2ec1e875d354a94729
    neutron_private_vip: 192.168.0.31
    neutron_public_vip: 192.168.0.32
    neutron_user_password: 82bce6a72dde340b1e4b49e0f4b978d1
    nosql_group: nosql
    nosql_vip: ''
    nova_admin_vip: 192.168.0.33
    nova_db_password: d6570e563ec3c477bf64c4fdd678293a
    nova_group: nova
    nova_private_vip: 192.168.0.34
    nova_public_vip: 192.168.0.35
    nova_user_password: 152a0fb6c9ef074a1f4dd8c840d88b27
    pcmk_iface: ''
    pcmk_ip: 192.168.0.7
    pcmk_network: ''
    pcmk_server_addrs:
    - 192.168.0.7
    pcmk_server_names:
    - pcmk-maca25400702875
    private_iface: ''
    private_ip: 192.168.0.7
    private_network: ''
    swift_group: swift
    swift_public_vip: 192.168.0.37
    swift_user_password: ''
  quickstack::pacemaker::qpid:
    backend_port: '15672'
    config_file: /etc/qpidd.conf
    connection_backlog: '65535'
    haproxy_timeout: 120s
    log_to_file: UNSET
    manage_service: false
    max_connections: '65535'
    package_ensure: present
    package_name: qpid-cpp-server
    realm: QPID
    service_enable: true
    service_ensure: running
    service_name: qpidd
    worker_threads: '17'
  quickstack::pacemaker::swift:
    memcached_port: '11211'
    swift_internal_vip: ''
    swift_shared_secret: 450189bc8780f72b242506033d9eb473
    swift_storage_device: ''
    swift_storage_ips: []
parameters:
  puppetmaster: staypuft.example.com
  domainname: Default domain used for provisioning
  hostgroup: base_RedHat_7/neutron/Controller
  root_pw: $1$CmGevrjz$0E/ebTjsMrDjCw7wW.HjN.
  puppet_ca: staypuft.example.com
  foreman_env: production
  owner_name: Admin User
  owner_email: root
  ip: 192.168.0.7
  mac: a2:54:00:70:28:75
  ntp-server: clock.redhat.com
  staypuft_ssh_public_key: AAAAB3NzaC1yc2EAAAADAQABAAABAQDcxASXUMOf8nJ0siJSxQjC1W3O6kLBxN+CuRSjjZZTm8qUKehd23PRnm7bigRFQWFy6WoWqJRbBOzZTXbEsYu+dG48B2/tQBBe6mLae9Tmwyj8cvwM4Dzdv2+TBuifYSVH0xYNSUuOpuqyWgshvtsQXdaORK4mk5qJ8OFFC0z1oPpNVv23vRFxzwIg6judLW/FoiEUie+N33R7feq+6P42DnqxcyO5PXgTYYf4ZP2en/D4ddCiWHEdOvZ3P/7AOGdqyFmlRQoK/iHwnTR5Fx0gi7wOS7LYdhoLrYUGlT5zXtACvXskgHhrbfUqkJhT4xg7ECcZyQU+dDgNo2X5QjPh
  time-zone: America/New_York
  ui::ceph::fsid: 92153eec-561b-498f-b431-ac727fbccbb1
  ui::ceph::images_key: AQCYTL1UAG46FhAAseaoJCrkRIEtL1ZS7ppr0Q==
  ui::ceph::volumes_key: AQCYTL1U6EbZFRAAkmS7qkQ6/7/WqgMNrNDggA==
  ui::cinder::backend_ceph: 'false'
  ui::cinder::backend_eqlx: 'false'
  ui::cinder::backend_lvm: 'true'
  ui::cinder::backend_nfs: 'false'
  ui::cinder::rbd_secret_uuid: 9cbe187a-e1d6-4a02-a2d0-80d1935bd649
  ui::deployment::amqp_provider: rabbitmq
  ui::deployment::networking: neutron
  ui::deployment::platform: rhel7
  ui::glance::driver_backend: local
  ui::neutron::core_plugin: ml2
  ui::neutron::ml2_cisco_nexus: 'false'
  ui::neutron::ml2_l2population: 'true'
  ui::neutron::ml2_openvswitch: 'true'
  ui::neutron::network_segmentation: vxlan
  ui::nova::network_manager: FlatDHCPManager
  ui::passwords::admin: 891990f262c6698b77c62ae769c371e6
  ui::passwords::amqp: ce0826186c9945ed34213b79177553e5
  ui::passwords::ceilometer_metering_secret: e56f87a8349b9a27960502fa2be4ebea
  ui::passwords::ceilometer_user: 1c8018856065508286c625b2c530741a
  ui::passwords::cinder_db: 88e6aaf6630a2efbd1c49879d1d16db5
  ui::passwords::cinder_user: 8728567aacc70895a68cb72d24353eb9
  ui::passwords::glance_db: b257fb625d9c45baba2491cbeabd7d9e
  ui::passwords::glance_user: 7925723dd589d4a7e7ea27f782caa118
  ui::passwords::heat_auth_encrypt_key: eeeeadb2152f5621b091bd183f2d47b0
  ui::passwords::heat_cfn_user: 751d566743e2ca715adf4ad9d76224f9
  ui::passwords::heat_db: 77eebebed3279d73c6e1673921d1f621
  ui::passwords::heat_user: 8e746c3c10cc81ecd0ca0a3e81b4dff6
  ui::passwords::horizon_secret_key: bec6b406082d7966548529498401ddb4
  ui::passwords::keystone_admin_token: cb9262caef80557944b61a4029601699
  ui::passwords::keystone_db: bd0b218eae5b6222d3691ef0e2186212
  ui::passwords::keystone_user: ee2b50e2ef9e40fb99f6a1b40a923179
  ui::passwords::mode: random
  ui::passwords::mysql_root: de81ed02754e2229450e32ced88edd26
  ui::passwords::neutron_db: 77070292cd0c3eb0b463ab82dbd44d1c
  ui::passwords::neutron_metadata_proxy_secret: 73143b93c4dd0b2ec1e875d354a94729
  ui::passwords::neutron_user: 82bce6a72dde340b1e4b49e0f4b978d1
  ui::passwords::nova_db: d6570e563ec3c477bf64c4fdd678293a
  ui::passwords::nova_user: 152a0fb6c9ef074a1f4dd8c840d88b27
  ui::passwords::swift_shared_secret: 450189bc8780f72b242506033d9eb473
  ui::passwords::swift_user: cdda89a76b5c5fb868edf12cb5380126
environment: production

Comment 5 Alexander Chuzhoy 2015-01-19 22:18:58 UTC
Reproduced with nonHA nova with the glance driver backend set to Local File.

Comment 6 Amit Ugol 2015-02-02 16:31:29 UTC
*** Bug 1187977 has been marked as a duplicate of this bug. ***

Comment 7 Jason Guiditta 2015-02-04 18:36:05 UTC
This is simply a parameter value change needed when using staypuft. If 'local file' is selected, staypuft should set $pcmk_fs_manage=false. I tested this by updating the parameter value and removing the errant fs-glance pcs resource (which would not have been created if the value had been false at deploy time). The errors are gone, and glance image-list returns an empty list, as expected. I propose moving this to staypuft so the needed logic, already described to Scott above, can be added.
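
For anyone hitting this before a fixed staypuft build is available, a minimal sketch of the workaround described above (assumptions: the controller host YAML is the one shown in comment 4, and the errant Pacemaker resource is named fs-varlibglanceimages, inferred from the fs-varlibglanceimages_monitor_0 operation in the pcs output):

# In the controller host YAML, under quickstack::pacemaker::glance, change the
# manage flag (comment 4 shows it as 'true' for the failing deployment):
#   quickstack::pacemaker::glance:
#     backend: file
#     pcmk_fs_manage: 'false'
#
# On the controller, remove the errant filesystem resource, clear the failed
# action, and re-check the cluster:
pcs resource delete fs-varlibglanceimages   # resource name inferred from the monitor operation above
pcs resource cleanup
pcs status
#
# glance should respond again; an empty image list is expected on a fresh deployment:
glance image-list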

Comment 10 Mike Burns 2015-02-12 19:43:05 UTC
*** Bug 1176674 has been marked as a duplicate of this bug. ***

Comment 11 Omri Hochman 2015-02-18 17:48:32 UTC
Verified with ruby193-rubygem-staypuft-0.5.19-1.el7ost.noarch.

Comment 15 errata-xmlrpc 2015-03-05 18:19:12 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-0641.html

