Bug 1179915 - rubygem-staypuft: puppet reports error: "Execution of '/usr/bin/nova-manage network create novanetwork 192.168.32.0/24 6 32 --vlan_start 10' returned 1: Command failed, please check log for more info"
Summary: rubygem-staypuft: puppet reports error: "Execution of '/usr/bin/nova-manage n...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: rubygem-staypuft
Version: unspecified
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ga
Sub Component: Installer
Assignee: Jiri Stransky
QA Contact: Alexander Chuzhoy
URL:
Whiteboard:
Depends On:
Blocks: 1177026
 
Reported: 2015-01-07 19:30 UTC by Alexander Chuzhoy
Modified: 2023-02-22 23:02 UTC (History)
7 users

Fixed In Version: ruby193-rubygem-staypuft-0.5.0-11.el7ost
Doc Type: Bug Fix
Doc Text:
When configuring Compute hosts with Compute Networking in parallel, Puppet reported errors because multiple nodes tried to create the same network simultaneously. With this update, when Compute nodes are deployed with Compute Networking, one compute host is configured first and creates the network. The rest of the compute hosts are then configured in parallel. As a result, Puppet errors caused by simultaneous attempts to create the network no longer appear.
Clone Of:
Environment:
Last Closed: 2015-02-09 15:19:01 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
logs - controller3 (7.34 MB, application/x-gzip)
2015-01-07 19:42 UTC, Alexander Chuzhoy
logs - controller2 (8.42 MB, application/x-gzip)
2015-01-07 19:45 UTC, Alexander Chuzhoy
logs - controller1 (7.32 MB, application/x-gzip)
2015-01-07 19:47 UTC, Alexander Chuzhoy
foreman logs (65.81 KB, application/x-gzip)
2015-01-07 19:49 UTC, Alexander Chuzhoy
logs from compute1 (6.78 MB, application/x-gzip)
2015-01-07 19:50 UTC, Alexander Chuzhoy
logs from compute2 (6.77 MB, application/x-gzip)
2015-01-07 19:52 UTC, Alexander Chuzhoy


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2015:0156 0 normal SHIPPED_LIVE Red Hat Enterprise Linux OpenStack Platform Installer Bug Fix Advisory 2015-02-09 20:13:39 UTC

Description Alexander Chuzhoy 2015-01-07 19:30:27 UTC
rubygem-staypuft: puppet reports error: "Execution of '/usr/bin/nova-manage network create novanetwork 192.168.32.0/24 6 32 --vlan_start 10' returned 1: Command failed, please check log for more info"


Environment:

ruby193-rubygem-foreman_openstack_simplify-0.0.6-8.el7ost.noarch
openstack-foreman-installer-3.0.8-1.el7ost.noarch
ruby193-rubygem-staypuft-0.5.9-1.el7ost.noarch
rhel-osp-installer-client-0.5.4-1.el7ost.noarch
openstack-puppet-modules-2014.2.8-1.el7ost.noarch
rhel-osp-installer-0.5.4-1.el7ost.noarch


Steps to reproduce:
1. Deploy an HA nova-network (HAnova) deployment with 3 controllers + 2 computes.

Result:

The deployment completes successfully. Checking the puppet reports, I see that the second report from one compute contains the following error:
change from absent to present failed: Execution of '/usr/bin/nova-manage network create novanetwork 192.168.32.0/24 6 32 --vlan_start 10' returned 1: Command failed, please check log for more info

All other reports show no errors and the deployment completes successfully.


Expected result:
No such error in reports.
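For context, the positional arguments to the failing command map to the network label, the fixed range CIDR, the number of networks, and the network size (with --vlan_start as an option). A minimal Python sketch of how 6 networks of 32 addresses are carved out of 192.168.32.0/24 (illustrative only, not nova's implementation; `split_fixed_range` is a hypothetical helper):

```python
import ipaddress

def split_fixed_range(cidr, num_networks, network_size):
    """Carve num_networks consecutive subnets of network_size addresses
    out of cidr, mirroring what 'nova-manage network create' is asked to do."""
    base = ipaddress.ip_network(cidr)
    # 32 addresses per network -> 5 host bits -> /27 subnets
    prefix = 32 - (network_size - 1).bit_length()
    subnets = list(base.subnets(new_prefix=prefix))[:num_networks]
    return [str(s) for s in subnets]

nets = split_fixed_range("192.168.32.0/24", 6, 32)
# 6 subnets, starting at 192.168.32.0/27
```

A /24 holds eight /27 subnets, so the requested six fit; the conflict reported below is not about sizing but about the same range being created twice.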

Comment 1 Alexander Chuzhoy 2015-01-07 19:36:38 UTC
yaml of the compute where the error is reported:
---
classes:
  foreman::plugin::staypuft_client:
    staypuft_ssh_public_key: AAAAB3NzaC1yc2EAAAADAQABAAABAQCsN0JOzz/xcBD+aMbnrvhQMmUF6KqtfDc8qikh9xQv9yDx6NJpsM48kPWC7mZRurtqWYLI+oKJNP3qw+GfeMIUODNx4kGT0RmFMYJkGUlGu4Ljtj/bn+nT8Vmr22Ny9TudOp9YjFnLZ0sB2Q7f+Zk+Gxhq6v6UJVnmEyOHzfsuuUmigU6eGimxN1r3VqqIpYvXIC8zclsUFNCu7/4RiOVx9Ybqi3+kFA8XDCjSlsRpeRasU/D3RaI0S4O8EBCsqwLFaqlTbVjCfP6CRFbxLkEufagH6pO60QxOYMB5M1FJwg2UOwAbQHcStLfMc4rQu1IuKwwhxuH8nWC+FV2flYdV
  foreman::puppet::agent::service:
    runmode: service
  quickstack::nova_network::compute:
    admin_password: e9531aa90c53700ceffed893965d75df
    amqp_host: 192.168.0.36
    amqp_password: 2d4fdfc1fd4d63e6431926a17af04951
    amqp_port: '5672'
    amqp_provider: rabbitmq
    amqp_ssl_port: '5671'
    amqp_username: openstack
    auth_host: 192.168.0.26
    auto_assign_floating_ip: 'true'
    ceilometer: true
    ceilometer_metering_secret: fc1472f8198855323b54983ff54e0715
    ceilometer_user_password: 56721998089505353c91f2b49b4e3606
    ceph_cluster_network: 192.168.0.0/24
    ceph_fsid: efcc1d30-d0af-4e3a-9414-1df617109f36
    ceph_images_key: AQCqZ61UGCvFCxAATBQWUl3d+Rngi2vNss6WyA==
    ceph_mon_host:
    - 192.168.0.8
    - 192.168.0.9
    - 192.168.0.7
    ceph_mon_initial_members:
    - maca25400702876
    - maca25400702877
    - maca25400702875
    ceph_osd_journal_size: ''
    ceph_osd_pool_default_size: ''
    ceph_public_network: 192.168.0.0/24
    ceph_volumes_key: AQCqZ61UKCFlCxAAGEiqNf6bJIF72qDRbaCixQ==
    cinder_backend_gluster: false
    cinder_backend_nfs: 'true'
    cinder_backend_rbd: 'false'
    glance_backend_rbd: 'false'
    glance_host: 192.168.0.15
    libvirt_images_rbd_ceph_conf: /etc/ceph/ceph.conf
    libvirt_images_rbd_pool: volumes
    libvirt_images_type: rbd
    libvirt_inject_key: 'false'
    libvirt_inject_password: 'false'
    mysql_ca: /etc/ipa/ca.crt
    mysql_host: 192.168.0.13
    network_create_networks: true
    network_device_mtu: ''
    network_fixed_range: 192.168.32.0/24
    network_floating_range: 10.8.30.90/30
    network_manager: VlanManager
    network_network_size: '32'
    network_num_networks: '6'
    network_overrides:
      force_dhcp_release: false
      vlan_start: '10'
    network_private_iface: ens7
    network_private_network: ''
    network_public_iface: ens8
    network_public_network: ''
    nova_db_password: eb84f464d1186452188c250931688fb2
    nova_host: 192.168.0.35
    nova_multi_host: 'true'
    nova_user_password: e19acd87d70d5f7c34a2342832f181ef
    private_iface: ''
    private_ip: 192.168.0.10
    private_network: ''
    rbd_secret_uuid: ba6d33a4-199c-470e-ba1c-b8bce5f9aa9b
    rbd_user: volumes
    ssl: 'false'
    verbose: 'true'
parameters:
  puppetmaster: staypuft.example.com
  domainname: Default domain used for provisioning
  hostgroup: base_RedHat_7/nova/Compute (Nova)
  root_pw: $1$QsrRj7I0$EZHInySd4P/R2BylYhyMT0
  puppet_ca: staypuft.example.com
  foreman_env: production
  owner_name: Admin User
  owner_email: root
  ip: 192.168.0.10
  mac: a2:54:00:86:80:96
  ntp-server: clock.redhat.com
  staypuft_ssh_public_key: AAAAB3NzaC1yc2EAAAADAQABAAABAQCsN0JOzz/xcBD+aMbnrvhQMmUF6KqtfDc8qikh9xQv9yDx6NJpsM48kPWC7mZRurtqWYLI+oKJNP3qw+GfeMIUODNx4kGT0RmFMYJkGUlGu4Ljtj/bn+nT8Vmr22Ny9TudOp9YjFnLZ0sB2Q7f+Zk+Gxhq6v6UJVnmEyOHzfsuuUmigU6eGimxN1r3VqqIpYvXIC8zclsUFNCu7/4RiOVx9Ybqi3+kFA8XDCjSlsRpeRasU/D3RaI0S4O8EBCsqwLFaqlTbVjCfP6CRFbxLkEufagH6pO60QxOYMB5M1FJwg2UOwAbQHcStLfMc4rQu1IuKwwhxuH8nWC+FV2flYdV
  time-zone: America/New_York
  ui::ceph::fsid: efcc1d30-d0af-4e3a-9414-1df617109f36
  ui::ceph::images_key: AQCqZ61UGCvFCxAATBQWUl3d+Rngi2vNss6WyA==
  ui::ceph::volumes_key: AQCqZ61UKCFlCxAAGEiqNf6bJIF72qDRbaCixQ==
  ui::cinder::backend_ceph: 'false'
  ui::cinder::backend_eqlx: 'false'
  ui::cinder::backend_lvm: 'false'
  ui::cinder::backend_nfs: 'true'
  ui::cinder::nfs_uri: 192.168.0.1:/cinder
  ui::cinder::rbd_secret_uuid: ba6d33a4-199c-470e-ba1c-b8bce5f9aa9b
  ui::deployment::amqp_provider: rabbitmq
  ui::deployment::networking: nova
  ui::deployment::platform: rhel7
  ui::glance::driver_backend: nfs
  ui::glance::nfs_network_path: 192.168.0.1:/glance
  ui::neutron::core_plugin: ml2
  ui::neutron::ml2_cisco_nexus: 'false'
  ui::neutron::ml2_l2population: 'true'
  ui::neutron::ml2_openvswitch: 'true'
  ui::neutron::network_segmentation: vxlan
  ui::nova::network_manager: VlanManager
  ui::nova::private_fixed_range: 192.168.32.0/24
  ui::nova::public_floating_range: 10.8.30.90/30
  ui::nova::vlan_range: '10:15'
  ui::passwords::admin: e9531aa90c53700ceffed893965d75df
  ui::passwords::amqp: 2d4fdfc1fd4d63e6431926a17af04951
  ui::passwords::ceilometer_metering_secret: fc1472f8198855323b54983ff54e0715
  ui::passwords::ceilometer_user: 56721998089505353c91f2b49b4e3606
  ui::passwords::cinder_db: fd41002c295851946328d88d5b13c0e4
  ui::passwords::cinder_user: 378f357141023ee285b6b694908023fb
  ui::passwords::glance_db: 186d6be13cee1d1071532231e6e0465a
  ui::passwords::glance_user: 2969a4282da6eb17c197a9c0e797cd8a
  ui::passwords::heat_auth_encrypt_key: 7274788719c9f7691ed6ce2f15471ec0
  ui::passwords::heat_cfn_user: 2d8b3035ead04eb96605ade636dcbef4
  ui::passwords::heat_db: 294d63a0c987d0bf1be39732e031e225
  ui::passwords::heat_user: 1b8031544dac52cbf2ae1127f90b7c3c
  ui::passwords::horizon_secret_key: b0ed64a01fda82be60af464020b91c88
  ui::passwords::keystone_admin_token: a3d2cb82bfc379dfc8d8e065ea69a3ef
  ui::passwords::keystone_db: 8907248f3dd159a1f1b9c15bb3a43aeb
  ui::passwords::keystone_user: 78a8485c3eabf6f5288ee3a8ad4a1b22
  ui::passwords::mode: random
  ui::passwords::mysql_root: bbc0abc0f4f8f89a23863cab07a7523f
  ui::passwords::neutron_db: cfe607fea9f93102734429268b8f93ad
  ui::passwords::neutron_metadata_proxy_secret: 6d3ceaea13c9764048872be9138f4f8d
  ui::passwords::neutron_user: 4831577fc04b349cd5410b049a77ebc2
  ui::passwords::nova_db: eb84f464d1186452188c250931688fb2
  ui::passwords::nova_user: e19acd87d70d5f7c34a2342832f181ef
  ui::passwords::swift_shared_secret: b7229fb9348cbf363dcfeb8e3a12eb80
  ui::passwords::swift_user: 6b86e8309225fc876b547d063b52a05d
environment: production

Comment 2 Alexander Chuzhoy 2015-01-07 19:37:12 UTC
yaml of the second compute - no errors:
---
classes:
  foreman::plugin::staypuft_client:
    staypuft_ssh_public_key: AAAAB3NzaC1yc2EAAAADAQABAAABAQCsN0JOzz/xcBD+aMbnrvhQMmUF6KqtfDc8qikh9xQv9yDx6NJpsM48kPWC7mZRurtqWYLI+oKJNP3qw+GfeMIUODNx4kGT0RmFMYJkGUlGu4Ljtj/bn+nT8Vmr22Ny9TudOp9YjFnLZ0sB2Q7f+Zk+Gxhq6v6UJVnmEyOHzfsuuUmigU6eGimxN1r3VqqIpYvXIC8zclsUFNCu7/4RiOVx9Ybqi3+kFA8XDCjSlsRpeRasU/D3RaI0S4O8EBCsqwLFaqlTbVjCfP6CRFbxLkEufagH6pO60QxOYMB5M1FJwg2UOwAbQHcStLfMc4rQu1IuKwwhxuH8nWC+FV2flYdV
  foreman::puppet::agent::service:
    runmode: service
  quickstack::nova_network::compute:
    admin_password: e9531aa90c53700ceffed893965d75df
    amqp_host: 192.168.0.36
    amqp_password: 2d4fdfc1fd4d63e6431926a17af04951
    amqp_port: '5672'
    amqp_provider: rabbitmq
    amqp_ssl_port: '5671'
    amqp_username: openstack
    auth_host: 192.168.0.26
    auto_assign_floating_ip: 'true'
    ceilometer: true
    ceilometer_metering_secret: fc1472f8198855323b54983ff54e0715
    ceilometer_user_password: 56721998089505353c91f2b49b4e3606
    ceph_cluster_network: 192.168.0.0/24
    ceph_fsid: efcc1d30-d0af-4e3a-9414-1df617109f36
    ceph_images_key: AQCqZ61UGCvFCxAATBQWUl3d+Rngi2vNss6WyA==
    ceph_mon_host:
    - 192.168.0.8
    - 192.168.0.9
    - 192.168.0.7
    ceph_mon_initial_members:
    - maca25400702876
    - maca25400702877
    - maca25400702875
    ceph_osd_journal_size: ''
    ceph_osd_pool_default_size: ''
    ceph_public_network: 192.168.0.0/24
    ceph_volumes_key: AQCqZ61UKCFlCxAAGEiqNf6bJIF72qDRbaCixQ==
    cinder_backend_gluster: false
    cinder_backend_nfs: 'true'
    cinder_backend_rbd: 'false'
    glance_backend_rbd: 'false'
    glance_host: 192.168.0.15
    libvirt_images_rbd_ceph_conf: /etc/ceph/ceph.conf
    libvirt_images_rbd_pool: volumes
    libvirt_images_type: rbd
    libvirt_inject_key: 'false'
    libvirt_inject_password: 'false'
    mysql_ca: /etc/ipa/ca.crt
    mysql_host: 192.168.0.13
    network_create_networks: true
    network_device_mtu: ''
    network_fixed_range: 192.168.32.0/24
    network_floating_range: 10.8.30.90/30
    network_manager: VlanManager
    network_network_size: '32'
    network_num_networks: '6'
    network_overrides:
      force_dhcp_release: false
      vlan_start: '10'
    network_private_iface: ens7
    network_private_network: ''
    network_public_iface: ens8
    network_public_network: ''
    nova_db_password: eb84f464d1186452188c250931688fb2
    nova_host: 192.168.0.35
    nova_multi_host: 'true'
    nova_user_password: e19acd87d70d5f7c34a2342832f181ef
    private_iface: ''
    private_ip: 192.168.0.11
    private_network: ''
    rbd_secret_uuid: ba6d33a4-199c-470e-ba1c-b8bce5f9aa9b
    rbd_user: volumes
    ssl: 'false'
    verbose: 'true'
parameters:
  puppetmaster: staypuft.example.com
  domainname: Default domain used for provisioning
  hostgroup: base_RedHat_7/nova/Compute (Nova)
  root_pw: $1$QsrRj7I0$EZHInySd4P/R2BylYhyMT0
  puppet_ca: staypuft.example.com
  foreman_env: production
  owner_name: Admin User
  owner_email: root
  ip: 192.168.0.11
  mac: a2:54:00:86:80:97
  ntp-server: clock.redhat.com
  staypuft_ssh_public_key: AAAAB3NzaC1yc2EAAAADAQABAAABAQCsN0JOzz/xcBD+aMbnrvhQMmUF6KqtfDc8qikh9xQv9yDx6NJpsM48kPWC7mZRurtqWYLI+oKJNP3qw+GfeMIUODNx4kGT0RmFMYJkGUlGu4Ljtj/bn+nT8Vmr22Ny9TudOp9YjFnLZ0sB2Q7f+Zk+Gxhq6v6UJVnmEyOHzfsuuUmigU6eGimxN1r3VqqIpYvXIC8zclsUFNCu7/4RiOVx9Ybqi3+kFA8XDCjSlsRpeRasU/D3RaI0S4O8EBCsqwLFaqlTbVjCfP6CRFbxLkEufagH6pO60QxOYMB5M1FJwg2UOwAbQHcStLfMc4rQu1IuKwwhxuH8nWC+FV2flYdV
  time-zone: America/New_York
  ui::ceph::fsid: efcc1d30-d0af-4e3a-9414-1df617109f36
  ui::ceph::images_key: AQCqZ61UGCvFCxAATBQWUl3d+Rngi2vNss6WyA==
  ui::ceph::volumes_key: AQCqZ61UKCFlCxAAGEiqNf6bJIF72qDRbaCixQ==
  ui::cinder::backend_ceph: 'false'
  ui::cinder::backend_eqlx: 'false'
  ui::cinder::backend_lvm: 'false'
  ui::cinder::backend_nfs: 'true'
  ui::cinder::nfs_uri: 192.168.0.1:/cinder
  ui::cinder::rbd_secret_uuid: ba6d33a4-199c-470e-ba1c-b8bce5f9aa9b
  ui::deployment::amqp_provider: rabbitmq
  ui::deployment::networking: nova
  ui::deployment::platform: rhel7
  ui::glance::driver_backend: nfs
  ui::glance::nfs_network_path: 192.168.0.1:/glance
  ui::neutron::core_plugin: ml2
  ui::neutron::ml2_cisco_nexus: 'false'
  ui::neutron::ml2_l2population: 'true'
  ui::neutron::ml2_openvswitch: 'true'
  ui::neutron::network_segmentation: vxlan
  ui::nova::network_manager: VlanManager
  ui::nova::private_fixed_range: 192.168.32.0/24
  ui::nova::public_floating_range: 10.8.30.90/30
  ui::nova::vlan_range: '10:15'
  ui::passwords::admin: e9531aa90c53700ceffed893965d75df
  ui::passwords::amqp: 2d4fdfc1fd4d63e6431926a17af04951
  ui::passwords::ceilometer_metering_secret: fc1472f8198855323b54983ff54e0715
  ui::passwords::ceilometer_user: 56721998089505353c91f2b49b4e3606
  ui::passwords::cinder_db: fd41002c295851946328d88d5b13c0e4
  ui::passwords::cinder_user: 378f357141023ee285b6b694908023fb
  ui::passwords::glance_db: 186d6be13cee1d1071532231e6e0465a
  ui::passwords::glance_user: 2969a4282da6eb17c197a9c0e797cd8a
  ui::passwords::heat_auth_encrypt_key: 7274788719c9f7691ed6ce2f15471ec0
  ui::passwords::heat_cfn_user: 2d8b3035ead04eb96605ade636dcbef4
  ui::passwords::heat_db: 294d63a0c987d0bf1be39732e031e225
  ui::passwords::heat_user: 1b8031544dac52cbf2ae1127f90b7c3c
  ui::passwords::horizon_secret_key: b0ed64a01fda82be60af464020b91c88
  ui::passwords::keystone_admin_token: a3d2cb82bfc379dfc8d8e065ea69a3ef
  ui::passwords::keystone_db: 8907248f3dd159a1f1b9c15bb3a43aeb
  ui::passwords::keystone_user: 78a8485c3eabf6f5288ee3a8ad4a1b22
  ui::passwords::mode: random
  ui::passwords::mysql_root: bbc0abc0f4f8f89a23863cab07a7523f
  ui::passwords::neutron_db: cfe607fea9f93102734429268b8f93ad
  ui::passwords::neutron_metadata_proxy_secret: 6d3ceaea13c9764048872be9138f4f8d
  ui::passwords::neutron_user: 4831577fc04b349cd5410b049a77ebc2
  ui::passwords::nova_db: eb84f464d1186452188c250931688fb2
  ui::passwords::nova_user: e19acd87d70d5f7c34a2342832f181ef
  ui::passwords::swift_shared_secret: b7229fb9348cbf363dcfeb8e3a12eb80
  ui::passwords::swift_user: 6b86e8309225fc876b547d063b52a05d
environment: production

Comment 3 Alexander Chuzhoy 2015-01-07 19:42:25 UTC
Created attachment 977535 [details]
logs - controller3

Comment 4 Alexander Chuzhoy 2015-01-07 19:45:07 UTC
Created attachment 977536 [details]
logs - controller2

Comment 5 Alexander Chuzhoy 2015-01-07 19:47:43 UTC
Created attachment 977537 [details]
logs - controller1

Comment 6 Alexander Chuzhoy 2015-01-07 19:49:06 UTC
Created attachment 977538 [details]
foreman logs

Comment 7 Alexander Chuzhoy 2015-01-07 19:50:19 UTC
Created attachment 977539 [details]
logs from compute1

Comment 8 Alexander Chuzhoy 2015-01-07 19:52:08 UTC
Created attachment 977540 [details]
logs from compute2

Comment 10 Mike Burns 2015-01-08 19:07:14 UTC
Jirka, 

I think this is related to the orchestration changes.  We need to run 1 compute by itself first, then the rest of the computes in parallel.  Did that get lost in the migration?
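The ordering Mike describes can be sketched as follows (a toy Python model with a hypothetical `configure()` stand-in; the real fix lives in staypuft's orchestration, which drives puppet over SSH):

```python
from concurrent.futures import ThreadPoolExecutor

def configure(host):
    # Placeholder for a puppet run on one host.
    return f"{host} configured"

def deploy_computes(hosts):
    """Run the first compute alone (it creates the shared network),
    then configure the remaining computes in parallel."""
    results = [configure(hosts[0])]
    with ThreadPoolExecutor() as pool:
        results += list(pool.map(configure, hosts[1:]))
    return results
```

Serializing only the first host keeps most of the parallelism while ensuring exactly one node performs the network creation.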

Comment 11 Omri Hochman 2015-01-08 19:37:21 UTC
It looks like, when there is more than one compute in a nova-network deployment, the puppet run on the second compute attempts to create a network that already exists, which eventually produces this error on the second compute in /var/log/nova/nova-manage.log:


 2015-01-07 17:12:51.545 12483 CRITICAL nova [req-178c9188-1c9f-4925-8a25-8687036822b4 None] CidrConflict: Requested cidr (192.168.32.0/24) conflicts with existing cidr (192.168.32.0/24)
2015-01-07 17:12:51.545 12483 TRACE nova Traceback (most recent call last):
2015-01-07 17:12:51.545 12483 TRACE nova   File "/usr/bin/nova-manage", line 10, in <module>
2015-01-07 17:12:51.545 12483 TRACE nova     sys.exit(main())
2015-01-07 17:12:51.545 12483 TRACE nova   File "/usr/lib/python2.7/site-packages/nova/cmd/manage.py", line 1401, in main
2015-01-07 17:12:51.545 12483 TRACE nova     ret = fn(*fn_args, **fn_kwargs)
2015-01-07 17:12:51.545 12483 TRACE nova   File "<string>", line 2, in create
2015-01-07 17:12:51.545 12483 TRACE nova   File "/usr/lib/python2.7/site-packages/nova/cmd/manage.py", line 497, in validate_network_plugin
2015-01-07 17:12:51.545 12483 TRACE nova     return f(*args, **kwargs)
2015-01-07 17:12:51.545 12483 TRACE nova   File "/usr/lib/python2.7/site-packages/nova/cmd/manage.py", line 545, in create
2015-01-07 17:12:51.545 12483 TRACE nova     net_manager.create_networks(context.get_admin_context(), **kwargs)
2015-01-07 17:12:51.545 12483 TRACE nova   File "/usr/lib/python2.7/site-packages/nova/network/manager.py", line 1188, in create_networks
2015-01-07 17:12:51.545 12483 TRACE nova     return self._do_create_networks(context, **kwargs)
2015-01-07 17:12:51.545 12483 TRACE nova   File "/usr/lib/python2.7/site-packages/nova/network/manager.py", line 1263, in _do_create_networks
2015-01-07 17:12:51.545 12483 TRACE nova     other=subnet)
2015-01-07 17:12:51.545 12483 TRACE nova CidrConflict: Requested cidr (192.168.32.0/24) conflicts with existing cidr (192.168.32.0/24)
2015-01-07 17:12:51.545 12483 TRACE nova

Comment 12 Jiri Stransky 2015-01-09 11:19:40 UTC
You're all correct :) If we previously deployed one compute node first and then the rest, then this did get lost in the transition to puppetssh. The error is visible in the compute1 logs that Sasha posted. It's most probably a race condition when creating this network:

https://github.com/stackforge/puppet-nova/blob/9fdcb01c96cc085f8a57a855bd70cc8129699a6f/manifests/network.pp#L105-L110

The resource provider checks for the existence of the network, but it can still happen that two nodes attempt to create the same network:

https://github.com/stackforge/puppet-nova/blob/9fdcb01c96cc085f8a57a855bd70cc8129699a6f/lib/puppet/provider/nova_network/nova_manage.rb

Fortunately the error is caught by our approach of running puppet three times. It only occurs on the first run; the other two runs succeed. It would still be nice if we could deploy just one compute node first and then the rest, and get rid of the error on the first run. I'll investigate what can be done about it.
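The check-then-create race described above can be reproduced in miniature (a toy Python model, not the actual Ruby provider; the barrier forces both "nodes" past the existence check before either creates):

```python
import threading

class NetworkDB:
    """Toy stand-in for nova's network table, where the CIDR conflict
    is actually detected (cf. CidrConflict in the traceback above)."""
    def __init__(self):
        self._lock = threading.Lock()
        self._cidrs = set()

    def create(self, cidr):
        with self._lock:
            if cidr in self._cidrs:
                raise RuntimeError(f"CidrConflict: {cidr}")
            self._cidrs.add(cidr)

def provider_run(db, cidr, barrier, results):
    exists = cidr in db._cidrs   # existence check ("network list")
    barrier.wait()               # both nodes finish the check first
    if not exists:
        try:
            db.create(cidr)      # create: not atomic with the check
            results.append("created")
        except RuntimeError as e:
            results.append(str(e))

db, barrier, results = NetworkDB(), threading.Barrier(2), []
threads = [threading.Thread(target=provider_run,
                            args=(db, "192.168.32.0/24", barrier, results))
           for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
# exactly one thread creates the network; the other hits the conflict
```

Because the existence check and the create are separate steps, no amount of per-node checking closes the window; the fix has to serialize the first creator, as the pull request below does.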

Comment 13 Jiri Stransky 2015-01-09 17:33:16 UTC
Pull request upstream:

https://github.com/theforeman/staypuft/pull/400

Comment 16 Alexander Chuzhoy 2015-01-16 00:06:22 UTC
Verified:
Environment:
rhel-osp-installer-client-0.5.5-1.el7ost.noarch
ruby193-rubygem-foreman_openstack_simplify-0.0.6-8.el7ost.noarch
openstack-puppet-modules-2014.2.8-1.el7ost.noarch
openstack-foreman-installer-3.0.10-2.el7ost.noarch
rhel-osp-installer-0.5.5-1.el7ost.noarch
ruby193-rubygem-staypuft-0.5.12-1.el7ost.noarch

The reported issue doesn't reproduce.

Comment 18 errata-xmlrpc 2015-02-09 15:19:01 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-0156.html

