Bug 1127736 - Failed to create instance due to "NoFloatingIpInterface: Interface eth0 not found."
Summary: Failed to create instance due to "NoFloatingIpInterface: Interface eth0 not found."
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-foreman-installer
Version: 5.0 (RHEL 7)
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ga
Sub Component: Installer
Assignee: Lars Kellogg-Stedman
QA Contact: nlevinki
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-08-07 12:51 UTC by nlevinki
Modified: 2014-08-21 18:08 UTC
CC List: 10 users

Fixed In Version: openstack-foreman-installer-2.0.18-1.el6ost
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-08-21 18:08:24 UTC
Target Upstream Version:
Embargoed:


Attachments
Logs nova and yaml files (955.59 KB, application/x-gzip)
2014-08-17 11:41 UTC, Tzach Shefi


Links
Red Hat Product Errata RHBA-2014:1090 (normal, SHIPPED_LIVE): Red Hat Enterprise Linux OpenStack Platform Enhancement Advisory, last updated 2014-08-22 15:28:08 UTC

Description nlevinki 2014-08-07 12:51:14 UTC
Description of problem:
I installed OpenStack (non-HA, Nova networking) using the Staypuft installer, and in Staypuft I configured the ens7 and ens8 interfaces rather than eth* names.
When I try to create an instance, it fails with "NoFloatingIpInterface: Interface eth0 not found".
I checked nova.conf and the line is commented out: #public_interface=eth0.
This means the default configuration is being used, not what I configured in Staypuft.

Version-Release number of selected component (if applicable):


How reproducible:
100%

Steps to Reproduce:
1. Install Staypuft
2. Create a non-HA Nova network deployment
3. Configure ens7 and ens8 as the interfaces

Actual results:


Expected results:


Additional info:

Comment 1 nlevinki 2014-08-07 12:54:46 UTC
Log from nova-scheduler
2014-08-07 11:44:47.300 1246 TRACE nova.compute.manager [instance: 6a1be46b-3fee-4e05-a30d-cd786dd033ad]   File "/usr/lib/python2.7/site-packages/nova/network/floating_ips.py", line 389, in do_associate
2014-08-07 11:44:47.300 1246 TRACE nova.compute.manager [instance: 6a1be46b-3fee-4e05-a30d-cd786dd033ad]     interface=interface)
2014-08-07 11:44:47.300 1246 TRACE nova.compute.manager [instance: 6a1be46b-3fee-4e05-a30d-cd786dd033ad]
2014-08-07 11:44:47.300 1246 TRACE nova.compute.manager [instance: 6a1be46b-3fee-4e05-a30d-cd786dd033ad] NoFloatingIpInterface: Interface eth0 not found.
2014-08-07 11:44:47.300 1246 TRACE nova.compute.manager [instance: 6a1be46b-3fee-4e05-a30d-cd786dd033ad]
2014-08-07 11:44:47.300 1246 TRACE nova.compute.manager [instance: 6a1be46b-3fee-4e05-a30d-cd786dd033ad]
2014-08-07 11:45:07.240 1246 AUDIT nova.compute.resource_tracker [-] Auditing locally available compute resources
2014-08-07 11:45:07.400 1246 AUDIT nova.compute.resource_tracker [-] Free ram (MB): 3185
2014-08-07 11:45:07.401 1246 AUDIT nova.compute.resource_tracker [-] Free disk (GB): 21
2014-08-07 11:45:07.401 1246 AUDIT nova.compute.resource_tracker [-] Free VCPUS: 2
2014-08-07 11:45:07.439 1246 INFO nova.compute.resource_tracker [-] Compute_service record updated for maca25400868097.example.com:maca25400868097.example.com

Comment 6 Lars Kellogg-Stedman 2014-08-07 17:54:34 UTC
I think I can reproduce this problem.  I have deployed a non-HA nova-network environment with one controller and one compute node.  The compute node is mac5254003651fc.localdomain.

I have in /var/lib/puppet/yaml/node/mac5254003651fc.localdomain.yaml :

      network_public_iface: eth2

I've added the following to nova::network:

  notify { "network_manager = $network_manager": }
  notify { "public_interface = $public_interface": }

And running 'puppet agent -vt' on the compute host, I see:

Notice: network_manager = nova.network.manager.FlatDHCPManager
Notice: public_interface = 

nova::network is instantiated by quickstack::nova_network::compute as:

  class { '::nova::network':
    private_interface => "$priv_nic",
    public_interface  => "$pub_nic",
    fixed_range       => "$network_fixed_range",
    num_networks      => $network_num_networks,
    network_size      => $network_network_size,
    floating_range    => "$network_floating_range",
    enabled           => true,
    network_manager   => "nova.network.manager.$network_manager",
    config_overrides  => $network_overrides,
    create_networks   => $network_create_networks,
    install_service   => true,
  }

Where $pub_nic is:

  $pub_nic = find_nic("$network_public_network","$network_public_iface","")

If I add the following after that line in the manifest:

  notify {" in quickstack::nova_network::compute, network_public_iface = $network_public_iface":}
  notify {" in quickstack::nova_network::compute, pub_nic = $pub_nic":}

I get:

Notice:  in quickstack::nova_network::compute, network_public_iface = eth2
Notice:  in quickstack::nova_network::compute, pub_nic = 

So it looks like the find_nic() lookup is failing.  I'm going to take a look at that function and see what's going on.

Comment 7 Lars Kellogg-Stedman 2014-08-07 18:07:29 UTC
So, quickstack::nova_network::compute defaults:

  $network_public_network       = '192.168.201.0',

And calls find_nic() like this:

    $pub_nic = find_nic("$network_public_network","$network_public_iface","")

The find_nic() function looks like this:

    if (the_network != '')
      function_get_nic_from_network([the_network])
    elsif (the_ip != '')
      function_get_nic_from_ip([the_ip])
    else
      the_nic
    end

So if the_network is defined, we use function_get_nic_from_network(), which will iterate over all the network_<ifname> facts to find the appropriate interface.

Unfortunately, looking at staypuft/app/lib/staypuft/seeder.rb, staypuft never passes $network_public_network into quickstack::nova_network::compute, so we're stuck with the default.

So if no interfaces on your system are using 192.168.201.0/24, this will never find an interface.
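
To illustrate the lookup behaviour, here is a minimal, self-contained Ruby sketch. This is not the actual astapor/openstack-puppet-modules code; the fact names, values, and helper implementation are assumptions for illustration only:

  #!/usr/bin/env ruby
  # Illustrative sketch only -- not the real find_nic()/get_nic_from_network code.
  # Facter exposes one "network_<ifname>" fact per interface; the lookup walks
  # those facts and returns the first interface whose network matches.
  FACTS = {
    'network_ens7' => '192.168.0.0',
    'network_ens8' => '10.35.160.0',
  }

  def get_nic_from_network(network)
    match = FACTS.find { |_fact, net| net == network }
    match ? match.first.sub('network_', '') : ''
  end

  def find_nic(the_network, the_nic, the_ip = '')
    # The network argument takes precedence; the real function also supports
    # a lookup by IP, which is omitted here.
    return get_nic_from_network(the_network) unless the_network.empty?
    the_nic
  end

  # With the hard-coded default network, nothing matches, so pub_nic is empty
  # and nova.conf is left with the commented-out default public_interface:
  puts find_nic('192.168.201.0', 'ens8').inspect   # => ""
  # With the network left unset, find_nic() falls back to the configured iface:
  puts find_nic('', 'ens8').inspect                # => "ens8"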

Comment 8 Lars Kellogg-Stedman 2014-08-07 18:30:08 UTC
It looks as if network_public_network is only ever used in the call to find_nic(), so if we unset it, find_nic() will use network_public_iface instead, which *is* set.

I am testing that out right now.

Comment 9 Lars Kellogg-Stedman 2014-08-07 20:40:24 UTC
Fix proposed upstream:

https://github.com/redhat-openstack/astapor/pull/343

Comment 10 Lars Kellogg-Stedman 2014-08-07 22:28:19 UTC
I've confirmed that the referenced pull request resolves this problem.  

Without this patch in place, /etc/nova/nova.conf on my compute node looked like this:

  #public_interface=eth0

After applying this change and re-deploying, /etc/nova/nova.conf contains:

  public_interface=eth2

which is correct.

Comment 12 nlevinki 2014-08-13 13:02:34 UTC
Hi,
I tried to create a VM with the latest Staypuft build:
openstack-nova-conductor-2014.1.1-4.el7ost.noarch
openstack-nova-novncproxy-2014.1.1-4.el7ost.noarch
python-novaclient-2.17.0-2.el7ost.noarch
openstack-nova-common-2014.1.1-4.el7ost.noarch
openstack-nova-console-2014.1.1-4.el7ost.noarch
openstack-nova-cert-2014.1.1-4.el7ost.noarch
python-nova-2014.1.1-4.el7ost.noarch
openstack-nova-scheduler-2014.1.1-4.el7ost.noarch
openstack-nova-api-2014.1.1-4.el7ost.noarch

It failed with:
Unexpected error while running command.\\nCommand: sudo nova-rootwrap /etc/nova/rootwrap.conf ip addr del 192.168.100.254/24 brd 192.168.100.255 scope global secondary dev br100\\nExit code: 255\\nStdout: \\\'\\\'\\nStderr: \\\'Error: either "local" is duplicate, or "secondary" is a garbage

The scheduler log shows this error:
2014-08-13 11:45:08.106 7470 INFO nova.scheduler.filter_scheduler [req-bcc41af1-b193-4580-b9fe-eeb96c828d95 6f9d4879389b4122a873a3a995942da4 e067a2bdf0064dbf867840e17eba585c] Attempting to build 1 instance(s) uuids: [u'9d5ff3be-1f63-4dcb-a8e1-4dfd4c1b2ab2']                                   
2014-08-13 11:45:08.107 7470 ERROR nova.scheduler.filter_scheduler [req-bcc41af1-b193-4580-b9fe-eeb96c828d95 6f9d4879389b4122a873a3a995942da4 e067a2bdf0064dbf867840e17eba585c] [instance: 9d5ff3be-1f63-4dcb-a8e1-4dfd4c1b2ab2] Error from last host: maca25400868096.example.com (node maca25400868096.example.com): [u'Traceback (most recent call last):\n', u'  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1305, in _build_instance\n    set_access_ip=set_access_ip)\n', u'  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 393, in decorated_function\n    return function(self, context, *args, **kwargs)\n', u'  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1717, in _spawn\n    LOG.exception(_(\'Instance failed to spawn\'), instance=instance)\n', u'  File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__\n    six.reraise(self.type_, self.value, self.tb)\n', u'  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1714, in _spawn\n    block_device_info)\n', u'  File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2282, in spawn\n    admin_pass=admin_password)\n', u'  File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2733, in _create_image\n    instance, network_info, admin_pass, files, suffix)\n', u'  File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2551, in _inject_data\n    net = netutils.get_injected_network_template(network_info)\n', u'  File "/usr/lib/python2.7/site-packages/nova/virt/netutils.py", line 71, in get_injected_network_template\n    if not (network_info and template):\n', u'  File "/usr/lib/python2.7/site-packages/nova/network/model.py", line 420, in __len__\n    return self._sync_wrapper(fn, *args, **kwargs)\n', u'  File "/usr/lib/python2.7/site-packages/nova/network/model.py", line 407, in _sync_wrapper\n    self.wait()\n', u'  File "/usr/lib/python2.7/site-packages/nova/network/model.py", line 439, in wait\n    self[:] = self._gt.wait()\n', u'  File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 168, in wait\n    return self._exit_event.wait()\n', u'  File "/usr/lib/python2.7/site-packages/eventlet/event.py", line 120, in wait\n    current.throw(*self._exc)\n', u'  File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 194, in main\n    result = function(*args, **kwargs)\n', u'  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1504, in _allocate_network_async\n    dhcp_options=dhcp_options)\n', u'  File "/usr/lib/python2.7/site-packages/nova/network/api.py", line 95, in wrapped\n    return func(self, context, *args, **kwargs)\n', u'  File "/usr/lib/python2.7/site-packages/nova/network/api.py", line 49, in wrapper\n    res = f(self, context, *args, **kwargs)\n', u'  File "/usr/lib/python2.7/site-packages/nova/network/api.py", line 303, in allocate_for_instance\n    nw_info = self.network_rpcapi.allocate_for_instance(context, **args)\n', u'  File "/usr/lib/python2.7/site-packages/nova/network/rpcapi.py", line 170, in allocate_for_instance\n    macs=jsonutils.to_primitive(macs))\n', u'  File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/client.py", line 150, in call\n    wait_for_reply=True, timeout=timeout)\n', u'  File "/usr/lib/python2.7/site-packages/oslo/messaging/transport.py", line 90, in _send\n    timeout=timeout)\n', u'  File "/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 412, in send\n    
return self._send(target, ctxt, message, wait_for_reply, timeout)\n', u'  File "/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 405, in _send\n    raise result\n', u'RemoteError: Remote error: ProcessExecutionError Unexpected error while running command.\nCommand: sudo nova-rootwrap /etc/nova/rootwrap.conf ip addr del 192.168.100.254/24 brd 192.168.100.255 scope global secondary dev br100\nExit code: 255\nStdout: \'\'\nStderr: \'Error: either "local" is duplicate, or "secondary" is a garbage.\\n\'\n[u\'Traceback (most recent call last):\\n\', u\'  File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 133, in _dispatch_and_reply\\n    incoming.message))\\n\', u\'  File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 176, in _dispatch\\n    return self._do_dispatch(endpoint, method, ctxt, args)\\n\', u\'  File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 122, in _do_dispatch\\n    result = getattr(endpoint, method)(ctxt, **new_args)\\n\', u\'  File "/usr/lib/python2.7/site-packages/nova/network/floating_ips.py", line 119, in allocate_for_instance\\n    **kwargs)\\n\', u\'  File "/usr/lib/python2.7/site-packages/nova/network/manager.py", line 515, in allocate_for_instance\\n    requested_networks=requested_networks)\\n\', u\'  File "/usr/lib/python2.7/site-packages/nova/network/manager.py", line 216, in _allocate_fixed_ips\\n    vpn=vpn, address=address)\\n\', u\'  File "/usr/lib/python2.7/site-packages/nova/network/manager.py", line 899, in allocate_fixed_ip\\n    quotas.rollback(context)\\n\', u\'  File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__\\n    six.reraise(self.type_, self.value, self.tb)\\n\', u\'  File "/usr/lib/python2.7/site-packages/nova/network/manager.py", line 892, in allocate_fixed_ip\\n    self._setup_network_on_host(context, network)\\n\', u\'  File "/usr/lib/python2.7/site-packages/nova/network/manager.py", line 1660, in _setup_network_on_host\\n    self.l3driver.initialize_gateway(network)\\n\', u\'  File "/usr/lib/python2.7/site-packages/nova/network/l3.py", line 105, in initialize_gateway\\n    linux_net.initialize_gateway_device(dev, network_ref)\\n\', u\'  File "/usr/lib/python2.7/site-packages/nova/openstack/common/lockutils.py", line 249, in inner\\n    return f(*args, **kwargs)\\n\', u\'  File "/usr/lib/python2.7/site-packages/nova/network/linux_net.py", line 860, in initialize_gateway_device\\n    run_as_root=True, check_exit_code=[0, 2, 254])\\n\', u\'  File "/usr/lib/python2.7/site-packages/nova/network/linux_net.py", line 1205, in _execute\\n    return utils.execute(*cmd, **kwargs)\\n\', u\'  File "/usr/lib/python2.7/site-packages/nova/utils.py", line 165, in execute\\n    return processutils.execute(*cmd, **kwargs)\\n\', u\'  File "/usr/lib/python2.7/site-packages/nova/openstack/common/processutils.py", line 193, in execute\\n    cmd=\\\' \\\'.join(cmd))\\n\', u\'ProcessExecutionError: Unexpected error while running command.\\nCommand: sudo nova-rootwrap /etc/nova/rootwrap.conf ip addr del 192.168.100.254/24 brd 192.168.100.255 scope global secondary dev br100\\nExit code: 255\\nStdout: \\\'\\\'\\nStderr: \\\'Error: either "local" is duplicate, or "secondary" is a garbage.\\\\n\\\'\\n\'].\n']
2014-08-13 11:45:08.128 7470 INFO nova.filters [req-bcc41af1-b193-4580-b9fe-eeb96c828d95 6f9d4879389b4122a873a3a995942da4 e067a2bdf0064dbf867840e17eba585c] Filter RetryFilter returned 0 hosts
2014-08-13 11:45:08.128 7470 WARNING nova.scheduler.driver [req-bcc41af1-b193-4580-b9fe-eeb96c828d95 6f9d4879389b4122a873a3a995942da4 e067a2bdf0064dbf867840e17eba585c] [instance: 9d5ff3be-1f63-4dcb-a8e1-4dfd4c1b2ab2] Setting instance to ERROR state.

Comment 13 Lars Kellogg-Stedman 2014-08-13 14:37:58 UTC
It's not clear to me that this backtrace is related to the changes discussed in this issue.  Would you mind opening a new issue with this information?  Thanks.

Comment 14 Tzach Shefi 2014-08-17 11:41:20 UTC
Failed; instances are still stuck in status "build".
1. Staypuft deployment: Nova non-HA flat network + 2 compute hosts.
2. Uploaded an image.
3. Booted an instance from the Cirros image.
4. Stuck in "Build" state.

Version:
rhel-osp-installer-0.1.10-2.el6ost.noarch
foreman-installer-1.5.0-0.6.RC2.el6ost.noarch
openstack-foreman-installer-2.0.21-1.el6ost.noarch

[root@staypuft ~]# grep -ir network_public_iface  /var/lib/puppet/yaml/node/
/var/lib/puppet/yaml/node/maca25400868096.example.com.yaml:      network_public_iface: ens8
/var/lib/puppet/yaml/node/maca25400868097.example.com.yaml:      network_public_iface: ens8

Attached the yaml files plus the compute logs (logs2.tar.gz).

Comment 15 Tzach Shefi 2014-08-17 11:41:48 UTC
Created attachment 927452 [details]
Logs nova and yaml files

Comment 16 Jason Guiditta 2014-08-18 14:36:43 UTC
From compute-controller2 log, I see:

2014-08-17 10:21:16.890 11692 ERROR oslo.messaging._drivers.impl_rabbit [-] AMQP server on 192.168.0.5:5672 is unreachable: timed out. Trying again in 1 seconds.


This is after connecting successfully earlier:

2014-08-17 09:22:36.678 11692 INFO oslo.messaging._drivers.impl_rabbit [-] Connected to AMQP server on 192.168.0.5:5672

All other errors in this log happen after this.


I see the same sequence on compute number 1.

The scheduler log from the controller shows the later failed attempts to create a VM, so it appears to me to all stem from some issue with AMQP.  This is a different bug, imo, and should be tracked separately.  This one should be confirmed, as it would not have gotten this far had the bug been unfixed.  To verify this bug, please check that nova.conf does indeed have the expected public_interface setting.  For the new bug, please check whether AMQP is still running when you see the reported connection errors.

Comment 17 Alexander Chuzhoy 2014-08-18 18:11:55 UTC
While I don't see the NICs I configured listed in /etc/nova/nova.conf, I was able to create an instance with no issues.

Comment 18 Alexander Chuzhoy 2014-08-18 18:12:31 UTC
Comment #17:
openstack-foreman-installer-2.0.21-1.el6ost.noarch
ruby193-rubygem-foreman_openstack_simplify-0.0.6-8.el6ost.noarch
openstack-puppet-modules-2014.1-20.2.el6ost.noarch

Comment 19 Alexander Chuzhoy 2014-08-18 21:46:49 UTC
Verified: rhel-osp-installer-0.1.10-2.el6ost.noarch
The NIC names are supposed to be in the nova.conf file on the compute nodes only. Verified they exist.
[root@maca25400868096 ~]# grep -e ens7 -e ens8 /etc/nova/nova.conf
public_interface=ens8
flat_interface=ens7

Comment 20 errata-xmlrpc 2014-08-21 18:08:24 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2014-1090.html

