Bug 1061637 - Unable to boot nova instances. PortLimitExceeded: Maximum number of ports exceeded
Summary: Unable to boot nova instances. PortLimitExceeded: Maximum number of ports exceeded
Keywords:
Status: CLOSED INSUFFICIENT_DATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-neutron
Version: 4.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: 5.0 (RHEL 7)
Assignee: RHOS Maint
QA Contact: Ofer Blaut
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-02-05 09:44 UTC by shilpa
Modified: 2023-09-14 02:03 UTC
CC List: 8 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-05-28 11:48:48 UTC
Target Upstream Version:
Embargoed:


Attachments
Compute logs (1.48 MB, text/plain), 2014-02-12 18:18 UTC, shilpa
Neutron logs (820.06 KB, text/plain), 2014-02-12 18:19 UTC, shilpa

Description shilpa 2014-02-05 09:44:50 UTC
Description of problem:

New instances fail to boot on a previously working setup on RHEL6.5.


Version-Release number of selected component (if applicable):

openstack-nova-2013.2.1-2.el6ost.noarch


How reproducible: Consistent


Steps to Reproduce:

1. Configure openstack nova. 
2. Create an image.
3. Boot a nova instance using the image:

# nova boot --flavor 2 --image a2999e8f-b8f8-47d3-af21-88cb953f9b6e instance2

# nova list

| b62faf56-94f1-4145-ac3d-fd9541d0d462 | instance2     | BUILD  | spawning   | NOSTATE     |                     |

Errors out after some time:

| b62faf56-94f1-4145-ac3d-fd9541d0d462 | instance2     | ERROR  | None       | NOSTATE     |                     |


Before hitting this issue, I had come across another problem where instances failed to boot with this error:

TRACE nova.compute.manager [instance: 787f6b8d-2851-41d2-9a2d-d7b5701c546f] Stderr: '2014-01-07T15:36:50Z|00002|reconnect|WARN|unix:/var/run/openvswitch/db.sock: connection attempt failed (Connection refused)\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Connection refused)\n'

To work around this failure, I did the following (see the verification sketch after these steps):

1. rm -rf /var/run/openvswitch/db.sock
2. service openvswitch restart
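
A quick way to verify the restart worked (my own sketch, assuming the default socket path) is to confirm ovs-vsctl can reach the database again:

# ovs-vsctl show

If the socket is healthy, this prints the bridge and port configuration instead of the "database connection failed" error above.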

After applying the above workaround, instance creation was successful again.

Now, however, creating instances fails with the following errors in the compute logs:

2014-02-05 14:37:29.347 19045 ERROR nova.network.neutronv2.api [-] [instance: 4bc922f5-079a-442c-ae69-1099d2bc6103] Neutron error creating port on network 2290d22d-c043-4c84-90b5-2c3415fe6a68
2014-02-05 14:37:29.347 19045 TRACE nova.network.neutronv2.api [instance: 4bc922f5-079a-442c-ae69-1099d2bc6103] Traceback (most recent call last):
2014-02-05 14:37:29.347 19045 TRACE nova.network.neutronv2.api [instance: 4bc922f5-079a-442c-ae69-1099d2bc6103]   File "/usr/lib/python2.6/site-packages/nova/network/neutronv2/api.py", line 182, in _create_port
2014-02-05 14:37:29.347 19045 TRACE nova.network.neutronv2.api [instance: 4bc922f5-079a-442c-ae69-1099d2bc6103]     port_id = port_client.create_port(port_req_body)['port']['id']
2014-02-05 14:37:29.347 19045 TRACE nova.network.neutronv2.api [instance: 4bc922f5-079a-442c-ae69-1099d2bc6103]   File "/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", line 108, in with_params
2014-02-05 14:37:29.347 19045 TRACE nova.network.neutronv2.api [instance: 4bc922f5-079a-442c-ae69-1099d2bc6103]     ret = self.function(instance, *args, **kwargs)
2014-02-05 14:37:29.347 19045 TRACE nova.network.neutronv2.api [instance: 4bc922f5-079a-442c-ae69-1099d2bc6103]   File "/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", line 308, in create_port
2014-02-05 14:37:29.347 19045 TRACE nova.network.neutronv2.api [instance: 4bc922f5-079a-442c-ae69-1099d2bc6103]     return self.post(self.ports_path, body=body)
2014-02-05 14:37:29.347 19045 TRACE nova.network.neutronv2.api [instance: 4bc922f5-079a-442c-ae69-1099d2bc6103]   File "/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", line 1188, in post
2014-02-05 14:37:29.347 19045 TRACE nova.network.neutronv2.api [instance: 4bc922f5-079a-442c-ae69-1099d2bc6103]     headers=headers, params=params)
2014-02-05 14:37:29.347 19045 TRACE nova.network.neutronv2.api [instance: 4bc922f5-079a-442c-ae69-1099d2bc6103]   File "/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", line 1111, in do_request
2014-02-05 14:37:29.347 19045 TRACE nova.network.neutronv2.api [instance: 4bc922f5-079a-442c-ae69-1099d2bc6103]     self._handle_fault_response(status_code, replybody)
2014-02-05 14:37:29.347 19045 TRACE nova.network.neutronv2.api [instance: 4bc922f5-079a-442c-ae69-1099d2bc6103]   File "/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", line 1081, in _handle_fault_response
2014-02-05 14:37:29.347 19045 TRACE nova.network.neutronv2.api [instance: 4bc922f5-079a-442c-ae69-1099d2bc6103]     exception_handler_v20(status_code, des_error_body)
2014-02-05 14:37:29.347 19045 TRACE nova.network.neutronv2.api [instance: 4bc922f5-079a-442c-ae69-1099d2bc6103]   File "/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", line 93, in exception_handler_v20
2014-02-05 14:37:29.347 19045 TRACE nova.network.neutronv2.api [instance: 4bc922f5-079a-442c-ae69-1099d2bc6103]     message=msg)
2014-02-05 14:37:29.347 19045 TRACE nova.network.neutronv2.api [instance: 4bc922f5-079a-442c-ae69-1099d2bc6103] NeutronClientException: 409-{u'NeutronError': {u'message': u'No more IP addresses available on network 2290d22d-c043-4c84-90b5-2c3415fe6a68.', u'type': u'IpAddressGenerationFailure', u'detail': u''}}
2014-02-05 14:37:29.347 19045 TRACE nova.network.neutronv2.api [instance: 4bc922f5-079a-442c-ae69-1099d2bc6103]
2014-02-05 14:37:29.350 19045 ERROR nova.compute.manager [-] Instance failed network setup after 1 attempt(s)
2014-02-05 14:37:29.350 19045 TRACE nova.compute.manager Traceback (most recent call last):
2014-02-05 14:37:29.350 19045 TRACE nova.compute.manager   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1238, in _allocate_network_async
2014-02-05 14:37:29.350 19045 TRACE nova.compute.manager     dhcp_options=dhcp_options)
2014-02-05 14:37:29.350 19045 TRACE nova.compute.manager   File "/usr/lib/python2.6/site-packages/nova/network/neutronv2/api.py", line 357, in allocate_for_instance
2014-02-05 14:37:29.350 19045 TRACE nova.compute.manager     LOG.exception(msg, port_id)
2014-02-05 14:37:29.350 19045 TRACE nova.compute.manager   File "/usr/lib/python2.6/site-packages/nova/network/neutronv2/api.py", line 334, in allocate_for_instance
2014-02-05 14:37:29.350 19045 TRACE nova.compute.manager     security_group_ids, available_macs, dhcp_opts))
2014-02-05 14:37:29.350 19045 TRACE nova.compute.manager   File "/usr/lib/python2.6/site-packages/nova/network/neutronv2/api.py", line 191, in _create_port
2014-02-05 14:37:29.350 19045 TRACE nova.compute.manager     raise exception.PortLimitExceeded()
2014-02-05 14:37:29.350 19045 TRACE nova.compute.manager PortLimitExceeded: Maximum number of ports exceeded.



In total there are five instances running (two others are in ERROR state):

# nova list
+--------------------------------------+---------------+--------+------------+-------------+---------------------+
| ID                                   | Name          | Status | Task State | Power State | Networks            |
+--------------------------------------+---------------+--------+------------+-------------+---------------------+
| 45cff02b-0069-4c01-9dff-80e0f76414c9 | instance1     | ACTIVE | None       | Running     | public=172.24.4.230 |
| b62faf56-94f1-4145-ac3d-fd9541d0d462 | instance2     | ERROR  | None       | NOSTATE     |                     |
| 86466aee-6098-4cc3-944f-a1bea2154aed | instance3     | ACTIVE | None       | Running     | public=172.24.4.232 |
| a47a8a53-6481-4efd-be9f-cc85ff56d110 | instance4     | ERROR  | None       | Shutdown    | public=172.24.4.235 |
| a0beaa77-8e42-4cf4-bfcc-9b2087aa6031 | instance5     | ACTIVE | None       | Running     | public=172.24.4.236 |
| 612e1275-fb8b-49e5-b224-7f47fabed5e0 | snap-instance | ACTIVE | None       | Running     | public=172.24.4.234 |
| 6e1e36c6-09c9-4c6e-8acd-6fbe2d495d75 | vol4snapvol   | ACTIVE | None       | Running     | public=172.24.4.233 |
+--------------------------------------+---------------+--------+------------+-------------+---------------------+
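
Note that the root error in the 409 body above is IpAddressGenerationFailure, i.e. the subnet on network 2290d22d-c043-4c84-90b5-2c3415fe6a68 has no free addresses. A sketch of how to inspect this with the neutron CLI of this release (the subnet ID argument is a placeholder taken from the net-show output):

# neutron net-show 2290d22d-c043-4c84-90b5-2c3415fe6a68
# neutron subnet-show <subnet-id>
# neutron port-list

The subnet's allocation_pools field shows the usable address range, and each port in the list consumes one address per subnet it is attached to.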

Comment 2 Xavier Queralt 2014-02-12 11:26:38 UTC
Could you provide the neutron logs? In this case nova is just asking neutron to create a new port, which fails because the quota has been exceeded for your tenant/project, thus raising PortLimitExceeded.

I'm not a neutron expert, but I imagine the neutron logs will show the exact error, which might be fixed by increasing the quota.
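
For reference, the per-tenant port quota can be checked and raised with the neutron client (the tenant ID and new limit below are placeholders):

# neutron quota-show --tenant-id <tenant-id>
# neutron quota-update --tenant-id <tenant-id> --port 100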

Comment 3 shilpa 2014-02-12 18:16:47 UTC
In the neutron server logs, I only see a warning and nothing else:

WARNING neutron.db.agentschedulers_db [-] Fail scheduling network {'status': u'ACTIVE', 'subnets': [u'7ea88cb2-8432-429b-b0a8-83c82de51359'], 'name': u'public', 'provider:physical_network': None, 'admin_state_up': True, 'tenant_id': u'ae458c0ee9864efca47906f9f3b7278d', 'provider:network_type': u'local', 'router:external': True, 'shared': False, 'id': u'cbd64b71-7a20-41bb-967a-db2409bffddd', 'provider:segmentation_id': None}

Attaching both the nova and neutron server logs.

As for the port limit, I deleted all the instances to make sure I don't have any ports in use, but nova still fails to create instances.
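
Note that ports not bound to instances (for example DHCP agent ports, router interfaces, or floating IPs) also consume subnet addresses and count against the quota. A sketch of how to check for leftovers after deleting the instances:

# neutron port-list

Any stale ports on the affected network can then be removed with:

# neutron port-delete <port-id>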

Comment 4 shilpa 2014-02-12 18:18:16 UTC
Created attachment 862484 [details]
Compute logs

Comment 5 shilpa 2014-02-12 18:19:47 UTC
Created attachment 862486 [details]
Neutron logs

Comment 6 Xavier Queralt 2014-02-13 08:35:26 UTC
Thanks for the extra logs. I'm moving this bug to neutron because, from what I can see, nova is doing the right thing: neutron reports that it couldn't create the port (either there was an error during creation or the quota was exceeded), so nova can't start the instance and fails as it should.

I'd suggest looking into the openvswitch-agent logs, where I think we might find the error/warning we're looking for.
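
A quick way to scan the agent log for problems (assuming the default RHOS log location):

# grep -E 'ERROR|WARN' /var/log/neutron/openvswitch-agent.log | tail -n 50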

Comment 7 Nir Yechiel 2014-05-28 11:48:48 UTC
shilpa, our QE team has been running similar tests and has not encountered this issue so far. I am closing this bug; please reopen it if it's still relevant, with more details on your setup and configuration and exact steps to reproduce.

Comment 8 Red Hat Bugzilla 2023-09-14 02:03:14 UTC
The needinfo request[s] on this closed bug have been removed, as they have been unresolved for 1000 days.

