Bug 1017058 - openstack nova: nova-api and nova-metadata-api services are using the same port
Summary: openstack nova: nova-api and nova-metadata-api services are using the same port
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-nova
Version: 4.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 4.0
Assignee: Xavier Queralt
QA Contact: Ami Jeain
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-10-09 08:25 UTC by Yogev Rabl
Modified: 2019-09-09 16:59 UTC
CC List: 6 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-10-29 08:10:51 UTC
Target Upstream Version:
Embargoed:
yrabl: internal-review?




Links
System: Launchpad  ID: 1237334  Private: No  Priority: None  Status: None  Summary: None  Last Updated: Never

Description Yogev Rabl 2013-10-09 08:25:23 UTC
Description of problem:
The nova-api and nova-metadata-api services are both configured to use the same port, 8775.
As a result, the two services compete for the port and whichever starts second fails to bind it.
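
For context, a hedged sketch of where these port numbers come from. The option names below are the stock upstream Nova defaults and are an assumption about this deployment, not taken from its nova.conf:

# The compute API defaults to osapi_compute_listen_port (8774) and the metadata
# API to metadata_listen_port (8775); if the options are absent from nova.conf
# the defaults apply, so two metadata listeners will both try to bind 8775.
grep -E 'enabled_apis|osapi_compute_listen_port|metadata_listen_port' /etc/nova/nova.conf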

Version-Release number of selected component (if applicable):

Red Hat Enterprise Linux Server release 6.5 Beta (Santiago)

python-novaclient-2.15.0-1.el6ost.noarch
python-nova-2013.2-0.24.rc1.el6ost.noarch
openstack-nova-network-2013.2-0.24.rc1.el6ost.noarch
openstack-nova-common-2013.2-0.24.rc1.el6ost.noarch
openstack-nova-console-2013.2-0.24.rc1.el6ost.noarch
openstack-nova-compute-2013.2-0.24.rc1.el6ost.noarch
openstack-nova-conductor-2013.2-0.24.rc1.el6ost.noarch
openstack-nova-novncproxy-2013.2-0.24.rc1.el6ost.noarch
openstack-nova-scheduler-2013.2-0.24.rc1.el6ost.noarch
openstack-nova-api-2013.2-0.24.rc1.el6ost.noarch
openstack-nova-cert-2013.2-0.24.rc1.el6ost.noarch

How reproducible:
Every time.

Steps to Reproduce:
1. Install Havana on RHEL 6.5 as an all-in-one (AIO) deployment.

Actual results:
Either the openstack-nova-api or the openstack-nova-metadata-api service is down.

Expected results:
Both services are up and running.

Additional info:
The error from /var/log/nova/metadata-api.log:

2013-10-09 10:54:21.975 4776 INFO nova.network.driver [-] Loading network driver 'nova.network.linux_net'
2013-10-09 10:54:22.036 4776 DEBUG nova.wsgi [-] Loading app metadata from /etc/nova/api-paste.ini load_app /usr/lib/python2.6/site-packages/nova/wsgi.py:484
2013-10-09 10:54:22.076 4776 INFO nova.openstack.common.periodic_task [-] Skipping periodic task _periodic_update_dns because its interval is negative
2013-10-09 10:54:22.079 4776 CRITICAL nova [-] [Errno 98] Address already in use
2013-10-09 10:54:22.079 4776 TRACE nova Traceback (most recent call last):
2013-10-09 10:54:22.079 4776 TRACE nova   File "/usr/bin/nova-api-metadata", line 10, in <module>
2013-10-09 10:54:22.079 4776 TRACE nova     sys.exit(main())
2013-10-09 10:54:22.079 4776 TRACE nova   File "/usr/lib/python2.6/site-packages/nova/cmd/api_metadata.py", line 33, in main
2013-10-09 10:54:22.079 4776 TRACE nova     server = service.WSGIService('metadata')
2013-10-09 10:54:22.079 4776 TRACE nova   File "/usr/lib/python2.6/site-packages/nova/service.py", line 318, in __init__
2013-10-09 10:54:22.079 4776 TRACE nova     max_url_len=max_url_len)
2013-10-09 10:54:22.079 4776 TRACE nova   File "/usr/lib/python2.6/site-packages/nova/wsgi.py", line 123, in __init__
2013-10-09 10:54:22.079 4776 TRACE nova     self._socket = eventlet.listen(bind_addr, family, backlog=backlog)
2013-10-09 10:54:22.079 4776 TRACE nova   File "/usr/lib/python2.6/site-packages/eventlet/convenience.py", line 38, in listen
2013-10-09 10:54:22.079 4776 TRACE nova     sock.bind(addr)
2013-10-09 10:54:22.079 4776 TRACE nova   File "<string>", line 1, in bind
2013-10-09 10:54:22.079 4776 TRACE nova error: [Errno 98] Address already in use
2013-10-09 10:54:22.079 4776 TRACE nova

The error from the nova-api log: 
2013-10-09 11:17:04.520 6048 CRITICAL nova [-] [Errno 98] Address already in use
2013-10-09 11:17:04.520 6048 TRACE nova Traceback (most recent call last):
2013-10-09 11:17:04.520 6048 TRACE nova   File "/usr/bin/nova-api", line 10, in <module>
2013-10-09 11:17:04.520 6048 TRACE nova     sys.exit(main())
2013-10-09 11:17:04.520 6048 TRACE nova   File "/usr/lib/python2.6/site-packages/nova/cmd/api.py", line 51, in main
2013-10-09 11:17:04.520 6048 TRACE nova     server = service.WSGIService(api, use_ssl=should_use_ssl)
2013-10-09 11:17:04.520 6048 TRACE nova   File "/usr/lib/python2.6/site-packages/nova/service.py", line 318, in __init__
2013-10-09 11:17:04.520 6048 TRACE nova     max_url_len=max_url_len)
2013-10-09 11:17:04.520 6048 TRACE nova   File "/usr/lib/python2.6/site-packages/nova/wsgi.py", line 124, in __init__
2013-10-09 11:17:04.520 6048 TRACE nova     self._socket = eventlet.listen(bind_addr, family, backlog=backlog)
2013-10-09 11:17:04.520 6048 TRACE nova   File "/usr/lib/python2.6/site-packages/eventlet/convenience.py", line 38, in listen
2013-10-09 11:17:04.520 6048 TRACE nova     sock.bind(addr)
2013-10-09 11:17:04.520 6048 TRACE nova   File "<string>", line 1, in bind
2013-10-09 11:17:04.520 6048 TRACE nova error: [Errno 98] Address already in use
2013-10-09 11:17:04.520 6048 TRACE nova

Comment 2 Yogev Rabl 2013-10-09 10:42:51 UTC
netstat: 

tcp        0      0 0.0.0.0:8775                0.0.0.0:*                   LISTEN      32327/python
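
One way to confirm which binary owns the port (a hedged sketch; pid 32327 is taken from the netstat line above, substitute your own):

lsof -i :8775                              # which process is bound to the metadata port
ps -fp 32327                               # shows whether it is nova-api or nova-api-metadata
tr '\0' ' ' < /proc/32327/cmdline; echo    # full command line of the listener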

Comment 3 Xavier Queralt 2013-10-11 07:42:14 UTC
As stated in the upstream bug, the metadata API is already spawned by nova-api when it is listed in nova.conf's enabled_apis option (which is the default when installing with Packstack); that is why it cannot be started as a separate process while nova-api is already running.

The question is: did Packstack start nova-api-metadata? If so, that is a problem in Packstack that should be fixed. That said, I could not reproduce this with the latest puddle (i.e. nova-api-metadata was not started after the deploy).
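
For illustration, a minimal sketch of the check described above, assuming the Packstack default value of enabled_apis and the service name used in this report; verify both on the affected host before acting on them:

grep ^enabled_apis /etc/nova/nova.conf       # expected default: enabled_apis=ec2,osapi_compute,metadata
chkconfig --list | grep nova                 # is openstack-nova-metadata-api configured to start at boot?
chkconfig openstack-nova-metadata-api off    # when 'metadata' is in enabled_apis, nova-api already binds 8775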

Comment 4 Xavier Queralt 2013-10-29 08:10:51 UTC
Closing, as this does not look like a bug. Feel free to reopen if you think there is something we should solve.

