Bug 1017058 - openstack nova: nova-api and nova-metadata-api services are using the same port
Status: CLOSED NOTABUG
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-nova
Version: 4.0
Hardware: x86_64 Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 4.0
Assigned To: Xavier Queralt
QA Contact: Ami Jeain
Reported: 2013-10-09 04:25 EDT by Yogev Rabl
Modified: 2015-08-17 08:09 EDT
CC List: 7 users

Doc Type: Bug Fix
Last Closed: 2013-10-29 04:10:51 EDT
Type: Bug
Flags: yrabl: internal-review?


External Trackers:
Launchpad 1237334 (Priority: None, Status: None, Summary: None, Last Updated: Never)

Description Yogev Rabl 2013-10-09 04:25:23 EDT
Description of problem:
The nova-api and nova-metadata-api services are both configured to use the same port, 8775. As a result, the two services compete for the port and whichever starts second fails to bind and stays down.
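
The failure is the standard EADDRINUSE conflict: whichever service binds 0.0.0.0:8775 first wins, and the second bind attempt dies with [Errno 98]. A minimal Python sketch (not Nova code, only an illustration of the same failure mode seen in the tracebacks below):

import socket

# The first listener takes 0.0.0.0:8775, as whichever service starts first does.
first = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
first.bind(("0.0.0.0", 8775))
first.listen(5)

# A second socket trying to bind the same address fails exactly like the logs:
# [Errno 98] Address already in use.
second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    second.bind(("0.0.0.0", 8775))
except socket.error as exc:  # OSError on Python 3, socket.error on the 2.6 in the logs
    print(exc)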

Version-Release number of selected component (if applicable):

Red Hat Enterprise Linux Server release 6.5 Beta (Santiago)

python-novaclient-2.15.0-1.el6ost.noarch
python-nova-2013.2-0.24.rc1.el6ost.noarch
openstack-nova-network-2013.2-0.24.rc1.el6ost.noarch
openstack-nova-common-2013.2-0.24.rc1.el6ost.noarch
openstack-nova-console-2013.2-0.24.rc1.el6ost.noarch
openstack-nova-compute-2013.2-0.24.rc1.el6ost.noarch
openstack-nova-conductor-2013.2-0.24.rc1.el6ost.noarch
openstack-nova-novncproxy-2013.2-0.24.rc1.el6ost.noarch
openstack-nova-scheduler-2013.2-0.24.rc1.el6ost.noarch
openstack-nova-api-2013.2-0.24.rc1.el6ost.noarch
openstack-nova-cert-2013.2-0.24.rc1.el6ost.noarch

How reproducible:
Every time.

Steps to Reproduce:
1. Install Havana on RHEL 6.5 as an all-in-one (AIO) installation.

Actual results:
Either the openstack-nova-api or the openstack-nova-metadata-api service is down.

Expected results:
Both services are up and running.

Additional info:
The error from /var/log/nova/metadata-api.log:

2013-10-09 10:54:21.975 4776 INFO nova.network.driver [-] Loading network driver 'nova.network.linux_net'
2013-10-09 10:54:22.036 4776 DEBUG nova.wsgi [-] Loading app metadata from /etc/nova/api-paste.ini load_app /usr/lib/python2.6/site-packages/nova/wsgi.py:484
2013-10-09 10:54:22.076 4776 INFO nova.openstack.common.periodic_task [-] Skipping periodic task _periodic_update_dns because its interval is negative
2013-10-09 10:54:22.079 4776 CRITICAL nova [-] [Errno 98] Address already in use
2013-10-09 10:54:22.079 4776 TRACE nova Traceback (most recent call last):
2013-10-09 10:54:22.079 4776 TRACE nova   File "/usr/bin/nova-api-metadata", line 10, in <module>
2013-10-09 10:54:22.079 4776 TRACE nova     sys.exit(main())
2013-10-09 10:54:22.079 4776 TRACE nova   File "/usr/lib/python2.6/site-packages/nova/cmd/api_metadata.py", line 33, in main
2013-10-09 10:54:22.079 4776 TRACE nova     server = service.WSGIService('metadata')
2013-10-09 10:54:22.079 4776 TRACE nova   File "/usr/lib/python2.6/site-packages/nova/service.py", line 318, in __init__
2013-10-09 10:54:22.079 4776 TRACE nova     max_url_len=max_url_len)
2013-10-09 10:54:22.079 4776 TRACE nova   File "/usr/lib/python2.6/site-packages/nova/wsgi.py", line 123, in __init__
2013-10-09 10:54:22.079 4776 TRACE nova     self._socket = eventlet.listen(bind_addr, family, backlog=backlog)
2013-10-09 10:54:22.079 4776 TRACE nova   File "/usr/lib/python2.6/site-packages/eventlet/convenience.py", line 38, in listen
2013-10-09 10:54:22.079 4776 TRACE nova     sock.bind(addr)
2013-10-09 10:54:22.079 4776 TRACE nova   File "<string>", line 1, in bind
2013-10-09 10:54:22.079 4776 TRACE nova error: [Errno 98] Address already in use
2013-10-09 10:54:22.079 4776 TRACE nova

The error from the nova-api log: 
2013-10-09 11:17:04.520 6048 CRITICAL nova [-] [Errno 98] Address already in use
2013-10-09 11:17:04.520 6048 TRACE nova Traceback (most recent call last):
2013-10-09 11:17:04.520 6048 TRACE nova   File "/usr/bin/nova-api", line 10, in <module>
2013-10-09 11:17:04.520 6048 TRACE nova     sys.exit(main())
2013-10-09 11:17:04.520 6048 TRACE nova   File "/usr/lib/python2.6/site-packages/nova/cmd/api.py", line 51, in main
2013-10-09 11:17:04.520 6048 TRACE nova     server = service.WSGIService(api, use_ssl=should_use_ssl)
2013-10-09 11:17:04.520 6048 TRACE nova   File "/usr/lib/python2.6/site-packages/nova/service.py", line 318, in __init__
2013-10-09 11:17:04.520 6048 TRACE nova     max_url_len=max_url_len)
2013-10-09 11:17:04.520 6048 TRACE nova   File "/usr/lib/python2.6/site-packages/nova/wsgi.py", line 124, in __init__
2013-10-09 11:17:04.520 6048 TRACE nova     self._socket = eventlet.listen(bind_addr, family, backlog=backlog)
2013-10-09 11:17:04.520 6048 TRACE nova   File "/usr/lib/python2.6/site-packages/eventlet/convenience.py", line 38, in listen
2013-10-09 11:17:04.520 6048 TRACE nova     sock.bind(addr)
2013-10-09 11:17:04.520 6048 TRACE nova   File "<string>", line 1, in bind
2013-10-09 11:17:04.520 6048 TRACE nova error: [Errno 98] Address already in use
2013-10-09 11:17:04.520 6048 TRACE nova
Comment 2 Yogev Rabl 2013-10-09 06:42:51 EDT
netstat: 

tcp        0      0 0.0.0.0:8775                0.0.0.0:*                   LISTEN      32327/python
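
netstat reports the owning process only as "python", so this output alone does not show whether it is nova-api or nova-api-metadata holding the port. A rough way to resolve the PID to a full command line (plain Python, not a Nova tool; assumes a Linux /proc filesystem, with the PID taken from the netstat line above):

# Arguments in /proc/<pid>/cmdline are NUL-separated.
def cmdline(pid):
    with open("/proc/%d/cmdline" % pid) as f:
        return f.read().replace("\0", " ").strip()

print(cmdline(32327))  # e.g. "/usr/bin/python /usr/bin/nova-api" or ".../nova-api-metadata"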
Comment 3 Xavier Queralt 2013-10-11 03:42:14 EDT
As stated in the upstream bug, the metadata API is already spawned by nova-api when 'metadata' is listed in the enabled_apis option in nova.conf (which is the default when installing with Packstack), and that is why it cannot be started as a separate process while nova-api is already running.

The question is whether Packstack started nova-api-metadata. If so, it is a problem with Packstack that should be fixed. Having said that, I could not reproduce this with the latest puddle (i.e. nova-api-metadata was not started after the deployment).
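
(For reference, a rough way to check which case applies on an affected host is to read enabled_apis from nova.conf. The sketch below is not part of Nova; it assumes the usual /etc/nova/nova.conf location and falls back to ec2,osapi_compute,metadata, which is assumed here to be the Havana default when the option is unset.)

try:
    from configparser import RawConfigParser  # Python 3
except ImportError:
    from ConfigParser import RawConfigParser  # Python 2, as on RHEL 6.5

conf = RawConfigParser()
conf.read("/etc/nova/nova.conf")
if conf.has_option("DEFAULT", "enabled_apis"):
    apis = conf.get("DEFAULT", "enabled_apis")
else:
    apis = "ec2,osapi_compute,metadata"  # assumed default when unset
# If 'metadata' is listed, nova-api serves the metadata API itself, and starting
# nova-api-metadata as a separate service will hit the port conflict above.
print("metadata served by nova-api: %s" % ("metadata" in [a.strip() for a in apis.split(",")]))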
Comment 4 Xavier Queralt 2013-10-29 04:10:51 EDT
Closing, as this does not look like a bug. Feel free to reopen if you think there is something we should solve.
