Bug 1244013 - compute nodes configured to talk to cinder api over publicurl
Summary: compute nodes configured to talk to cinder api over publicurl
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-tripleo-heat-templates
Version: Director
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ga
Target Release: Director
Assignee: Giulio Fidente
QA Contact: nlevinki
URL:
Whiteboard:
Depends On:
Blocks: 1244019
 
Reported: 2015-07-16 21:28 UTC by James Slagle
Modified: 2015-08-27 05:45 UTC
CC: 8 users

Fixed In Version: openstack-tripleo-heat-templates-0.8.6-44.el7ost
Doc Type: Bug Fix
Doc Text:
Compute nodes queried Keystone for the Cinder publicurl endpoint, regardless of whether they had connectivity. This meant dedicated Compute nodes failed to interact with Cinder API. This fix changes the publicurl endpoint to the internalurl endpoint, which Compute nodes can access.
Clone Of:
Environment:
Last Closed: 2015-08-05 13:59:46 UTC
Target Upstream Version:




Links
System ID Priority Status Summary Last Updated
OpenStack gerrit 202804 None None None Never
Red Hat Product Errata RHEA-2015:1549 normal SHIPPED_LIVE Red Hat Enterprise Linux OpenStack Platform director Release 2015-08-05 17:49:10 UTC

Description James Slagle 2015-07-16 21:28:19 UTC
Compute nodes are configured to talk to the cinder api over the publicurl:

[root@overcloud-compute-0 nova]# pwd
/etc/nova
[root@overcloud-compute-0 nova]# grep -rin catalog_info=volu
nova.conf:2193:#catalog_info=volumev2:cinderv2:publicURL

I'm using network isolation (single nic with vlans). I also have a native vlan on my single nic (in case that makes any difference).

My compute nodes can't ping the public virtual ip because they're not connected to the external network. So common nova -> cinder operations, such as attaching a volume to an instance, are broken since the compute node can't reach the cinder api.

Shouldn't this be configured to use the internalurl instead?
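For reference, the change amounts to a one-line nova.conf setting on the compute nodes (matching the format shown in comment 15 below; on later nova releases the option may live under a [cinder] section rather than at the position shown by the grep above):

```ini
# /etc/nova/nova.conf on a compute node: have nova-compute look up the
# Cinder endpoint via internalURL instead of the default publicURL
catalog_info=volumev2:cinderv2:internalURL
```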

Comment 3 James Slagle 2015-07-16 21:30:32 UTC
Here's some additional info.

First, the ports and networks defined in my undercloud neutron:

[stack@host06-rack02-v35 ~]$ source stackrc
[stack@host06-rack02-v35 ~]$ neutron port-list
+--------------------------------------+-------------------------------+-------------------+------------------------------------------------------------------------------------+
| id                                   | name                          | mac_address       | fixed_ips                                                                          |
+--------------------------------------+-------------------------------+-------------------+------------------------------------------------------------------------------------+
| 09f41210-8303-4cfb-b8ba-4a6d0cdf89b5 |                               | b8:ca:3a:63:86:9a | {"subnet_id": "b68ce02a-97c9-4706-98e6-8b14b8c19c68", "ip_address": "10.1.243.13"} |
| 103e44a8-4c0f-47e8-a81e-fe8a83753cbc | internal_api_virtual_ip       | fa:16:3e:c0:a9:f6 | {"subnet_id": "88d84270-f077-4803-9a7c-ae9faed5d821", "ip_address": "172.17.0.10"} |
| 1174b1c7-9d62-4218-966d-a502a7c901e2 | control_virtual_ip            | fa:16:3e:02:d7:83 | {"subnet_id": "b68ce02a-97c9-4706-98e6-8b14b8c19c68", "ip_address": "10.1.243.12"} |
| 12388c01-230f-428e-af5f-b2b0e2e3ddd9 |                               | fa:16:3e:fb:3e:58 | {"subnet_id": "e4423966-1ea0-4ce4-b361-f187032b5411", "ip_address": "10.1.244.13"} |
| 12b1794e-bf02-40ef-b192-2c632bef3e84 |                               | fa:16:3e:20:16:45 | {"subnet_id": "88d84270-f077-4803-9a7c-ae9faed5d821", "ip_address": "172.17.0.15"} |
| 187e6ec2-dd97-4f02-aeb7-d9917940e88b |                               | fa:16:3e:94:bd:ed | {"subnet_id": "b68ce02a-97c9-4706-98e6-8b14b8c19c68", "ip_address": "10.1.243.5"}  |
| 2a0766a2-2698-4d73-a170-96bbe94de65f |                               | fa:16:3e:34:6a:50 | {"subnet_id": "056aea44-7b03-4ecd-abaf-9554260b5cf5", "ip_address": "172.18.0.11"} |
| 30168544-7a23-448a-a190-641537aff3a9 |                               | fa:16:3e:f3:2c:1a | {"subnet_id": "88d84270-f077-4803-9a7c-ae9faed5d821", "ip_address": "172.17.0.12"} |
| 329b98ff-3724-4b88-bf73-42988dcf7d6d |                               | fa:16:3e:81:07:ec | {"subnet_id": "88d84270-f077-4803-9a7c-ae9faed5d821", "ip_address": "172.17.0.13"} |
| 35cf2a7a-6195-4c4f-a45d-036698cf9c75 |                               | fa:16:3e:db:32:25 | {"subnet_id": "056aea44-7b03-4ecd-abaf-9554260b5cf5", "ip_address": "172.18.0.12"} |
| 3b45c4c9-3124-450b-9d04-190c2122a99a |                               | fa:16:3e:6d:ac:04 | {"subnet_id": "93f6ab60-4757-475f-a2da-95225507eb67", "ip_address": "172.16.0.13"} |
| 423e7a95-e0d0-4a83-bc2b-9b1b016d6b17 | storage_virtual_ip            | fa:16:3e:d8:68:fd | {"subnet_id": "056aea44-7b03-4ecd-abaf-9554260b5cf5", "ip_address": "172.18.0.10"} |
| 4aac9ab1-d0b4-4ff8-97a1-410a6053e274 |                               | fa:16:3e:81:12:27 | {"subnet_id": "056aea44-7b03-4ecd-abaf-9554260b5cf5", "ip_address": "172.18.0.13"} |
| 58cf4e89-9a37-47cd-b30a-0e9b00409d7f |                               | b8:ca:3a:61:41:6a | {"subnet_id": "b68ce02a-97c9-4706-98e6-8b14b8c19c68", "ip_address": "10.1.243.15"} |
| 5a66cd5c-e296-4393-9e35-215c8df90b53 |                               | b8:ca:3a:61:42:a2 | {"subnet_id": "b68ce02a-97c9-4706-98e6-8b14b8c19c68", "ip_address": "10.1.243.14"} |
| 5c9fb4c0-5d53-47bf-bf57-801a091da3b0 |                               | fa:16:3e:46:2f:b5 | {"subnet_id": "2206ad4a-0cc6-4576-b7c4-607c383a7075", "ip_address": "172.19.0.12"} |
| 6165f000-e708-4250-8fd4-9b61937dee0f |                               | fa:16:3e:2c:c0:52 | {"subnet_id": "056aea44-7b03-4ecd-abaf-9554260b5cf5", "ip_address": "172.18.0.14"} |
| 68440ea8-e78b-4496-b02e-a3489f14bd56 |                               | fa:16:3e:a4:21:c0 | {"subnet_id": "93f6ab60-4757-475f-a2da-95225507eb67", "ip_address": "172.16.0.11"} |
| 80f8ad25-de21-4a5e-9929-5b094ad70029 |                               | fa:16:3e:d0:a7:35 | {"subnet_id": "2206ad4a-0cc6-4576-b7c4-607c383a7075", "ip_address": "172.19.0.13"} |
| 83f11db1-b852-4a89-ac72-3b10e1e66975 |                               | fa:16:3e:e7:c6:dc | {"subnet_id": "93f6ab60-4757-475f-a2da-95225507eb67", "ip_address": "172.16.0.10"} |
| 8a709948-61aa-44dd-8587-3dc291f91387 | redis_virtual_ip              | fa:16:3e:f6:af:2d | {"subnet_id": "88d84270-f077-4803-9a7c-ae9faed5d821", "ip_address": "172.17.0.11"} |
| 8eff7759-5df4-4591-81a4-ebaad0831412 |                               | fa:16:3e:8c:78:98 | {"subnet_id": "2206ad4a-0cc6-4576-b7c4-607c383a7075", "ip_address": "172.19.0.11"} |
| 9d0b3863-5652-490d-b0f9-633649a51773 |                               | fa:16:3e:83:fd:c8 | {"subnet_id": "e4423966-1ea0-4ce4-b361-f187032b5411", "ip_address": "10.1.244.11"} |
| ac9c5e6f-f673-4b2c-a01c-10408b4f3777 | public_virtual_ip             | fa:16:3e:bb:29:5e | {"subnet_id": "e4423966-1ea0-4ce4-b361-f187032b5411", "ip_address": "10.1.244.10"} |
| d2146762-14f7-4860-8fe6-e4d1c451a83d |                               | fa:16:3e:f0:92:c7 | {"subnet_id": "88d84270-f077-4803-9a7c-ae9faed5d821", "ip_address": "172.17.0.14"} |
| e30c4b7f-e520-4033-a5d3-ad7469cce68d | storage_management_virtual_ip | fa:16:3e:dc:70:26 | {"subnet_id": "2206ad4a-0cc6-4576-b7c4-607c383a7075", "ip_address": "172.19.0.10"} |
| e46848f5-b567-459f-b977-951f406bc589 |                               | b8:ca:3a:63:86:7a | {"subnet_id": "b68ce02a-97c9-4706-98e6-8b14b8c19c68", "ip_address": "10.1.243.16"} |
| f7bd0a60-ac70-4243-aed3-276970da2909 |                               | fa:16:3e:c4:86:c6 | {"subnet_id": "93f6ab60-4757-475f-a2da-95225507eb67", "ip_address": "172.16.0.12"} |
| feb43d8f-93b6-4d9e-97a2-cf39df036f32 |                               | fa:16:3e:37:43:53 | {"subnet_id": "e4423966-1ea0-4ce4-b361-f187032b5411", "ip_address": "10.1.244.12"} |
+--------------------------------------+-------------------------------+-------------------+------------------------------------------------------------------------------------+
[stack@host06-rack02-v35 ~]$ neutron net-list
+--------------------------------------+--------------+----------------------------------------------------+
| id                                   | name         | subnets                                            |
+--------------------------------------+--------------+----------------------------------------------------+
| d174b8e8-f664-4157-bd66-1daffe57ba6b | external     | e4423966-1ea0-4ce4-b361-f187032b5411 10.1.244.0/24 |
| aa5d493b-6afb-4378-aa61-db6e81a44596 | storage      | 056aea44-7b03-4ecd-abaf-9554260b5cf5 172.18.0.0/24 |
| 29666610-9c4e-4a0c-b14b-ed6cbda41e93 | internal_api | 88d84270-f077-4803-9a7c-ae9faed5d821 172.17.0.0/24 |
| fbc61e6b-1880-4f33-99cc-29b5a1c12224 | tenant       | 93f6ab60-4757-475f-a2da-95225507eb67 172.16.0.0/24 |
| 8b82ba28-b49b-465e-a20d-671abefd3dfc | storage_mgmt | 2206ad4a-0cc6-4576-b7c4-607c383a7075 172.19.0.0/24 |
| e8e016af-d8f4-4b92-81d7-aa50a92e30fa | ctlplane     | b68ce02a-97c9-4706-98e6-8b14b8c19c68 10.1.243.0/24 |
+--------------------------------------+--------------+----------------------------------------------------+

Comment 4 James Slagle 2015-07-16 21:31:18 UTC
keystone endpoints for the overcloud:

[stack@host06-rack02-v35 ~]$ source overcloudrc 
[stack@host06-rack02-v35 ~]$ keystone service-list
/usr/lib/python2.7/site-packages/keystoneclient/shell.py:65: DeprecationWarning: The keystone CLI is deprecated in favor of python-openstackclient. For a Python library, continue using python-keystoneclient.
  'python-keystoneclient.', DeprecationWarning)
+----------------------------------+------------+---------------+------------------------------+
|                id                |    name    |      type     |         description          |
+----------------------------------+------------+---------------+------------------------------+
| ebbd842509164f28a66b892a703744bd | ceilometer |    metering   |      Ceilometer Service      |
| c70de70764f447148d40d1b9000536ec |   cinder   |     volume    |    Cinder Volume Service     |
| 44cb57221b014a6dbb94ce4058dca81e |  cinderv2  |    volumev2   |   Cinder Volume Service v2   |
| a3e65e3e67484879b4adc46316d3fed1 |    ec2     |      ec2      |   EC2 Compatibility Layer    |
| 5c5bb3ac7ad6469ba93d910510d40418 |   glance   |     image     |     Glance Image Service     |
| 86bc7806ca164f6ea2cd08670aff5ab7 |    heat    | orchestration |         Heat Service         |
| 84bf92ea1c7545af994784f7df81e6ec |  horizon   |   dashboard   |     OpenStack Dashboard      |
| da9ca3c2cb1e44589234a19ca67e7a1f |  keystone  |    identity   |  Keystone Identity Service   |
| c9136cad4ab34aa3aa8a98de41620b8c |  neutron   |    network    |       Neutron Service        |
| 169eaf644ac34ca296c4b0746ef3e4d3 |    nova    |    compute    |     Nova Compute Service     |
| 70ee067c7bfb4d7ea3b875b371a1f588 |    nova    |   computev3   |   Nova Compute Service v3    |
| 289cc22390224246a328b0750f63cd9c |   swift    |  object-store | Swift Object Storage Service |
+----------------------------------+------------+---------------+------------------------------+
[stack@host06-rack02-v35 ~]$ keystone endpoint-list
/usr/lib/python2.7/site-packages/keystoneclient/shell.py:65: DeprecationWarning: The keystone CLI is deprecated in favor of python-openstackclient. For a Python library, continue using python-keystoneclient.
  'python-keystoneclient.', DeprecationWarning)
+----------------------------------+-----------+-----------------------------------------------+-----------------------------------------------+------------------------------------------+----------------------------------+
|                id                |   region  |                   publicurl                   |                  internalurl                  |                 adminurl                 |            service_id            |
+----------------------------------+-----------+-----------------------------------------------+-----------------------------------------------+------------------------------------------+----------------------------------+
| 04d58f4b805a4047bd3895d80bbe16da | regionOne |            http://10.1.244.10:9696/           |            http://172.17.0.10:9696/           |         http://172.17.0.10:9696/         | c9136cad4ab34aa3aa8a98de41620b8c |
| 1ed2dee4001546f3b5d36fc64069a35f | regionOne | http://10.1.244.10:8080/v1/AUTH_%(tenant_id)s | http://172.18.0.10:8080/v1/AUTH_%(tenant_id)s |        http://172.18.0.10:8080/v1        | 289cc22390224246a328b0750f63cd9c |
| 33b1a4008d5c4d9a9aaf82dc44aa4a23 | regionOne |    http://10.1.244.10:8004/v1/%(tenant_id)s   |    http://172.17.0.10:8004/v1/%(tenant_id)s   | http://172.17.0.10:8004/v1/%(tenant_id)s | 86bc7806ca164f6ea2cd08670aff5ab7 |
| 37562896787541b3a3accafe36e40666 | regionOne |            http://10.1.244.10:8777/           |            http://172.17.0.10:8777/           |         http://172.17.0.10:8777/         | ebbd842509164f28a66b892a703744bd |
| 4ca5724254bb42f9b2daa1fe32541f79 | regionOne |    http://10.1.244.10:8776/v2/%(tenant_id)s   |    http://172.17.0.10:8776/v2/%(tenant_id)s   | http://172.17.0.10:8776/v2/%(tenant_id)s | 44cb57221b014a6dbb94ce4058dca81e |
| 4fd64ce6c57a40c8a54cf442c45cc917 | regionOne |        http://10.1.244.10:80/dashboard/       |        http://10.1.244.10:80/dashboard/       |  http://10.1.244.10:80/dashboard/admin   | 84bf92ea1c7545af994784f7df81e6ec |
| 6146966b9c9c472a863a3dd8b50b0948 | regionOne |            http://10.1.244.10:9292/           |            http://172.18.0.10:9292/           |         http://172.18.0.10:9292/         | 5c5bb3ac7ad6469ba93d910510d40418 |
| 7496650fe80248388e9df36e44b8b831 | regionOne |           http://10.1.244.10:8774/v3          |           http://172.17.0.10:8774/v3          |        http://172.17.0.10:8774/v3        | 70ee067c7bfb4d7ea3b875b371a1f588 |
| 9f7e97daa2c74b81b636209330a2a634 | regionOne |    http://10.1.244.10:8776/v1/%(tenant_id)s   |    http://172.17.0.10:8776/v1/%(tenant_id)s   | http://172.17.0.10:8776/v1/%(tenant_id)s | c70de70764f447148d40d1b9000536ec |
| a52cf6bad10a4474b4773073f52bafb9 | regionOne |     http://10.1.244.10:8773/services/Cloud    |     http://10.1.244.10:8773/services/Cloud    |  http://10.1.244.10:8773/services/Admin  | a3e65e3e67484879b4adc46316d3fed1 |
| c91911a9f3494266b1f860864b88f2ef | regionOne |          http://10.1.244.10:5000/v2.0         |          http://10.1.244.10:5000/v2.0         |      http://10.1.244.10:35357/v2.0       | da9ca3c2cb1e44589234a19ca67e7a1f |
| fbcf966813bf4b2586105a3380572aa9 | regionOne |    http://10.1.244.10:8774/v2/$(tenant_id)s   |    http://172.17.0.10:8774/v2/$(tenant_id)s   | http://172.17.0.10:8774/v2/$(tenant_id)s | 169eaf644ac34ca296c4b0746ef3e4d3 |
+----------------------------------+-----------+-----------------------------------------------+-----------------------------------------------+------------------------------------------+----------------------------------+

Comment 5 James Slagle 2015-07-16 21:33:48 UTC
Here's the traceback from /var/log/nova/nova-compute.log on the compute nodes showing it can't connect to the cinder api over the public vip:

2015-07-16 16:57:54.698 16438 ERROR oslo_messaging.rpc.dispatcher [req-a3d6956d-d065-4a88-8de3-e20c9742ddaf 3001eee18ff7450b997e8e8454064b32 9bb1c6b730a74de7aa47e284af922b9c - - -] Exception during message handling: Unable to establish connection to http://10.1.244.10:8776/v2/9bb1c6b730a74de7aa47e284af922b9c/volumes/163e48e7-6b7d-479f-8d7f-fa7d4f9df557/action
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher Traceback (most recent call last):
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 142, in _dispatch_and_reply
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher     executor_callback))
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 186, in _dispatch
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher     executor_callback)
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 130, in _do_dispatch
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher     result = func(ctxt, **new_args)
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 6591, in attach_volume
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher     bdm=bdm)
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 443, in decorated_function
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher     return function(self, context, *args, **kwargs)
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/exception.py", line 88, in wrapped
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher     payload)
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/exception.py", line 71, in wrapped
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher     return f(self, context, *args, **kw)
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 327, in decorated_function
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher     LOG.warning(msg, e, instance_uuid=instance_uuid)
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 298, in decorated_function
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher     return function(self, context, *args, **kwargs)
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 355, in decorated_function
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher     kwargs['instance'], e, sys.exc_info())
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 343, in decorated_function
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher     return function(self, context, *args, **kwargs)
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 4775, in attach_volume
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher     do_attach_volume(context, instance, driver_bdm)
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 445, in inner
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher     return f(*args, **kwargs)
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 4773, in do_attach_volume
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher     bdm.destroy()
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 4770, in do_attach_volume
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher     return self._attach_volume(context, instance, driver_bdm)
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 4793, in _attach_volume
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher     self.volume_api.unreserve_volume(context, bdm.volume_id)
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/volume/cinder.py", line 214, in wrapper
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher     res = method(self, ctx, volume_id, *args, **kwargs)
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/volume/cinder.py", line 342, in unreserve_volume
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher     cinderclient(context).volumes.unreserve(volume_id)
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/cinderclient/v2/volumes.py", line 414, in unreserve
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher     return self._action('os-unreserve', volume)
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/cinderclient/v2/volumes.py", line 375, in _action
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher     return self.api.client.post(url, body=body)
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/cinderclient/client.py", line 118, in post
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher     return self._cs_request(url, 'POST', **kwargs)
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/cinderclient/client.py", line 112, in _cs_request
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher     return self.request(url, method, **kwargs)
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/cinderclient/client.py", line 103, in request
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher     **kwargs)
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/keystoneclient/adapter.py", line 206, in request
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher     resp = super(LegacyJsonAdapter, self).request(*args, **kwargs)
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/keystoneclient/adapter.py", line 95, in request
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher     return self.session.request(url, method, **kwargs)
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/keystoneclient/utils.py", line 318, in inner
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher     return func(*args, **kwargs)
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/keystoneclient/session.py", line 382, in request
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher     resp = send(**kwargs)
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/keystoneclient/session.py", line 439, in _send_request
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher     **kwargs)
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/keystoneclient/session.py", line 439, in _send_request
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher     **kwargs)
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/keystoneclient/session.py", line 439, in _send_request
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher     **kwargs)
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/keystoneclient/session.py", line 426, in _send_request
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher     raise exceptions.ConnectionRefused(msg)
2015-07-16 16:57:54.698 16438 TRACE oslo_messaging.rpc.dispatcher ConnectionRefused: Unable to establish connection to http://10.1.244.10:8776/v2/9bb1c6b730a74de7aa47e284af922b9c/volumes/163e48e7-6b7d-479f-8d7f-fa7d4f9df557/action



Indeed I can't ping that IP:
[root@overcloud-compute-0 nova]# ping 10.1.244.10
PING 10.1.244.10 (10.1.244.10) 56(84) bytes of data.
^C
--- 10.1.244.10 ping statistics ---
5 packets transmitted, 0 received, 100% packet loss, time 3999ms


I can however ping the internalurl vip:
[root@overcloud-compute-0 nova]# ping 172.17.0.10
PING 172.17.0.10 (172.17.0.10) 56(84) bytes of data.
64 bytes from 172.17.0.10: icmp_seq=1 ttl=64 time=0.836 ms
64 bytes from 172.17.0.10: icmp_seq=2 ttl=64 time=0.271 ms
64 bytes from 172.17.0.10: icmp_seq=3 ttl=64 time=0.247 ms
^C
--- 172.17.0.10 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.247/0.451/0.836/0.272 ms

Comment 6 Giulio Fidente 2015-07-16 21:38:37 UTC
puppet-nova doesn't even seem to support setting catalog_info at the moment ... yet more services are affected by the same issue

services running on the controllers won't be affected in the short term, but nova-compute obviously is

going to try an OPM/THT patch

Comment 7 James Slagle 2015-07-16 21:41:28 UTC
some other data points:

here's the network-environment.yaml i deployed with:
http://file.rdu.redhat.com/~jslagle/scale-lab/network-environment.yaml

the only thing modified under nic-configs is the controller.yaml:
http://file.rdu.redhat.com/~jslagle/scale-lab/controller.yaml

Comment 8 Dan Prince 2015-07-17 01:46:53 UTC
When using network isolation the compute nodes don't have an IP address on the external network.

However, when I've tested Nova compute -> Cinder API access in the past (using network isolation), my compute nodes' default route (via the ctlplane) could still access the public network.

So basically I'm thinking we could simply add a configuration note such that the default gateway router for the ctlplane should support traffic to the external (public) network for this? Perhaps not ideal, but I think this is the quickest fix.

-----

Using the internal_uri Cinder endpoint I don't think is going to work because that is probably pointing to the ctlplane/provisioning network right? Without network isolation that would work fine... but with network isolation the Cinder API VIP only binds to 2 ports on the internal_api and public/external networks. So unless we are sending the internal_api IP address of the controller node (aka the Cinder API host) when we configure keystone endpoints I don't think this proposed solution is going to work at the moment. I would like to move this direction... I just think there is more configuration work to be done to make keystone endpoints more closely align with network isolation first.

Comment 9 James Slagle 2015-07-17 11:43:50 UTC
(In reply to Dan Prince from comment #8)
> When using network isolation the compute nodes don't have an IP address on
> the external network.
> 
> However, when I've tested Nova compute -> Cinder API access in the past
> (using network isolation) my compute nodes default route (via the ctlplane)
> could still access the public network.
> 
> So basically I'm thinking we could simply add a configuration note such that
> the default gateway router for the ctlplane should support traffic to the
> external (public) network for this? Perhaps not ideal, but I think this is
> the quickest fix.
> 
> -----
> 
> Using the internal_uri Cinder endpoint I don't think is going to work
> because that is probably pointing to the ctlplane/provisioning network
> right? Without network isolation that would work fine... but with network
> isolation the Cinder API VIP only binds to 2 ports on the internal_api and
> public/external networks. So unless we are sending the internal_api IP
> address of the controller node (aka the Cinder API host) when we configure
> keystone endpoints I don't think this proposed solution is going to work at
> the moment. I would like to move this direction... I just think there is
> more configuration work to be done to make keystone endpoints more closely
> align with network isolation first.


our internal endpoints point to whatever you have set as the internalapi network. see the comments above where i show the endpoints. This is why jistr added these as stack outputs in:
https://review.openstack.org/#/c/199554/

i've tested switching over to use internalURL on my compute nodes, and it fixes the issue so that i can successfully attach cinder volumes.
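The switch described above can be sketched as a one-line edit plus a service restart. This dry-runs against a scratch copy of the file; on a real compute node the target would be /etc/nova/nova.conf, followed by `systemctl restart openstack-nova-compute`:

```shell
# Sketch of the manual workaround (not the shipped t-h-t fix): flip
# catalog_info from publicURL to internalURL. Shown against a scratch
# file so it can be dry-run safely.
conf=$(mktemp)
printf '#catalog_info=volumev2:cinderv2:publicURL\n' > "$conf"

# Uncomment the option and switch the endpoint type:
sed -i 's|^#\?catalog_info=volumev2:cinderv2:publicURL|catalog_info=volumev2:cinderv2:internalURL|' "$conf"

grep '^catalog_info' "$conf"
rm -f "$conf"
```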

Comment 10 James Slagle 2015-07-17 11:45:11 UTC
looks like there were similar bugs for staypuft and ofi:

https://bugzilla.redhat.com/show_bug.cgi?id=1240362 
https://bugzilla.redhat.com/show_bug.cgi?id=1190284

Comment 11 Dan Prince 2015-07-17 12:15:22 UTC
Gotcha. And I do agree that using internal endpoints is moving in the right direction. That is exactly my concern though: we aren't automatically setting up the internal endpoints for specific services anywhere, are we? Specifically:

http://git.openstack.org/cgit/openstack/tripleo-incubator/tree/scripts/devtest_overcloud.sh#n588

https://github.com/rdo-management/python-rdomanager-oscplugin/blob/master/rdomanager_oscplugin/v1/overcloud_deploy.py#L412

Are we suggesting that manual configuration of internal endpoints is going to need to be used if you are using the network isolation (for now)?
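If the internal endpoints did have to be registered manually for now, with the Kilo-era keystone v2 CLI it would look something like this. Illustrative only, requiring a deployed overcloud; the service ID and VIPs are the ones shown in the service/endpoint listings in comment 4:

```shell
source overcloudrc
# Register a cinderv2 endpoint whose internalurl points at the
# internal_api VIP (172.17.0.10) rather than the public VIP:
keystone endpoint-create \
  --region regionOne \
  --service-id 44cb57221b014a6dbb94ce4058dca81e \
  --publicurl   'http://10.1.244.10:8776/v2/%(tenant_id)s' \
  --internalurl 'http://172.17.0.10:8776/v2/%(tenant_id)s' \
  --adminurl    'http://172.17.0.10:8776/v2/%(tenant_id)s'
```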

Comment 12 Dan Prince 2015-07-17 12:33:28 UTC
Okay, so there is an unlanded patch here that I think is required to make the internal endpoint fully automated:

https://review.gerrithub.io/#/c/238901/6/rdomanager_oscplugin/v1/overcloud_deploy.py

We also lack an equivalent fix to the upstream TripleO tooling (setup-endpoints). Until we have that in both places I'm not sure the proposed t-h-t fix should land upstream... but it would probably be okay for downstream so long as we have the oscplugin logic too.

Comment 15 nlevinki 2015-07-21 09:00:45 UTC
/etc/nova/nova.conf on the compute node
As you can see, it is configured to use internalURL:
# Info to match when looking for cinder in the service
# catalog. Format is: separated values of the form:
# <service_type>:<service_name>:<endpoint_type> (string value)
#catalog_info=volumev2:cinderv2:publicURL
catalog_info=volumev2:cinderv2:internalURL

Comment 17 errata-xmlrpc 2015-08-05 13:59:46 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2015:1549

