Bug 1114022
| Summary: | ConnectionError when attaching volume to instance | | |
|---|---|---|---|
| Product: | Red Hat OpenStack | Reporter: | Jeff Dexter <jdexter> |
| Component: | openstack-nova | Assignee: | Nikola Dipanov <ndipanov> |
| Status: | CLOSED NOTABUG | QA Contact: | Ami Jeain <ajeain> |
| Severity: | urgent | Priority: | unspecified |
| Version: | 4.0 | CC: | benglish, eglynn, jdexter, ndipanov, yeylon |
| Target Milestone: | --- | Target Release: | 5.0 (RHEL 7) |
| Hardware: | x86_64 | OS: | Linux |
| Type: | Bug | Doc Type: | Bug Fix |
| Last Closed: | 2014-07-23 14:59:34 UTC | | |
Created attachment 912812 [details]
pcs setup 2
2014-06-26 14:55:25.144 60554 WARNING urllib3.connectionpool [-] Retrying (0 attempts remain) after connection broken by 'BadStatusLine('',)': /v1/faa84109e5d34c61adf39ce8d8921471/volumes/d1553f38-efc9-497a-945b-fb2e51083097/action
2014-06-26 14:55:25.144 60554 ERROR nova.compute.manager [req-047c9e12-1e74-4f6c-a9f4-bc8bb55f0cb7 d1cda3288e154c3e99690d29e56f4414 faa84109e5d34c61adf39ce8d8921471] [instance: 1039761e-641e-47ad-b375-ad0e884b9b53] Failed to connect to volume d1553f38-efc9-497a-945b-fb2e51083097 while attaching at /dev/vdb
2014-06-26 14:55:25.144 60554 TRACE nova.compute.manager [instance: 1039761e-641e-47ad-b375-ad0e884b9b53] Traceback (most recent call last):
2014-06-26 14:55:25.144 60554 TRACE nova.compute.manager [instance: 1039761e-641e-47ad-b375-ad0e884b9b53] File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 3668, in _attach_volume
2014-06-26 14:55:25.144 60554 TRACE nova.compute.manager [instance: 1039761e-641e-47ad-b375-ad0e884b9b53] connector)
2014-06-26 14:55:25.144 60554 TRACE nova.compute.manager [instance: 1039761e-641e-47ad-b375-ad0e884b9b53] File "/usr/lib/python2.6/site-packages/nova/volume/cinder.py", line 176, in wrapper
2014-06-26 14:55:25.144 60554 TRACE nova.compute.manager [instance: 1039761e-641e-47ad-b375-ad0e884b9b53] res = method(self, ctx, volume_id, *args, **kwargs)
2014-06-26 14:55:25.144 60554 TRACE nova.compute.manager [instance: 1039761e-641e-47ad-b375-ad0e884b9b53] File "/usr/lib/python2.6/site-packages/nova/volume/cinder.py", line 274, in initialize_connection
2014-06-26 14:55:25.144 60554 TRACE nova.compute.manager [instance: 1039761e-641e-47ad-b375-ad0e884b9b53] connector)
2014-06-26 14:55:25.144 60554 TRACE nova.compute.manager [instance: 1039761e-641e-47ad-b375-ad0e884b9b53] File "/usr/lib/python2.6/site-packages/cinderclient/v1/volumes.py", line 321, in initialize_connection
2014-06-26 14:55:25.144 60554 TRACE nova.compute.manager [instance: 1039761e-641e-47ad-b375-ad0e884b9b53] {'connector': connector})[1]['connection_info']
2014-06-26 14:55:25.144 60554 TRACE nova.compute.manager [instance: 1039761e-641e-47ad-b375-ad0e884b9b53] File "/usr/lib/python2.6/site-packages/cinderclient/v1/volumes.py", line 250, in _action
2014-06-26 14:55:25.144 60554 TRACE nova.compute.manager [instance: 1039761e-641e-47ad-b375-ad0e884b9b53] return self.api.client.post(url, body=body)
2014-06-26 14:55:25.144 60554 TRACE nova.compute.manager [instance: 1039761e-641e-47ad-b375-ad0e884b9b53] File "/usr/lib/python2.6/site-packages/cinderclient/client.py", line 210, in post
2014-06-26 14:55:25.144 60554 TRACE nova.compute.manager [instance: 1039761e-641e-47ad-b375-ad0e884b9b53] return self._cs_request(url, 'POST', **kwargs)
2014-06-26 14:55:25.144 60554 TRACE nova.compute.manager [instance: 1039761e-641e-47ad-b375-ad0e884b9b53] File "/usr/lib/python2.6/site-packages/cinderclient/client.py", line 199, in _cs_request
2014-06-26 14:55:25.144 60554 TRACE nova.compute.manager [instance: 1039761e-641e-47ad-b375-ad0e884b9b53] raise exceptions.ConnectionError(msg)
2014-06-26 14:55:25.144 60554 TRACE nova.compute.manager [instance: 1039761e-641e-47ad-b375-ad0e884b9b53] ConnectionError: Unable to establish connection: HTTPConnectionPool(host='172.20.64.20', port=8776): Max retries exceeded with url: /v1/faa84109e5d34c61adf39ce8d8921471/volumes/d1553f38-efc9-497a-945b-fb2e51083097/action
2014-06-26 14:55:25.144 60554 TRACE nova.compute.manager [instance: 1039761e-641e-47ad-b375-ad0e884b9b53]
2014-06-26 14:55:25.432 60554 ERROR nova.openstack.common.rpc.amqp [req-047c9e12-1e74-4f6c-a9f4-bc8bb55f0cb7 d1cda3288e154c3e99690d29e56f4414 faa84109e5d34c61adf39ce8d8921471] Exception during message handling
2014-06-26 14:55:25.432 60554 TRACE nova.openstack.common.rpc.amqp Traceback (most recent call last):
2014-06-26 14:55:25.432 60554 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py", line 461, in _process_data
2014-06-26 14:55:25.432 60554 TRACE nova.openstack.common.rpc.amqp **args)
2014-06-26 14:55:25.432 60554 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/dispatcher.py", line 172, in dispatch
2014-06-26 14:55:25.432 60554 TRACE nova.openstack.common.rpc.amqp result = getattr(proxyobj, method)(ctxt, **kwargs)
2014-06-26 14:55:25.432 60554 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/nova/exception.py", line 90, in wrapped
2014-06-26 14:55:25.432 60554 TRACE nova.openstack.common.rpc.amqp payload)
2014-06-26 14:55:25.432 60554 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/nova/exception.py", line 73, in wrapped
2014-06-26 14:55:25.432 60554 TRACE nova.openstack.common.rpc.amqp return f(self, context, *args, **kw)
2014-06-26 14:55:25.432 60554 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 244, in decorated_function
2014-06-26 14:55:25.432 60554 TRACE nova.openstack.common.rpc.amqp pass
2014-06-26 14:55:25.432 60554 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 230, in decorated_function
2014-06-26 14:55:25.432 60554 TRACE nova.openstack.common.rpc.amqp return function(self, context, *args, **kwargs)
2014-06-26 14:55:25.432 60554 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 272, in decorated_function
2014-06-26 14:55:25.432 60554 TRACE nova.openstack.common.rpc.amqp e, sys.exc_info())
2014-06-26 14:55:25.432 60554 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 259, in decorated_function
2014-06-26 14:55:25.432 60554 TRACE nova.openstack.common.rpc.amqp return function(self, context, *args, **kwargs)
2014-06-26 14:55:25.432 60554 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 3657, in attach_volume
2014-06-26 14:55:25.432 60554 TRACE nova.openstack.common.rpc.amqp context, instance, mountpoint)
2014-06-26 14:55:25.432 60554 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 3652, in attach_volume
2014-06-26 14:55:25.432 60554 TRACE nova.openstack.common.rpc.amqp mountpoint, instance)
2014-06-26 14:55:25.432 60554 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 3676, in _attach_volume
2014-06-26 14:55:25.432 60554 TRACE nova.openstack.common.rpc.amqp self.volume_api.unreserve_volume(context, volume_id)
2014-06-26 14:55:25.432 60554 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 3668, in _attach_volume
2014-06-26 14:55:25.432 60554 TRACE nova.openstack.common.rpc.amqp connector)
2014-06-26 14:55:25.432 60554 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/nova/volume/cinder.py", line 176, in wrapper
2014-06-26 14:55:25.432 60554 TRACE nova.openstack.common.rpc.amqp res = method(self, ctx, volume_id, *args, **kwargs)
2014-06-26 14:55:25.432 60554 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/nova/volume/cinder.py", line 274, in initialize_connection
2014-06-26 14:55:25.432 60554 TRACE nova.openstack.common.rpc.amqp connector)
2014-06-26 14:55:25.432 60554 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/cinderclient/v1/volumes.py", line 321, in initialize_connection
2014-06-26 14:55:25.432 60554 TRACE nova.openstack.common.rpc.amqp {'connector': connector})[1]['connection_info']
2014-06-26 14:55:25.432 60554 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/cinderclient/v1/volumes.py", line 250, in _action
2014-06-26 14:55:25.432 60554 TRACE nova.openstack.common.rpc.amqp return self.api.client.post(url, body=body)
2014-06-26 14:55:25.432 60554 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/cinderclient/client.py", line 210, in post
2014-06-26 14:55:25.432 60554 TRACE nova.openstack.common.rpc.amqp return self._cs_request(url, 'POST', **kwargs)
2014-06-26 14:55:25.432 60554 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/cinderclient/client.py", line 199, in _cs_request
2014-06-26 14:55:25.432 60554 TRACE nova.openstack.common.rpc.amqp raise exceptions.ConnectionError(msg)
2014-06-26 14:55:25.432 60554 TRACE nova.openstack.common.rpc.amqp ConnectionError: Unable to establish connection: HTTPConnectionPool(host='172.20.64.20', port=8776): Max retries exceeded with url: /v1/faa84109e5d34c61adf39ce8d8921471/volumes/d1553f38-efc9-497a-945b-fb2e51083097/action
2014-06-26 14:55:25.432 60554 TRACE nova.openstack.common.rpc.amqp
My concern is that the cluster is not set up correctly and we are hitting an issue with that, since 172.20.64.20 is the VIP address. It may be coincidental, but seeing the failure happen on the third attempt in two different tests makes me think one of the three controller nodes is not working quite right.

Jeff, according to the trace you posted in comment #2, the issue appears to be connectivity between the compute host and the host running the Cinder API service (172.20.64.20). Is that the correct host? If it is not, this is probably a matter of misconfiguration (worth checking: cinder_catalog_info or cinder_endpoint_template, since these options, if set, override the Cinder service-catalog lookup). However, if such a failure leaves part of the system in an inconsistent state, that might be a bug worth looking into.

You say: "When it fails the SAN is still mapping the drive to the compute host, so when we try to reattach on either the same host node or a different node, it fails because the mapping is already there." Could you clarify a bit more what you mean by 'the SAN still mapping the drive', and what the exact failure is in that case?

Nikola, we can close this as notabug.
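The configuration check suggested above can be run on the compute node. A hedged sketch (the config path is the stock one for this release, and the VIP address is taken from the traceback; the `|| echo` fallbacks just let the commands degrade gracefully when nothing matches):

```shell
# Look for either override option in the compute node's nova config.
# If one is set, it takes precedence over the Cinder service-catalog lookup.
grep -E '^(cinder_catalog_info|cinder_endpoint_template)' /etc/nova/nova.conf \
  || echo 'no cinder endpoint override set'

# Confirm the compute host can actually reach the Cinder API behind the VIP.
curl -sf -o /dev/null http://172.20.64.20:8776/ \
  && echo 'cinder API reachable' || echo 'cinder API unreachable'
```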
Created attachment 912811 [details]
pcs setup

Description of problem:
When attaching volumes to VMs, the attach randomly fails (seen on 2 different nodes). When it fails, the SAN is still mapping the drive to the compute host, so when we try to reattach on either the same host node or a different node, it fails because the mapping is already there. The same issue also happens when trying to detach volumes. The controllers are clustered using PCS; Consulting set up the environment.

Version-Release number of selected component (if applicable):
RHOS 4.0 A4 release

How reproducible:
Currently 1 out of 3 attempts

Steps to Reproduce:
1. Create VMs
2. Create volumes
3. Attach volumes to a VM

Actual results:
The volume is left as 'available', and the mapping is created on the SAN

Expected results:
The volume is attached to the instance

Additional info:
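The reproduction steps above can be sketched with the standard nova and cinder CLIs of that era; all names, IDs, and sizes below are placeholders, not values from this report:

```
# 1. Create a VM (image and flavor are placeholders)
nova boot --image <image-id> --flavor m1.small testvm

# 2. Create a 1 GB volume
cinder create --display-name testvol 1

# 3. Attach the volume to the VM at /dev/vdb
nova volume-attach testvm <volume-id> /dev/vdb

# On failure, check whether the volume is stuck in 'available'
# while the SAN still maps the LUN to the compute host
cinder list
```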