Bug 1264582 - Cinder volume fails to attach/detach
Summary: Cinder volume fails to attach/detach
Keywords:
Status: CLOSED INSUFFICIENT_DATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-cinder
Version: 5.0 (RHEL 7)
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: 5.0 (RHEL 7)
Assignee: Eric Harney
QA Contact: nlevinki
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-09-18 21:45 UTC by Jeremy
Modified: 2019-08-15 05:28 UTC

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-09-29 23:40:20 UTC
Target Upstream Version:
Embargoed:
jmelvin: needinfo-



Comment 4 Sergey Gotliv 2015-09-21 06:24:33 UTC
Robb/Jeremy,

According to the bug description, all of your troubles happened on the compute node, at least that's what I understand.  Please confirm that.

Comment 5 Robb Manes 2015-09-21 13:17:36 UTC
Hi Sergey;

Sure; let me summarize what we were able to determine over the weekend.

In a setup with three controller nodes and five compute nodes, with Cinder running on the controllers (backing storage presented to the compute nodes over iSCSI, as directed by Cinder on the controllers), a series of volumes suddenly could not be attached to or detached from a running instance.  Only a single instance was tested, to my knowledge, and I don't know whether any other instances were running on this particular compute node.

The crux of this was controller01, which was serving compute01, where the instance was hosted at that time, as its Cinder client.  During operation the controller, which is part of a high-availability Cinder setup via Pacemaker, hit a libqb bug that caused the controller01 node to reboot, as described in this document:

https://access.redhat.com/solutions/1415463

While controller01 was rebooting, the state of the storage could not be altered from Horizon.  The service successfully migrated over to controller02, the Pacemaker standby node, and the question remains as to why, after the migration, the new controller running Cinder was unable to alter the state of the storage.
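
For what it's worth, the failover itself can be confirmed from the surviving controller with standard commands; a minimal sketch (exact output will of course depend on the environment):

  # On controller02, after the failover:
  pcs status            # confirm controller01 is offline and its resources have moved
  cinder service-list   # confirm cinder-volume is reported "up" on the surviving host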

The condition was cleared by rebooting the instance, which to my knowledge was a RHEL 6 VM running on KVM.

I'll test this myself as well, just to see if this is reproducible on similar versions.

Comment 6 Sergey Gotliv 2015-09-24 06:45:51 UTC
Robb/Jeremy,

According to the bug description, the Nova compute node lost its iSCSI connection to the iSCSI targets.  Most probably this has nothing to do with Cinder.  Cinder is responsible for creating volumes and exporting their connection information to Nova; from that point on, Nova is connected directly to the iSCSI targets.  So if a detach operation fails on the Nova compute node, that probably means something happened between the compute node and the iSCSI targets.
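
For example, that direct connection can be checked on the compute node itself, without involving Cinder at all (a minimal sketch):

  # On the compute node:
  iscsiadm -m session        # list active iSCSI sessions to the targets
  iscsiadm -m session -P 3   # also show the attached SCSI devices per session

If the sessions are gone or stuck here, attach/detach failures are expected regardless of what Cinder does.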

Unfortunately, I can't say more without the relevant nova-compute and messages logs from the compute node.  The sosreports contain only controller logs (nova-api and scheduler).  Please get and upload sosreports from the compute nodes ASAP.
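
For reference, a plain sosreport run on each compute node should be enough; the logs I need most are /var/log/nova/nova-compute.log and /var/log/messages:

  # On each compute node:
  sosreport    # collects the nova-compute and system logs, among others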

Comment 7 Robb Manes 2015-09-24 17:41:49 UTC
Hi Sergey:

> According to the bug description, Nova compute node lost iSCSI connection to
> the iSCSI targets.

Not exactly; the iSCSI target appears to have remained connected to the compute01 host despite the fact that its Cinder controller had gone offline due to the node reboot.  The issue only appeared, in our case, once controller02 took over Cinder from the now-rebooting controller01: we were unable to attach/detach the storage from the compute01 host until the instance was rebooted.

Unfortunately, we weren't there when they rebooted the instance.  This could be tested, I think, by setting up HA Cinder with backing storage presented to the compute nodes via iSCSI, then using pcs to fence the currently active Cinder node and checking whether shared storage can still be attached to and detached from running instances; a rough sketch follows.  Of course, if this is racy at all, we may not catch it in the middle of the same operations that were in flight when the active Cinder service went down, so it may be difficult to reproduce as described.
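
Roughly, something like the following (a sketch only; the node name, instance name, and volume ID are placeholders):

  # From any cluster node, note which controller currently runs cinder-volume:
  pcs status

  # Fence the currently active Cinder controller:
  pcs stonith fence controller01

  # Once the standby has taken over, from a client with credentials loaded:
  nova volume-attach test-instance <volume-id>   # try attaching a volume
  nova volume-detach test-instance <volume-id>   # and detaching it again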

> Unfortunately, I can't say more relevant nova compute and messages logs from
> the compute node. Sosreports contain only controller logs (nova-api and
> scheduler). Please, get and upload sosreports from the compute nodes ASAP.

Jeremy, could you do this for us, please?  It'd probably be best to host them on a system where Sergey can look at them, as we also collected all of /var in the meantime.  Thanks!

Comment 8 Sergey Gotliv 2015-09-27 13:21:41 UTC
Robb/Jeremy,

The link to the controller logs on the collab shell from comment #1 doesn't work for me anymore.  Can you please restore it?

