Bug 1661022

Summary: [RFE][cinder]: Attach a single volume to multiple hosts
Product: Red Hat OpenStack Reporter: Eric Harney <eharney>
Component: openstack-cinder    Assignee: Eric Harney <eharney>
Status: CLOSED ERRATA QA Contact: Tzach Shefi <tshefi>
Severity: medium Docs Contact: Laura Marsh <lmarsh>
Priority: high    
Version: 15.0 (Stein)    CC: abishop, gcharot, geguileo, gregraka, jobernar, lmarsh, pgrist, scohen
Target Milestone: Upstream M2    Keywords: FutureFeature, Triaged
Target Release: 15.0 (Stein)   
Hardware: Unspecified   
OS: Unspecified   
URL: https://blueprints.launchpad.net/cinder/+spec/multi-attach-volume
Whiteboard:
Fixed In Version: openstack-cinder-14.0.1-0.20190420004000.84d3d12.el8ost Doc Type: Enhancement
Doc Text:
In Red Hat OpenStack Platform 15, if the back end driver supports it, you can now simultaneously attach a volume to multiple machines in both the Block Storage service (cinder) and the Compute service (nova). This feature addresses the use case of clustered application workloads, which typically require active/active or active/standby scenarios.
Story Points: ---
Clone Of: 1033180
: 1746984 Environment:
Last Closed: 2019-09-21 11:19:41 UTC Type: ---
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On: 1033180, 1033185, 1623365, 1625971    
Bug Blocks: 1691369, 1692542, 1746984, 1761502    

Description Eric Harney 2018-12-19 20:54:29 UTC
+++ This bug was initially created as a clone of Bug #1033180 +++

Cloned from launchpad blueprint https://blueprints.launchpad.net/cinder/+spec/multi-attach-volume.

Description:

Currently, the existing Block Storage (cinder) drivers allow a volume to be attached to only a single VM instance via iSCSI or Fibre Channel. To support a cluster, a single volume would need to be exported to multiple hosts. vCenter is used as the example here, but any hypervisor that supports clusters could require this functionality.

The Cinder etherpad https://etherpad.openstack.org/summit-havana-cinder-multi-attach-and-ro-volumes, presented at the Havana Design Summit, resulted in the majority of the work being assigned to the Nova project. This work includes the looping mechanism responsible for calling the Cinder drivers for each host in the cluster.

The Cinder work is not yet fully determined, but would require minor changes to track that a volume has multiple hosts attached to it, possibly changing the database entry that records a volume's single attached host into a list of hosts. More work may be required once tests have been run to verify the data flow between multiple attach points.
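The tracking change described above can be sketched as a data-model change: instead of recording a single attached host per volume, the volume carries a list of attachment records. The following is a minimal illustration only; the class and field names are hypothetical and do not reflect the actual Cinder database schema.

```python
from dataclasses import dataclass, field

@dataclass
class Attachment:
    # Hypothetical attachment record: which host/instance holds the volume.
    host: str
    instance_uuid: str

@dataclass
class Volume:
    id: str
    multiattach: bool = False
    # A list of attachments replaces the old single attached-host entry.
    attachments: list = field(default_factory=list)

    def attach(self, host, instance_uuid):
        # A non-multiattach volume may have at most one attachment.
        if self.attachments and not self.multiattach:
            raise ValueError("volume is already attached and is not multiattach")
        self.attachments.append(Attachment(host, instance_uuid))

    @property
    def status(self):
        return "in-use" if self.attachments else "available"

vol = Volume(id="vol-1", multiattach=True)
vol.attach("compute-0", "inst-a")
vol.attach("compute-1", "inst-b")
print(vol.status, len(vol.attachments))  # in-use 2
```

The key design point is that attach/detach becomes an append/remove on the list, so the volume only returns to "available" when the last attachment is gone.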

This blueprint is for the limited Cinder work required by the following Nova blueprint: https://blueprints.launchpad.net/nova/+spec/fc-support-for-vcenter-driver

Specification URL (additional information):

None
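From the user's side, the delivered feature is driven through a volume type with the multiattach property. A rough CLI sketch follows; it assumes a deployed Stein (RHOSP 15) cloud whose back end driver supports multi-attach, and the names `multiattach`, `vol1`, `instanceA`, and `instanceB` are illustrative placeholders.

```shell
# Create a volume type that allows multi-attach (assumes a live cloud
# and a back end driver with multi-attach support).
openstack volume type create multiattach
openstack volume type set --property multiattach="<is> True" multiattach

# Create a volume of that type, then attach it to two instances.
openstack volume create --type multiattach --size 10 vol1
openstack server add volume instanceA vol1
openstack server add volume instanceB vol1
```

Without the multiattach property on the volume type, the second `server add volume` is expected to fail because the volume is already in use.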

Comment 12 Tzach Shefi 2019-07-15 15:53:23 UTC
Verified on:
puppet-cinder-14.4.1-0.20190420083336.1cf0604.el8ost.noarch
openstack-cinder-14.0.1-0.20190607000407.23d1a72.el8ost.noarch
python3-cinderclient-4.2.0-0.20190520060354.953243d.el8ost.noarch
python3-cinder-14.0.1-0.20190607000407.23d1a72.el8ost.noarch

Executed the test plan on a mix of NetApp iSCSI and LVM backends:

Ideally I would have used a real multi-reader/writer application; not having one at this stage, I used ext4.

Hit a few corner cases, which shouldn't block RFE verification:
Negative test case that failed:
https://bugzilla.redhat.com/show_bug.cgi?id=1729755

On the NetApp iSCSI backend, I hit a detach issue in one of my cases.
https://bugzilla.redhat.com/show_bug.cgi?id=1729898


Another case (blocked) on which I'm awaiting a dev reply: failing to extend an attached multi-attach volume. I'm unsure whether this is supported or not; again, more of a corner case.

Comment 21 errata-xmlrpc 2019-09-21 11:19:41 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2019:2811