Red Hat Bugzilla – Bug 1033185
[RFE][nova]: a shared volume can be accessed by many instances
Last modified: 2018-02-09 10:11:15 EST
Cloning to Nova as there are impacts here too. No Nova-specific blueprint exists (to my knowledge) covering these impacts at this time.
+++ This bug was initially created as a clone of Bug #1033178 +++
Cloned from launchpad blueprint https://blueprints.launchpad.net/cinder/+spec/shared-volume.
Provide the ability to attach a single volume to multiple instances simultaneously. Doing this read/write raises a number of issues around data corruption and coordination, so as a first pass it would be very useful to introduce a read-only option that could be specified during attach and used to allow simultaneous attachment to multiple instances.
Most of this will require work in Nova/Compute, but Cinder will need some awareness of multi-attach as well, and the ability to mark a volume as read-only might also be useful.
Read-only volumes like this could be especially useful for things like images and even disk-to-disk (D2D) backups.
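To make the intended flow concrete, here is a rough CLI sketch of how it might look once the pieces land. The volume-type based multi-attach enablement, the names, and the UUID placeholder are assumptions for illustration, not implemented behavior; readonly-mode-update is an existing Cinder call.

    # Hypothetical enablement via a volume type on a backend that
    # supports multi-attach (assumed, not yet implemented):
    cinder type-create multiattach
    cinder type-key multiattach set multiattach="<is> True"
    cinder create 10 --volume-type multiattach --name shared-vol

    # Mark the volume read-only before sharing it (existing Cinder API):
    cinder readonly-mode-update shared-vol True

    # Attach the same volume to more than one instance:
    nova volume-attach instance-1 <shared-vol-uuid>
    nova volume-attach instance-2 <shared-vol-uuid>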
There is also a general need for multi-attach in Fibre Channel (FC) environments.
I'm currently working this against https://blueprints.launchpad.net/cinder/+spec/multi-attach-volume, though it overlaps with the blueprint in the description.
Note that one of the use cases here is clusters that use SCSI-3 persistent reservation (PR) based fencing, which expands the scope quite a bit to include virtio-scsi and NPIV. Libvirt is most of the way through implementing expanded NPIV support, which might help here.
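For reference, the guest-visible end state for the SCSI-3 PR case would look something like the libvirt disk definition below. This is an illustrative, hand-written snippet assuming SCSI passthrough over virtio-scsi; the device path and target name are placeholders, and it is not output generated by Nova.

    <!-- Illustrative only: a shared LUN passed through on a virtio-scsi
         controller. device='lun' passes SCSI commands (including
         persistent reservations) through to the guest, and <shareable/>
         allows the disk to be attached to more than one domain. -->
    <controller type='scsi' model='virtio-scsi'/>
    <disk type='block' device='lun'>
      <driver name='qemu' type='raw'/>
      <source dev='/dev/mapper/shared-lun'/>
      <target dev='sda' bus='scsi'/>
      <shareable/>
    </disk>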
This bug was accidentally moved from POST to MODIFIED by an error in automation; please contact firstname.lastname@example.org with any questions.
Missed Newton approvals; moving to Ocata.
Moving to Lee initially, based on a discussion we had today. At the Barcelona OpenStack Summit, this feature and issues related to it led, among other things, to discussion of a significant update/rewrite of the way Cinder and Nova work together to attach volumes.
The long and the short of it is that there is quite a lot of work to do here. I would like to keep this on our RHOSP 11/Ocata work list to ensure forward progress, but I am not confident the end-to-end feature will be 100% complete in that time frame.
How has this been tracking upstream through Pike? How much of it remains outstanding?