Bug 1363787

Summary: RFE: virtio-vsock support for OpenStack Manila
Product: Red Hat OpenStack
Reporter: Sayan Saha <ssaha>
Component: openstack-manila
Assignee: Tom Barron <tbarron>
Status: CLOSED WONTFIX
QA Contact: vhariria
Severity: low
Docs Contact: Don Domingo <ddomingo>
Priority: low
Version: 13.0 (Queens)
CC: anharris, ceph-eng-bugs, flucifre, gouthamr, hannsj_uhl, nchandek, pdonnell, sandyada, scohen, tbarron, vhariria, yafu
Target Milestone: ---
Keywords: FutureFeature
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Clones: 1415819
Environment:
Last Closed: 2020-02-07 22:29:17 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1291282, 1291284, 1291286, 1291851, 1294880, 1294884, 1378137, 1382695, 1427553, 1464362, 1464390, 1470203, 1470219, 1479877, 1518995, 1518996, 1518997    
Bug Blocks: 1415819    

Description Sayan Saha 2016-08-03 14:43:48 UTC
This is a high-level RFE explaining the need for virtio-vsock enablement in RHEL. It will also serve as a tracker for all of the virtio-vsock-related work that needs to happen and is being requested from the broader team.

The motivation for this originates with CephFS and OpenStack Manila. In an OpenStack environment we already have Ceph RBD + Cinder for exposing block images to VMs directly; what we need here is specifically dynamically provisioned shared filesystems. We think the best way to expose CephFS to OpenStack guest VMs is to run the Ceph client on the hypervisor (inside an nfs-ganesha daemon) and export the filesystem into the guest via NFS over VSOCK (see the socket-level sketch after the list below). Compared with TCP/IP NFS gateways, the key advantages are:
 * Security: guests don't need any extra TCP/IP connectivity to access the shared filesystem
 * Simplicity: we don't have to spin up HA pairs of virtual machines to act as NFS gateways to virtual machines
 * Scalability: rather than having to independently scale a cluster of NFS servers in front of the Ceph filesystem, scaling falls out naturally, with one NFS server per hypervisor.
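
For concreteness, the socket-level plumbing referenced above looks roughly like this: AF_VSOCK sockets are addressed by a (CID, port) pair rather than an IP address, which is why the guest needs no extra TCP/IP connectivity to reach a service on its own hypervisor. This is only a sketch, not part of the proposed implementation: the port number and payload are made up, it assumes a Linux guest with the vsock driver loaded and Python 3.7+, and a real NFS-over-VSOCK mount would be performed by the kernel NFS client rather than by userspace code like this.

    # Guest-side sketch: connect to a hypothetical service the hypervisor
    # exposes on a vsock port. CID 2 always addresses the host from inside
    # the guest.
    import socket

    HOST_CID = 2   # VMADDR_CID_HOST: well-known CID for the hypervisor
    PORT = 1234    # hypothetical service port, not an NFS port

    with socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM) as s:
        s.connect((HOST_CID, PORT))    # address is (CID, port), no IP involved
        s.sendall(b"ping from guest\n")
        print(s.recv(4096))

The same addressing model is what lets the nfs-ganesha instance on each hypervisor serve only its own local guests, without any routable network between them.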

Sage's talk from the OpenStack Summit covers the pros and cons of the various approaches:
https://www.youtube.com/watch?v=dNTCBouMaAU ("Better FS plumbing" from 18:50)

There's a thread here which brings together the various components of this (Bruce, you were CC'd):
http://www.spinics.net/lists/ceph-devel/msg26797.html

Comment 4 Ademar Reis 2017-08-04 14:57:24 UTC
I'm adding all RHEL BZs related to virtio-vsock to this tracker. Some of them are nice-to-have features (such as support for vsock in Wireshark, useful for troubleshooting).

Comment 5 Tom Barron 2017-09-07 21:27:14 UTC
*** Bug 1415819 has been marked as a duplicate of this bug. ***

Comment 11 Ademar Reis 2020-02-07 22:29:17 UTC
The use-case that required this is now covered by virtio-fs.

For more about virtio-fs, please refer to this BZ (and its dependencies): bug 1694164