This is a high-level RFE explaining the need for virtio-vsock enablement in RHEL. This RFE will also serve as a tracker for all the virtio-vsock related work that needs to happen and is being requested from the broader team.

The motivation for this originates with CephFS and OpenStack Manila. In an OpenStack environment we already have Ceph RBD + Cinder for exposing block images to VMs directly. What we specifically need is dynamically provisioned shared filesystems. We think the best way to expose CephFS to OpenStack guest VMs is to run the Ceph client on the hypervisor (inside an NFS-Ganesha daemon) and expose the filesystem into the guest via NFS over VSOCK (see the transport sketch below).

Compared with using TCP/IP NFS gateways, the key advantages are:

* Security: guests don't need any extra TCP/IP connectivity to access the shared filesystem.
* Simplicity: we don't have to spin up HA pairs of virtual machines to act as NFS gateways to other virtual machines.
* Scalability: rather than having to independently scale a cluster of NFS servers for accessing the Ceph filesystem, we get natural scaling because there is one NFS server per hypervisor.

Sage's talk from the OpenStack Summit covers the pros and cons of the various approaches: https://www.youtube.com/watch?v=dNTCBouMaAU ("Better FS plumbing" from 18:50)

There's a thread here which brings together the various components of this (Bruce, you were CC'd): http://www.spinics.net/lists/ceph-devel/msg26797.html
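For reference, here is a minimal sketch of what a guest-side vsock connection looks like, assuming the standard AF_VSOCK socket API from <linux/vm_sockets.h>. The actual NFS-over-VSOCK transport lives in the kernel NFS client and in NFS-Ganesha, not in application code; this is only meant to show that the guest addresses the hypervisor by CID, with no TCP/IP configuration involved. The port number here is arbitrary, chosen only for illustration.

/* Illustrative sketch: connect from a guest to a vsock service on the
 * hypervisor. The hypervisor is always reachable at CID 2
 * (VMADDR_CID_HOST); the port is an example value. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/vm_sockets.h>

int main(void)
{
    int fd = socket(AF_VSOCK, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket(AF_VSOCK)");
        return 1;
    }

    struct sockaddr_vm addr;
    memset(&addr, 0, sizeof(addr));
    addr.svm_family = AF_VSOCK;
    addr.svm_cid = VMADDR_CID_HOST;  /* CID 2: the hypervisor */
    addr.svm_port = 2049;            /* example port (2049 is the usual NFS port over TCP) */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");
        close(fd);
        return 1;
    }

    printf("connected to the host over vsock\n");
    close(fd);
    return 0;
}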
I'm adding all RHEL BZs related to virtio-vsock to this tracker. Some of them are nice-to-have features (such as support for vsock in Wireshark, useful for troubleshooting).
*** Bug 1415819 has been marked as a duplicate of this bug. ***
The use case that motivated this RFE is now covered by virtio-fs. For more about virtio-fs, please refer to this BZ (and its dependencies): bug 1694164