Bug 1291286 - NFS client support for virtio-vsock
Summary: NFS client support for virtio-vsock
Keywords:
Status: ASSIGNED
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: nfs-utils
Version: 7.3
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: 7.6
Assignee: Steve Dickson
QA Contact: Yongcheng Yang
URL:
Whiteboard:
Duplicates: 1294879 (view as bug list)
Depends On: 1294880 1378137 1518996 1291282 1291284 1315822 1382695
Blocks: 1363787 1444027 1294884 1415819 1518995 1518997
TreeView+ depends on / blocked
 
Reported: 2015-12-14 14:01 UTC by John Spray
Modified: 2019-07-26 21:34 UTC (History)
16 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1518995 (view as bug list)
Environment:
Last Closed:


Attachments


Links
System ID Priority Status Summary Last Updated
Red Hat Bugzilla 1294880 None None None 2019-08-07 03:37:59 UTC
Red Hat Bugzilla 1294884 None CLOSED Support for Virt-FS (via NFS + virtio-vsock) (libvirt) 2019-08-07 03:37:59 UTC

Internal Links: 1294880 1294884

Description John Spray 2015-12-14 14:01:29 UTC
Description of problem:

To enable VM guests to mount NFS filesystems over the new VSOCK socket type, patches to nfs-utils are needed[1].  Once these are upstream, we should backport them to RHEL 7.3.


1. https://github.com/stefanha/nfs-utils
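For context, the nfs-utils tree in [1] carries experimental patches teaching mount.nfs about a vsock transport. Under those patches, a guest-side mount would look roughly like the sketch below; the `proto=vsock` option and the use of the host's well-known CID (2) in place of a hostname are taken from the out-of-tree patches and may change before anything lands upstream:

```sh
# Hypothetical guest-side mount over AF_VSOCK (experimental patch syntax):
# "2" is the well-known VSOCK CID of the hypervisor host, not an IP address.
mount -t nfs -o proto=vsock,nfsvers=4.1 2:/export /mnt
```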

Comment 2 Steve Whitehouse 2015-12-15 15:21:45 UTC
This has appeared on the layered products list for the RHEL meeting. It would be good to have an estimate of the priority here... I've set it as medium until we hear otherwise.

If I've understood the github repo correctly, this is only one reasonably small patch. Obviously it's going to depend on getting this upstream.

Comment 15 Steve Whitehouse 2016-01-07 12:50:41 UTC
So it seems that there are two main things here: firstly Manila, which is basically a way to manage storage with RHEL OSP, and then a specific requirement for an additional NFS feature, in order to fill in one of the use cases.

My first thought is that Manila should be accessing the storage via Project Springfield. There are many common goals, judging by the info at the end of the link given in comment #14, and it doesn't make sense to do this kind of thing twice over. Do you know if the Manila team are aware of, and in communication with, the Project Springfield team?

With regards to the request for which this bug was opened, we can certainly look into this, but we need clear requirements. Adding a new transport is likely to increase the testing matrix considerably, which is why I'm keen to ensure that if we do this, we understand exactly the reasons and why another solution is not possible.

Is there anybody on the OSP side working on the upstreaming of the patches? Are you assuming that we would do that? If so, then we need the use case info to be clear so that we can make the argument upstream for inclusion - I doubt that the patches will progress without that.

We could also suggest using GFS2 as a way to share a filesystem across VMs. There is a current limit there of 16 nodes, however there is also some work on the cluster side to try and raise that limit. Depending on how the VMs were arranged, there may be some other issues too, but I think it's worth looking at that as a possible alternative, even if we have to rule it out for some reason in due course. We may even want both, since there are pros and cons to each possible solution.

I'm very happy to have a meeting about this, if that would help to clarify things.

Comment 17 Sage Weil 2016-01-07 15:01:29 UTC
Hi Steve-

Do you have a link for Project Springfield?  I can't find a reference.

Stefan Hajnoczi (from the qemu team) is working on upstreaming all of these changes, and Matt Benjamin has been helping things along on the NFS and Ganesha side of things.

Although Manila is the motivating product use-case for this, I think it's a bit distracting to focus on that.  The core problem is how to give a VM guest access to a file system on the host.  The original hope was to use virtfs/9p for this, but that was thoroughly shot down by both the qemu and RHEL fs teams due to code quality.  The alternative suggestion by the RHEL team, if I remember correctly, was to use a well supported protocol (NFS) instead.  (I also seem to remember reading a suggestion to use VSOCK, but I can't remember where, and I can't seem to access the filesystem-dept-list archives.)  FWIW, this is also what Christoph suggested over beers at Vault last spring.

The main reason to use VSOCK instead of plain IP is security and configuration: with IP, you have to attach and configure network interfaces on the VM, configure a private network on the host and guest, and set up firewall rules to ensure that the guest can only access specific services on the host.  The big problem there is that all of this requires configuration on the guest.  VSOCK, in contrast, is zero config from the guest perspective: the host just assigns the guest a network ID and walks away.
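The zero-config point can be made concrete: a VSOCK endpoint is nothing but a (CID, port) pair, with no addresses, routes, or firewall rules to set up inside the guest. A minimal sketch follows; the CID constants mirror `<linux/vm_sockets.h>`, and reusing port 2049 is an assumption carried over from NFS over TCP:

```python
# A VSOCK endpoint is just (CID, port): no IP address, netmask, routing,
# or firewall rules to configure inside the guest.
# Well-known context IDs, mirroring <linux/vm_sockets.h>:
VMADDR_CID_HYPERVISOR = 0
VMADDR_CID_HOST = 2          # always means "the host", from any guest


def nfs_server_addr(port=2049):
    """Address a guest would use to reach an NFS server on its host.

    port=2049 is an assumption: the same well-known port NFS uses over TCP.
    """
    return (VMADDR_CID_HOST, port)


# On Linux (Python 3.7+) the guest-side connection would then be:
#   s = socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM)
#   s.connect(nfs_server_addr())
print(nfs_server_addr())  # (2, 2049)
```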

Happy to set up a call to discuss this.

Comment 18 J. Bruce Fields 2016-01-07 15:35:33 UTC
(In reply to Sage Weil from comment #17)
> The main reason to use VSOCK instead of just IP is primarily around security
> and configuration: with IP, you have to attach and configure network
> interfaces to the VM, configure a private network on the host and guest, and
> set up firewall rules to ensure that the guest can only access specific
> services on the host.  The big problem there is that there is configuration
> required on the guest to set this up.  VSOCK, in contrast, is zero config
> from the guest perspective.. the host just assigns the guest an network id
> and walks away.
> 
> Happy to set up a call to discuss this.

The decision on kernel support is up to Trond and the other upstream NFS developers.  The patches have been posted once or twice, and I can't recall a response from anyone but me.

We need to ensure everyone understands the above benefits well enough to weigh them against the cost of maintaining another transport and of a bigger test matrix.

Comment 19 Sayan Saha 2016-01-07 19:20:41 UTC
We absolutely need this to enable CephFS integration for OSP Manila. This is the plumbing that's necessary to create the most secure and reliable Manila back-end using CephFS.

Comment 20 Matt Benjamin (redhat) 2016-01-07 20:26:51 UTC
Thanks for the feedback, Bruce.

Comment 21 Matt Benjamin (redhat) 2016-01-08 14:26:00 UTC
(In reply to J. Bruce Fields from comment #18)
> (In reply to Sage Weil from comment #17)
> > The main reason to use VSOCK instead of just IP is primarily around security
> > and configuration: with IP, you have to attach and configure network
> > interfaces to the VM, configure a private network on the host and guest, and
> > set up firewall rules to ensure that the guest can only access specific
> > services on the host.  The big problem there is that there is configuration
> > required on the guest to set this up.  VSOCK, in contrast, is zero config
> > from the guest perspective.. the host just assigns the guest an network id
> > and walks away.
> > 
> > Happy to set up a call to discuss this.
> 
> The decision on kernel support is up to Trond and the other upstream NFS
> developers.  The patches have been posted once or twice, and I can't recall
> a response from anyone but me.
> 
> We need to ensure everyone understands the above benefits well enough to
> weigh them against the cost of maintaining another transport and of a bigger
> test matrix.

My intuition is that this feature has more in common with proposed Linux networking infrastructure changes like containerization, where NFS integration never seemed controversial.

It's almost an exaggeration to call VSOCK a new transport, since if we restrict things to cli-cots-ord (what I did for Ganesha), there's almost no behavioral difference from NFS over TCP.

Perhaps it would be helpful to widen the conversation to more Red Hat kernel networking folks for feedback?

Matt

Comment 22 Steve Whitehouse 2016-01-12 14:08:16 UTC
Well, I think it would be a good plan to include the networking team in this too. There may be no behavioural difference between this and TCP, however it will increase the test matrix, and without some use cases, we won't know what to test in order to ensure that it does the intended job. So we just want to be very clear on the reasons for doing it, in order to ensure that we deliver the right solution in due course.

Regarding Project Springfield, it does need to be more visible. I'll follow up again shortly on that side of things. It is an internal project and has focused largely on just the storage end of things thus far, but it was always intended to cover filesystems too, and with several groups around Red Hat who all need something similar, I think it makes sense to combine efforts to make best use of resources.

Comment 35 Ademar Reis 2016-03-15 13:52:34 UTC
*** Bug 1294879 has been marked as a duplicate of this bug. ***

Comment 36 Ademar Reis 2016-03-15 13:59:26 UTC
NFS support for virtio-vsock has plenty of dependencies. Although we're making good progress upstream, I don't think anybody should expect it to be part of RHEL-7.3, at least not in fully-supported form.

I'm adding all dependencies to this BZ. Unfortunately I was tracking a different one, which I've now closed as a duplicate.

I'm also changing the BZ title to remove the RHEL-7.3 reference and make it more consistent with the other BZs tracking the dependencies.

