Bug 1291286
| Summary: | NFS client support for virtio-vsock | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | John Spray <john.spray> |
| Component: | nfs-utils | Assignee: | Steve Dickson <steved> |
| Status: | CLOSED WONTFIX | QA Contact: | Yongcheng Yang <yoyang> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 7.3 | CC: | areis, bcodding, bfields, coughlan, gfarnum, hannsj_uhl, jiyin, mbenjamin, mtessun, scohen, sgordon, stefanha, steved, swhiteho, xzhou, yoyang |
| Target Milestone: | rc | Keywords: | FutureFeature |
| Target Release: | 7.6 | | |
| Hardware: | All | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | | |
| : | 1518995 (view as bug list) | Environment: | |
| Last Closed: | 2020-02-07 22:28:49 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1291282, 1291284, 1294880, 1315822, 1378137, 1382695, 1518996 | | |
| Bug Blocks: | 1294884, 1363787, 1415819, 1444027, 1518995, 1518997 | | |
Description
John Spray
2015-12-14 14:01:29 UTC
This has appeared on the layered products list for the RHEL meeting. It would be good to have an estimate of the priority here... I've set it as medium until we hear otherwise. If I've understood the GitHub repo, this is only one reasonably small patch. Obviously this is going to depend on getting it upstream.

So it seems that there are two main things here. Firstly Manila, which is basically a way to manage storage with RHEL OSP, and then a specific requirement for an additional NFS feature, in order to fill in one of the use cases. My first thought is that Manila should be accessing the storage via Project Springfield. There are many common goals, judging by the info at the end of the link given in comment #14, and it doesn't make sense to do this kind of thing twice over. Do you know if the Manila team are aware of, and in communication with, the Project Springfield team?

With regards to the request for which this bug was opened, we can certainly look into this, but we need clear requirements. Adding a new transport is likely to increase the testing matrix considerably, so I'm keen to ensure that if we do this, we understand exactly the reasons and why another solution is not possible. Is there anybody on the OSP side working on the upstreaming of the patches? Are you assuming that we would do that? If so, then we need the use-case info to be clear so that we can make the argument upstream for inclusion; I doubt that the patches will progress without that.

We could also suggest using GFS2 as a way to share a filesystem across VMs. There is a current limit there of 16 nodes, but there is also some work on the cluster side to try to raise that limit. Depending on how the VMs are arranged, there may be some other issues too, but I think it's worth looking at as a possible alternative, even if we have to rule it out for some reason in due course. We may even want both, since there are pros and cons to each possible solution.
I'm very happy to have a meeting about this, if that would help to clarify things.

Hi Steve, do you have a link for Project Springfield? I can't find a reference. Stefan Hajnoczi (from the QEMU team) is working on upstreaming all of these changes, and Matt Benjamin has been helping things along on the NFS and Ganesha side of things.

Although Manila is the motivating product use case for this, I think it's a bit distracting to focus on that. The core problem is how to give a VM guest access to a file system on the host. The original hope was to use virtfs/9p for this, but that was thoroughly shot down by both the QEMU and RHEL fs teams due to code quality. The alternative suggestion by the RHEL team, if I remember correctly, was to use a well-supported protocol (NFS) instead. (I also seem to remember reading a suggestion to use VSOCK, but I can't remember where, and I can't seem to access the filesystem-dept-list archives.) FWIW, this is also what Christoph suggested over beers at Vault last spring.

The main reason to use VSOCK instead of just IP is primarily around security and configuration: with IP, you have to attach and configure network interfaces to the VM, configure a private network on the host and guest, and set up firewall rules to ensure that the guest can only access specific services on the host. The big problem there is that configuration is required on the guest to set this up. VSOCK, in contrast, is zero-config from the guest's perspective: the host just assigns the guest a network ID and walks away.

Happy to set up a call to discuss this.

(In reply to Sage Weil from comment #17)
> The main reason to use VSOCK instead of just IP is primarily around security
> and configuration: with IP, you have to attach and configure network
> interfaces to the VM, configure a private network on the host and guest, and
> set up firewall rules to ensure that the guest can only access specific
> services on the host. The big problem there is that configuration is
> required on the guest to set this up. VSOCK, in contrast, is zero-config
> from the guest's perspective: the host just assigns the guest a network ID
> and walks away.
>
> Happy to set up a call to discuss this.

The decision on kernel support is up to Trond and the other upstream NFS developers. The patches have been posted once or twice, and I can't recall a response from anyone but me. We need to ensure everyone understands the above benefits well enough to weigh them against the cost of maintaining another transport and of a bigger test matrix.

We absolutely need this to enable CephFS integration for OSP Manila. This is the plumbing that's necessary to create the most secure and reliable Manila back-end using CephFS. Thanks for the feedback, Bruce.

(In reply to J. Bruce Fields from comment #18)
> (In reply to Sage Weil from comment #17)
> > The main reason to use VSOCK instead of just IP is primarily around
> > security and configuration: with IP, you have to attach and configure
> > network interfaces to the VM, configure a private network on the host and
> > guest, and set up firewall rules to ensure that the guest can only access
> > specific services on the host. The big problem there is that configuration
> > is required on the guest to set this up. VSOCK, in contrast, is
> > zero-config from the guest's perspective: the host just assigns the guest
> > a network ID and walks away.
> >
> > Happy to set up a call to discuss this.
>
> The decision on kernel support is up to Trond and the other upstream NFS
> developers. The patches have been posted once or twice, and I can't recall
> a response from anyone but me.
>
> We need to ensure everyone understands the above benefits well enough to
> weigh them against the cost of maintaining another transport and of a
> bigger test matrix.
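The zero-config argument comes down to how vsock endpoints are addressed. A minimal sketch, using the Linux AF_VSOCK support that Python's standard `socket` module exposes (the port number and helper name here are illustrative, not from this bug's patches):

```python
import socket

# A vsock endpoint is a (CID, port) pair: there is no IP address, netmask,
# routing table, or firewall rule to configure inside the guest.
# VMADDR_CID_HOST is the well-known CID of the host/hypervisor from
# <linux/vm_sockets.h>; the getattr fallback covers Python builds that
# lack the constant.
VMADDR_CID_HOST = getattr(socket, "VMADDR_CID_HOST", 2)  # host is always CID 2
NFS_PORT = 2049  # conventional NFS port number, reused here for illustration

def host_nfs_address():
    """(CID, port) a guest would dial to reach an NFS server on its host."""
    return (VMADDR_CID_HOST, NFS_PORT)

# Actually connecting requires a vsock-capable kernel inside a VM:
#   s = socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM)
#   s.connect(host_nfs_address())
print(host_nfs_address())  # -> (2, 2049)
```

Each guest is assigned its own CID by the host, so the guest side needs no network configuration at all, which is the point Sage makes above.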
My intuition is that this feature has more in common with (here, proposed) Linux networking infrastructure changes like containerization, where NFS integration never seemed controversial. It's almost an exaggeration to call VSOCK a new transport, since if we restrict things to cli-cots-ord (what I did for Ganesha), there's almost no behavioral difference from NFS over TCP. Perhaps it would be helpful to widen the conversation to more Red Hat kernel networking folks for feedback?

Matt

Well, I think it would be a good plan to include the networking team in this too. There may be no behavioural difference between this and TCP, but it will increase the test matrix, and without some use cases we won't know what to test in order to ensure that it does the intended job. So we just want to be very clear on the reasons for doing it, in order to ensure that we deliver the right solution in due course.

Regarding Project Springfield, it does need to be more visible. I'll follow up again shortly on that side of things. It is an internal project and has focused largely on just the storage end of things thus far, but it was always intended to cover filesystems too, and with several groups around Red Hat who all need something similar, I think it makes sense to combine efforts to make the best use of resources.

*** Bug 1294879 has been marked as a duplicate of this bug. ***

NFS support for virtio-vsock has plenty of dependencies. Although we're making good progress upstream, I don't think anybody should expect it to be part of RHEL 7.3, at least not in fully supported form. I'm adding all dependencies to this BZ. Unfortunately I was tracking a different one, which I've now closed as a duplicate. I'm also changing the BZ title to remove the RHEL 7.3 reference and make it more consistent with the other BZs tracking the dependencies.
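Matt's point that a connection-oriented, ordered vsock stream (cli-cots-ord) behaves like TCP can be seen at the socket-API level: only the address family and address tuple differ. A hedged sketch, assuming Python's stdlib AF_VSOCK support on Linux (the actual NFS code is in the kernel, not userspace; this just illustrates the API symmetry):

```python
import socket

def open_listeners(port=2049):
    """Create a TCP listener and, where supported, its vsock twin.

    The two code paths are line-for-line parallel: SOCK_STREAM semantics
    (connection-oriented, ordered, reliable) are identical, which is why
    restricting to cli-cots-ord leaves almost no behavioral difference
    from NFS over TCP. Returns True on success.
    """
    tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    tcp.bind(("127.0.0.1", 0))  # port 0: let the kernel pick a free port
    tcp.listen()
    tcp.close()

    if hasattr(socket, "AF_VSOCK"):
        try:
            vsk = socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM)
            # vsk.bind((socket.VMADDR_CID_ANY, port)); vsk.listen()
            vsk.close()
        except OSError:
            pass  # kernel lacks a loaded vsock transport (e.g. bare metal)
    return True
```

The vsock `bind`/`listen` calls are commented out because they require a vsock transport module in the kernel; the structural parallel with the TCP path is the point.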