Bug 1841017
| Field | Value |
|---|---|
| Summary | RFE: Allow IP over InfiniBand (IPoIB) device to be configured |
| Product | Red Hat Enterprise Linux 8 |
| Component | nmstate |
| Version | 8.3 |
| Status | CLOSED ERRATA |
| Severity | high |
| Priority | high |
| Reporter | Dominik Holler <dholler> |
| Assignee | Gris Ge <fge> |
| QA Contact | Mingyu Shi <mshi> |
| CC | arusakov, ferferna, fge, jiji, jishi, mtessun, network-qe, till |
| Keywords | FutureFeature, Triaged |
| Target Milestone | rc |
| Target Release | 8.4 |
| Hardware | Unspecified |
| OS | Unspecified |
| Doc Type | If docs needed, set a value |
| Type | Bug |
| Bug Blocks | 1856279, 1875967, 1894575 |
| Last Closed | 2021-05-18 15:16:54 UTC |
Description (Dominik Holler, 2020-05-28 07:12:41 UTC)
Andrei, can you please share a verbal description of the InfiniBand scenario you would like to configure?

Hi Dominik,

Currently we are using oVirt 4.3 in our PROD. 1 or 10G is the DATA network and IB is the storage network (iSCSI and NFS). This part is working fine in 4.3 and 4.4.

We are using the IB network as the VM migration network; it works perfectly in 4.3. In 4.4 it is not working, as we cannot add an oVirt network to an IB adapter/bond.

IB VM migration is critical, as we have some servers with 1G, and VM migration (for example in case of a host software upgrade) can be a huge problem (with VMs of 32 GB+ it is close to impossible).

And we are still dreaming about oVirt IB pkey support for VMs (cheap, low-latency VM connections ...).

Thanks

Andrei, your feedback helps to push things in a good direction.

(In reply to Andrei from comment #5)
> Currently we are using oVirt 4.3 in our PROD. 1 or 10G is the DATA network
> and IB is the storage network (iSCSI and NFS). This part is working fine in
> 4.3 and 4.4.
>
> We are using the IB network as the VM migration network; it works perfectly
> in 4.3. In 4.4 it is not working, as we cannot add an oVirt network to an IB
> adapter/bond.
>
> IB VM migration is critical, as we have some servers with 1G, and VM
> migration (for example in case of a host software upgrade) can be a huge
> problem (with VMs of 32 GB+ it is close to impossible).

Which network configuration requires this for your example? Do you use a bond of IB devices, maybe even in combination with SR-IOV? Do you want to assign an IP address to InfiniBand devices (IPoIB)? Do you want to set the MTU explicitly? Are you using datagram mode only, or are other modes relevant, too?

> And still we are dreaming about oVirt IB pkey support for VMs (cheap,
> low-latency VM connections ...)

Is there a way to work around this by assigning an SR-IOV VF to a VM?

Hi Dominik,
> Which network configuration requires this for your example?
Sorry, didn't get this.
SR-IOV / VF could be a solution, but with a huge drawback: RDMA takes up to 4G per instance, and for small VMs that can be critical.
So we don't use them.
We use a bond (a single device with 2 ports connected to two switches) in connected mode and IPoIB (usually we assign the IP before adding the bond to an oVirt network).
Regarding MTU: we set it to 65520 during oVirt network creation. But there is a small problem: as the default IB mode is datagram, the resulting MTU is 1500 (the workaround is to adjust the bond config file).
It would be great to add IB mode (datagram / connected) selection to the GUI.
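
A minimal sketch of what such a setup could look like as an nmstate desired state once IPoIB configuration is supported: a single IPoIB device in connected mode with the large MTU and a static address. This is only an illustration; the device name (mlx5_ib0) and the address are made up, and the exact schema is whatever nmstate ends up shipping.

```yml
# Hypothetical sketch only: device name and address are made up.
- name: mlx5_ib0
  type: infiniband
  state: up
  mtu: 65520              # connected mode allows the large IPoIB MTU
  infiniband:
    mode: connected       # instead of the default datagram mode
  ipv4:
    enabled: true
    dhcp: false
    address:
    - ip: 192.0.2.10
      prefix-length: 24
```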
For nmstate, the feature patch has been posted upstream. The desired state will be:

```yml
- name: mlx5_ib1.8013
  type: infiniband
  state: up
  infiniband:
    base-iface: mlx5_ib1
    mode: datagram
    pkey: '0x8013'
```

An IP over InfiniBand interface cannot be a bridge port, as it has no Ethernet layer. An IP over InfiniBand interface can only be a bond port when the bond is in active-backup mode; this is a Linux kernel limitation.

RoCE (RDMA over Converged Ethernet) does not have the above restrictions. nmstate treats RoCE interfaces as normal Ethernet interfaces, so no code needs to be amended.

SR-IOV on InfiniBand requires a vendor-specific tool, for example `mstconfig -d 83:00.0 set SRIOV_EN=1 NUM_OF_VFS=5`. Hence nmstate cannot support configuring SR-IOV on InfiniBand yet.

(In reply to Gris Ge from comment #9)
> SR-IOV on InfiniBand requires a vendor-specific tool, for example
> `mstconfig -d 83:00.0 set SRIOV_EN=1 NUM_OF_VFS=5`.
>
> Hence nmstate cannot support configuring SR-IOV on InfiniBand yet.

Please handle SR-IOV the same way as for Ethernet, because this would enable InfiniBand NICs to be supported, e.g.
https://docs.mellanox.com/pages/viewpage.action?pageId=12013542#SingleRootIOVirtualization(SRIOV)-ConfiguringSR-IOVforConnectX-4/Connect-IB/ConnectX-5(InfiniBand)

Upstream has merged the patch. IP over InfiniBand is now supported, including pkey, SR-IOV interfaces, and bond.

Created attachment 1728293 [details]
verification
Verified with versions:
nmstate-0.4.1-2.el8.noarch
nispor-0.6.1-2.el8.x86_64
NetworkManager-1.28.0-0.1.el8.x86_64
DISTRO=RHEL-8.4.0-20201103.d.0
Linux rdma05.rhts.eng.pek2.redhat.com 4.18.0-241.el8.dt1.x86_64 #1 SMP Mon Nov 2 08:24:31 EST 2020 x86_64 x86_64 x86_64 GNU/Linux
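
As a hedged illustration of the bond limitation Gris Ge describes above (the kernel only allows IPoIB interfaces as ports of an active-backup bond), a desired state for a bonded IPoIB setup like the one Andrei runs might look roughly like the following. The interface names and the address are made up, and older nmstate releases spell the bond member list `slaves` rather than `port`.

```yml
# Hypothetical sketch only: names and address are made up.
- name: mlx5_ib0
  type: infiniband
  state: up
  infiniband:
    mode: connected
- name: mlx5_ib1
  type: infiniband
  state: up
  infiniband:
    mode: connected
- name: bond0
  type: bond
  state: up
  link-aggregation:
    mode: active-backup   # the only bond mode the kernel permits for IPoIB ports
    port:
    - mlx5_ib0
    - mlx5_ib1
  ipv4:
    enabled: true
    dhcp: false
    address:
    - ip: 192.0.2.20
      prefix-length: 24
```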
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (nmstate bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:1748