Description of problem:
VM network link speed is shown incorrectly when a 10Gig physical NIC is installed in the server.

[root@alma04 ~]# ethtool eth0
Settings for eth0:
	Supported ports: [ FIBRE ]
	Supported link modes:   10000baseT/Full
	Supported pause frame use: No
	Supports auto-negotiation: No
	Advertised link modes:  10000baseT/Full
	Advertised pause frame use: No
	Advertised auto-negotiation: No
	Speed: 10000Mb/s
	Duplex: Full
	Port: Direct Attach Copper
	PHYAD: 0
	Transceiver: external
	Auto-negotiation: off
	Supports Wake-on: d
	Wake-on: d
	Current message level: 0x00000007 (7)
	                       drv probe link
	Link detected: yes

[root@alma04 ~]# ethtool rhevm
Settings for rhevm:
	Link detected: yes

[root@alma04 ~]# ethtool vnet23
Settings for vnet23:
	Supported ports: [ ]
	Supported link modes:   Not reported
	Supported pause frame use: No
	Supports auto-negotiation: No
	Advertised link modes:  Not reported
	Advertised pause frame use: No
	Advertised auto-negotiation: No
	Speed: 10Mb/s
	Duplex: Full
	Port: Twisted Pair
	PHYAD: 0
	Transceiver: internal
	Auto-negotiation: off
	MDI-X: Unknown
	Current message level: 0xffffffa1 (-95)
	                       drv ifup tx_err tx_queued intr tx_done rx_status pktdata hw wol 0xffff8000
	Link detected: yes

Version-Release number of selected component (if applicable):
sanlock-2.8-1.el6.x86_64
vdsm-4.16.7.6-1.el6ev.x86_64
libvirt-0.10.2-29.el6_5.12.x86_64
qemu-kvm-rhev-0.12.1.2-2.415.el6_5.14.x86_64
rhevm-3.5.0-0.22.el6ev.noarch

How reproducible:
100%

Steps to Reproduce:
1. Bring up a setup of at least 1 RHEL 6.5 host with a connected 10Gig NIC and an engine.
2. Create a VM with a vNIC and power it up.
3. Using ethtool, check the physical interface of the host and compare it with the speed reported for the vNIC; also check the vNIC speed via the WebUI. 1Gig is shown instead of 10Gig.

Actual results:
For vNICs, 1Gig is shown instead of 10Gig.

Expected results:
The vNIC speed should be shown as it is physically, just as it is shown for physical 1Gig NICs and their vNICs. Otherwise, how could a single VM running with a 1Gig vNIC possibly use all of the provided 10Gig physical NIC?

Additional info:
If checking the interface speed from the guest:

# cat /sys/class/net/ens3/speed
1000

If checking the tap device on the host, the speed is different again (10Mb/s):

# ethtool vnet29
Settings for vnet29:
	Supported ports: [ ]
	Supported link modes:   Not reported
	Supported pause frame use: No
	Supports auto-negotiation: No
	Advertised link modes:  Not reported
	Advertised pause frame use: No
	Advertised auto-negotiation: No
	Speed: 10Mb/s
	Duplex: Full
	Port: Twisted Pair
	PHYAD: 0
	Transceiver: internal
	Auto-negotiation: off
	MDI-X: Unknown
	Current message level: 0xffffffa1 (-95)
	                       drv ifup tx_err tx_queued intr tx_done rx_status pktdata hw wol 0xffff8000

Tried using both virtio and e1000; both gave the same result with a real 10Gig NIC. Please check that the driver is fully compatible with 10Gig physical interfaces.
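For quick comparison, the three numbers above (physical NIC, tap device, guest view) can be gathered together; a minimal sketch, assuming the interface names from this report (eth0 on the host, vnet29 for the tap, ens3 inside the guest):

# Host side: compare the physical NIC with the tap device backing the vNIC
for dev in eth0 vnet29; do
    echo "== $dev =="
    ethtool "$dev" | grep -E 'Speed|Duplex|Link detected'
done

# Guest side (run inside the VM): the speed the guest driver reports, in Mb/s
cat /sys/class/net/ens3/speed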
The physical NIC speed should not imply anything about a virtual NIC speed - the vNIC speed could be lower or higher; that's part of the point of virtualization... Currently the engine creates each VM interface with a default 1Gbps speed, and this can be manipulated manually.
(In reply to Lior Vernia from comment #2)
> The physical NIC speed should not imply anything about a virtual NIC speed -
> the vNIC speed could be lower or higher; that's part of the point of
> virtualization... Currently the engine creates each VM interface with a
> default 1Gbps speed, and this can be manipulated manually.

Then you should see no more than a 1Gig bit rate up/down per VM, but that is not what you see if you check the actual speed during an iperf run on a VM running on a host with a 10Gig interface. Please decide with PM on this.
The virtual interface speed is currently always reported as either 100 or 1000 Mbps. As I see it, we have three choices if we decide to "fix" it:

1. Report exactly what the kernel reports for the virtual interface, and open a bug on libvirt/kernel to set the speed smarter, i.e. move the responsibility lower down the abstraction layers.
2. Report the speed of the host interface to which the specific VM network is attached, i.e. keep the responsibility within vdsm (sketched below).
3. Have the virtual interface speed be configurable from the engine, i.e. pass the responsibility to the user.

Options 1 and 2 are vdsm fixes; option 3 will require some engine implementation as well. Dan, let us know what you think.
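A minimal sketch of what option 2 could look like on the host, assuming the VM network is a Linux bridge (e.g. the "rhevm" bridge from the report) whose physical slave exposes its speed in sysfs; the names here are illustrative, not vdsm code:

# Derive a vNIC "speed" from the host interface backing its VM network:
# walk the bridge's ports, skip the tap devices (which report a meaningless
# 10Mb/s), and read the physical slave's speed from sysfs.
BRIDGE=rhevm
for port in /sys/class/net/$BRIDGE/brif/*; do
    nic=$(basename "$port")
    case "$nic" in vnet*) continue ;; esac
    speed=$(cat /sys/class/net/$nic/speed 2>/dev/null)
    echo "VM network $BRIDGE is backed by $nic at ${speed} Mb/s"
done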
(just adding there's another option - dropping vNIC speed reporting altogether and not displaying network usage in percentage for VMs)
Nikolai, why is the vNIC speed so important?

Any report of NIC "speed" should be taken with a huge grain of salt. We are working (bug 1066570) to make the network traffic reports smarter, so they do not take these fake numbers into account.
Dan, they will still take this fake number into account, it's just the engine that'll perform the computation. We haven't been planning to drop the percentage display.
(In reply to Dan Kenigsberg from comment #6)
> Nikolai, why is the vNIC speed so important?
>
> Any report of NIC "speed" should be taken with a huge grain of salt. We are
> working (bug 1066570) to make the network traffic reports smarter, so they
> do not take these fake numbers into account.

For example, an emulated environment used for timing reports: some projects might use the virtualization platform to scale-test their products, which will use, say, 10Mbps half-duplex interfaces per terminal because of their low price, while these terminals connect to a central server. That whole environment might be emulated on a single virtualization platform, hence speed and duplex become useful per vNIC.
I don't think the vNIC speed is actually used to limit its traffic, e.g. you could transmit 100Mbps on a 10Mbps vNIC. At least as far as the tap device goes, maybe the guest OS puts some limitations according to what it thinks the vNIC speed is (and we're not gonna interfere with the vNIC speed inside the OS). But I could be wrong.
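This is easy to verify; a rough sketch, assuming iperf is available in the guest and on an external machine reachable over the host's 10Gig link (the server address below is a placeholder):

# On the external machine (outside the host):
iperf -s

# Inside the guest, whose tap device reports 10Mb/s on the host:
iperf -c 192.0.2.10 -t 30
# The measured throughput routinely exceeds the "speed" reported by ethtool
# on the tap device, confirming the reported number is informational only.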
(In reply to Lior Vernia from comment #10)
> But I could be wrong.

You are not. The reported vNIC "speed" is an utterly fake number that has nothing to do with vNIC QoS capping (which we plan to introduce only in ovirt-3.6).
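For completeness, actual capping is a separate mechanism: libvirt already exposes per-interface traffic shaping, independent of the reported link speed, e.g. via virsh domiftune. A sketch, with the domain name, interface name and limits purely illustrative:

# Limit a vNIC to roughly 100 Mbit/s in both directions; average/peak are in
# KiB/s (12500 KiB/s is about 100 Mbit/s), burst in KiB. Names are examples.
virsh domiftune myvm vnet23 --inbound 12500,12500,1250 --outbound 12500,12500,1250 --live
# The speed reported by ethtool on vnet23 is unchanged; only throughput is capped.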
Please re-open as an RFE with an explanation of why it is important to match the reported vNIC speed with the speed of the host's external bandwidth. The only motivation I see is to make VM<->out-of-host communication less prone to exceeding 100% of its reported speed. But please note that it can still happen, and that VM<->VM communication normally exceeds the external bandwidth anyway.

Also note that comment 1 does not discuss the vNIC speed reported by Vdsm to Engine, but the speed reported inside the guest. The latter depends on the guest driver alone and has nothing to do with oVirt. For example, e1000's is 1000, and this does not cap the bandwidth that can be squeezed through it.
*** Bug 1212301 has been marked as a duplicate of this bug. ***
Changing to wontfix due to https://bugzilla.redhat.com/show_bug.cgi?id=1168478#c20.
I was looking at libvirt to accomplish some enterprise testing this week. After searching, this ticket seems to be the best embodiment of my issue, though not a perfect match.

I understand the number reported by the NIC in the VM doesn't actually matter for the bandwidth the VM is allowed to use. My use case involves testing functionality of software that makes networking decisions based on the reported NIC speed, for eventual use on physical systems, where this information would be accurate. I am not finding a way to set the QEMU virtio-net-pci "speed" attribute from libvirt. I apologize if I missed an existing XML setting - I did search and review the source.

I'd also like to point out that in cases where the VM communicates exclusively through a physical interface (macvtap, for example), it may be desirable to inform the VM of the actual speed of the NIC it is using.

libvirt makes QEMU very convenient, which is why our enterprise uses it. I know that libvirt 8.2 (which is not in el8) gave us the ability to override arbitrary QEMU properties given a device alias [1], but this WONTFIX is impacting el8 today. Since comment 20 (as well as comments 15 onwards) was deleted, I am not sure of the specific reason for the WONTFIX. Should I open a new bug (actual link speed does not matter) for the ability to set the guest-reported value, or request that this ticket be re-opened?

[1]: Below is an example XML modification that I have not yet been able to test on el8:

<domain type="kvm" xmlns:qemu="http://libvirt.org/schemas/domain/qemu/1.0">
  ...
  <devices>
    ...
    <interface type="...">
      <alias name="net0"/>
      ...
    </interface>
    ...
  </devices>
  ...
  <qemu:override>
    <qemu:device alias="net0">
      <qemu:frontend>
        <qemu:property name="speed" type="unsigned" value="1000"/>
      </qemu:frontend>
    </qemu:device>
  </qemu:override>
</domain>
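In the meantime, the frontend property can be exercised without libvirt to confirm the guest-visible effect; a rough sketch, assuming a QEMU new enough to expose the virtio-net speed/duplex properties (the binary path and disk image below are placeholders):

# Boot a throwaway guest directly with QEMU, forcing the advertised link speed;
# inside the guest, /sys/class/net/<nic>/speed should then read 10000.
/usr/libexec/qemu-kvm -m 1024 \
    -drive file=/var/lib/images/test.qcow2,if=virtio \
    -netdev user,id=n0 \
    -device virtio-net-pci,netdev=n0,speed=10000,duplex=full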