Bug 1510037
| Summary: | Add a command to dump vhost-user internal details. | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Flavio Leitner <fleitner> |
| Component: | openvswitch | Assignee: | Flavio Leitner <fleitner> |
| Status: | CLOSED ERRATA | QA Contact: | Christian Trautman <ctrautma> |
| Severity: | medium | Docs Contact: | |
| Priority: | high | | |
| Version: | 7.4 | CC: | atheurer, atragler, ctrautma, fleitner, pvauter, qding, tli, tredaelli |
| Target Milestone: | rc | Keywords: | FutureFeature, RFE |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | openvswitch-2.9.0-1.el7fdp | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2018-03-19 10:22:13 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1475436 | | |
Description
Flavio Leitner
2017-11-06 15:08:20 UTC

* Patch posted here: https://mail.openvswitch.org/pipermail/ovs-dev/2017-November/340596.html
* Test RPM with the patch applied: http://file.rdu.redhat.com/fleitner/vhu-show-info/

Example of output:
```
[root@wsfd-netdev78 ~]# ovs-appctl vhostuser/show
vhost-user port: vm-rhel-1
  Mode: client
  Socket: /tmp/openvswitch/vm-rhel-1
  Status: Connected
  Negotiated features: 0x150208182
  NUMA: 0
  Number of vrings: 2
  Vring 0:
    Descriptor length: 4096
    Ring size: 256
  Vring 1:
    Descriptor length: 32
    Ring size: 256
vhost-user port: vm-rhel-2
  Mode: client
  Socket: /tmp/openvswitch/vm-rhel-2
  Status: Connected
  Negotiated features: 0x150208182
  NUMA: 0
  Number of vrings: 2
  Vring 0:
    Descriptor length: 4096
    Ring size: 256
  Vring 1:
    Descriptor length: 32
    Ring size: 256
vhost-user port: vhu666
  Mode: client
  Status: Disconnected
```
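Output in this per-port format is easy to post-process. As a minimal sketch, the snippet below pulls out the name of every disconnected port; the sample text is a trimmed copy of the output above, and in practice you would pipe the live `ovs-appctl vhostuser/show` output into the same `awk` instead.

```shell
# Trimmed sample of `ovs-appctl vhostuser/show` output (copied from above);
# replace the variable with the live command output in real use.
show_output='vhost-user port: vm-rhel-1
  Mode: client
  Status: Connected
vhost-user port: vm-rhel-2
  Mode: client
  Status: Connected
vhost-user port: vhu666
  Mode: client
  Status: Disconnected'

# Remember the most recent port name; print it when its Status line
# reports Disconnected.
printf '%s\n' "$show_output" |
awk '/^vhost-user port:/ { port = $3 }
     /Status: Disconnected/ { print port }'
# prints: vhu666
```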
```
# ovs-vsctl show
d2dd7d0a-877b-4838-9b33-8ae129b328ee
    Bridge "ovsbr0"
        Port "ovsbr0"
            Interface "ovsbr0"
                type: internal
        Port "vhu666"
            Interface "vhu666"
                type: dpdkvhostuserclient
                options: {vhost-server-path="/tmp/openvswitch.vhu666"}
        Port "vm-rhel-2"
            Interface "vm-rhel-2"
                type: dpdkvhostuserclient
                options: {vhost-server-path="/tmp/openvswitch/vm-rhel-2"}
        Port "vm-rhel-1"
            Interface "vm-rhel-1"
                type: dpdkvhostuserclient
                options: {vhost-server-path="/tmp/openvswitch/vm-rhel-1"}
    ovs_version: "2.8.90"
```
Question... I have two vhost-user ports connected to a guest, yet only one provides valid info; the other reports as disconnected. Is this normal?
```
[root@netqe16 ~]# ovs-vsctl get Interface vhost1 status
{mode=server, status=disconnected}
[root@netqe16 ~]# ovs-vsctl get Interface vhost0 status
{features="0x0000000050008000", mode=server, num_of_vrings="2", numa="1", socket="/var/run/openvswitch/vhost0", status=connected, "vring_0_size"="256", "vring_1_size"="256"}
```
```
[root@netqe16 ~]# ovs-vsctl show
89c1caa3-1162-4847-bdd9-7a861581c665
    Bridge "ovsbr0"
        Port "dpdk0"
            Interface "dpdk0"
                type: dpdk
                options: {dpdk-devargs="0000:84:00.0"}
        Port "vhost1"
            Interface "vhost1"
                type: dpdkvhostuser
        Port "ovsbr0"
            Interface "ovsbr0"
                type: internal
        Port "vhost0"
            Interface "vhost0"
                type: dpdkvhostuser
        Port "dpdk1"
            Interface "dpdk1"
                type: dpdk
                options: {dpdk-devargs="0000:84:00.1"}
    ovs_version: "2.9.0"
```
```
[root@netqe16 ~]# virsh console guest30032
Connected to domain guest30032
Escape character is ^]

testpmd> show port info all

********************* Infos for port 0 *********************
MAC address: 52:54:00:11:8F:E9
Driver name: net_virtio
Connect to socket: 0
memory allocation on the socket: 0
Link status: up
Link speed: 10000 Mbps
Link duplex: full-duplex
MTU: 1500
Promiscuous mode: enabled
Allmulticast mode: disabled
Maximum number of MAC addresses: 64
Maximum number of MAC addresses of hash filtering: 0
VLAN offload:
  strip off
  filter off
  qinq(extend) off
No flow type is supported.
Max possible RX queues: 1
Max possible number of RXDs per queue: 65535
Min possible number of RXDs per queue: 0
RXDs number alignment: 1
Max possible TX queues: 1
Max possible number of TXDs per queue: 65535
Min possible number of TXDs per queue: 0
TXDs number alignment: 1

********************* Infos for port 1 *********************
MAC address: 52:54:00:11:8F:E8
Driver name: net_virtio
Connect to socket: 0
memory allocation on the socket: 0
Link status: up
Link speed: 10000 Mbps
Link duplex: full-duplex
MTU: 1500
Promiscuous mode: enabled
Allmulticast mode: disabled
Maximum number of MAC addresses: 64
Maximum number of MAC addresses of hash filtering: 0
VLAN offload:
  strip off
  filter off
  qinq(extend) off
No flow type is supported.
Max possible RX queues: 1
Max possible number of RXDs per queue: 65535
Min possible number of RXDs per queue: 0
RXDs number alignment: 1
Max possible TX queues: 1
Max possible number of TXDs per queue: 65535
Min possible number of TXDs per queue: 0
TXDs number alignment: 1
testpmd>
```
(In reply to Christian Trautman from comment #8)
> Question... I have two vhost user ports connected to a guest. Yet only one
> will provide valid info. The other is reporting as disconnected. Is this
> normal??

No, it is not. Could you skip testpmd inside the guest and use plain kernel interfaces instead? Just bring them up and see if the host side is okay again.

Tested without binding inside the guest, and both ports showed their status correctly:
```
[root@netqe16 ~]# ovs-vsctl get Interface vhost1 status
{features="0x0000000050208182", mode=server, num_of_vrings="2", numa="1", socket="/var/run/openvswitch/vhost1", status=connected, "vring_0_size"="256", "vring_1_size"="256"}
[root@netqe16 ~]# ovs-vsctl get Interface vhost0 status
{features="0x0000000050208182", mode=server, num_of_vrings="2", numa="1", socket="/var/run/openvswitch/vhost0", status=connected, "vring_0_size"="256", "vring_1_size"="256"}
[root@netqe16 ~]#
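The `features` values reported above are virtio/vhost feature bitmasks. A minimal sketch of decoding one follows; the bit positions are taken from the virtio 1.0 specification and the vhost-user protocol (only the bits present in these particular values are listed), and `decode_features` is a hypothetical helper, not part of OVS.

```shell
features=0x0000000050208182  # value reported for vhost0/vhost1 above

# Hypothetical helper: print the name of each feature bit that is set.
# Bit positions per the virtio 1.0 spec / vhost-user protocol (partial list).
decode_features() {
  f=$1
  for entry in 1:VIRTIO_NET_F_GUEST_CSUM 7:VIRTIO_NET_F_GUEST_TSO4 \
               8:VIRTIO_NET_F_GUEST_TSO6 15:VIRTIO_NET_F_MRG_RXBUF \
               21:VIRTIO_NET_F_GUEST_ANNOUNCE 28:VIRTIO_RING_F_INDIRECT_DESC \
               30:VHOST_USER_F_PROTOCOL_FEATURES; do
    bit=${entry%%:*}
    if [ $(( (f >> bit) & 1 )) -eq 1 ]; then
      printf '%s\n' "${entry#*:}"
    fi
  done
}

decode_features "$features"  # one feature name per set bit
```

Run against the earlier testpmd-bound value `0x0000000050008000`, the same helper shows only `VIRTIO_NET_F_MRG_RXBUF`, `VIRTIO_RING_F_INDIRECT_DESC`, and `VHOST_USER_F_PROTOCOL_FEATURES` set, i.e. none of the guest offload bits were negotiated.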
So the issue above is just something with TestPMD and is not relevant to this bug. I'll look into the TestPMD issue separately when time permits.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:0550