Bug 1291851 - support for virtio-vsock - libvirt
Summary: support for virtio-vsock - libvirt
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.3
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: unspecified
Target Milestone: rc
Target Release: 7.6
Assignee: Ján Tomko
QA Contact: yafu
URL:
Whiteboard:
Depends On: 1291282 1291284 1315822 1378137 1382695 1584011 1591105
Blocks: 1294884 1363787 1444027 1518997 1558125
 
Reported: 2015-12-15 17:39 UTC by Ademar Reis
Modified: 2019-04-15 03:57 UTC
CC List: 20 users

Fixed In Version: libvirt-4.5.0-1.el7
Doc Type: Enhancement
Doc Text:
Clone Of: 1291284
Environment:
Last Closed: 2018-10-30 09:49:43 UTC
Target Upstream Version:
Embargoed:



Description Ademar Reis 2015-12-15 17:39:58 UTC
+++ This bug was initially created as a clone of Bug #1291284 +++

Description of problem:

To enable VSOCK support in 7.3, we will need the userspace qemu changes that correspond to the kernel changes in https://bugzilla.redhat.com/show_bug.cgi?id=1291282

These are currently a work in progress [1] and should be picked up from upstream QEMU once merged.

1. https://github.com/stefanha/qemu/tree/vsock

Comment 3 Stefan Hajnoczi 2016-09-21 14:55:52 UTC
Please note that virtio-vsock is now available in upstream Linux and QEMU.  If you have any questions about the feature, please let me know so we can discuss it.

Basic "getting started" information is available here: http://qemu-project.org/Features/VirtioVsock

Comment 4 Stefan Hajnoczi 2016-10-20 13:26:10 UTC
There is now a copr repository so you can install the kernel, qemu-kvm, and nc-vsock utility on Fedora 24:
https://copr.fedorainfracloud.org/coprs/stefanha/vsock/

Note that the qemu-ga (guest agent) in these RPMs has AF_VSOCK support.  Use qemu-ga -m vsock-listen -p 3:1234 to listen on port 1234 (assuming the guest CID is 3 on QEMU's command-line -device vhost-vsock-pci,guest-cid=3 option).
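
For anyone experimenting with this outside of qemu-ga, the host side can talk to such a listener directly through the sockets API. A minimal, illustrative C client (sketch only; it assumes guest CID 3 and port 1234 as in the example above, with something like nc-vsock -l 1234 listening in the guest):

  /* Sketch only: connect from the host to a guest vsock service. */
  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>
  #include <sys/socket.h>
  #include <linux/vm_sockets.h>

  int main(void)
  {
      int fd = socket(AF_VSOCK, SOCK_STREAM, 0);
      if (fd < 0) {
          perror("socket");
          return 1;
      }

      struct sockaddr_vm addr = {
          .svm_family = AF_VSOCK,
          .svm_cid    = 3,      /* guest-cid= from the QEMU command line */
          .svm_port   = 1234,   /* port the guest service listens on */
      };

      if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
          perror("connect");
          return 1;
      }

      const char msg[] = "hello from the host\n";
      write(fd, msg, strlen(msg));
      close(fd);
      return 0;
  }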

Comment 6 Stefan Hajnoczi 2017-11-09 15:47:36 UTC
Hi Ján,
I suggest the following CID allocation approach:

libvirtd instances manage a CID address range from which CIDs can be allocated automatically for guests.  This is similar to DHCP ranges and it partitions the address space, allowing some ranges to be statically assigned or even owned by other libvirtd sessions without collisions.

Users should also be able to statically assign CIDs to guests.  This is useful if a user wants a persistent CID for a specific guest.

The driver interface works as follows:

  int vhostfd = open("/dev/vhost-vsock", O_RDWR);
  uint64_t guest_cid = ...;
  ioctl(vhostfd, VHOST_VSOCK_SET_GUEST_CID, &guest_cid);

The ioctl fails with EADDRINUSE if another vhost-vsock instance already has the CID assigned.  The ioctl is idempotent and succeeds if you assign the same CID again.  It fails with EINVAL if the CID is invalid (<2 or >=UINT32_MAX).
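
As an illustration only (not libvirt's actual implementation), a standalone sketch of that allocation loop, using the VHOST_VSOCK_SET_GUEST_CID ioctl from <linux/vhost.h> and an arbitrary example CID range:

  /* Sketch only: probe CIDs in an arbitrary range until one is free. */
  #include <errno.h>
  #include <fcntl.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <sys/ioctl.h>
  #include <linux/vhost.h>

  int main(void)
  {
      int vhostfd = open("/dev/vhost-vsock", O_RDWR);
      if (vhostfd < 0) {
          perror("open /dev/vhost-vsock");  /* may need modprobe vhost_vsock on pre-4.13 kernels */
          return 1;
      }

      for (uint64_t cid = 3; cid < 64; cid++) {   /* example range only */
          if (ioctl(vhostfd, VHOST_VSOCK_SET_GUEST_CID, &cid) == 0) {
              printf("allocated guest CID %llu; pass vhostfd=%d and guest-cid=%llu to QEMU\n",
                     (unsigned long long)cid, vhostfd, (unsigned long long)cid);
              return 0;   /* keep vhostfd open and hand it to QEMU */
          }
          if (errno != EADDRINUSE) {   /* e.g. EINVAL for an out-of-range CID */
              perror("VHOST_VSOCK_SET_GUEST_CID");
              return 1;
          }
      }
      fprintf(stderr, "no free CID in the example range\n");
      return 1;
  }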

Once the CID has been set it's time to launch QEMU:

  -device vhost-vsock-pci,vhostfd=<fd>,guest-cid=<guest-cid>

(Or using device_add hotplug.)

Note that it's necessary to pass the guest CID to QEMU, but that should be easy because libvirt already knows it.

Linux 4.13 and later have a static device number assigned to /dev/vhost-vsock so that the kernel auto-loads vhost_vsock.ko when the device node is opened for the first time.  Older Linux kernels require an explicit modprobe vhost-vsock (yes, just like /dev/vhost-net used to!).

Hope this is useful info for starting the libvirt work.  Please let me know if you have questions!

Comment 8 Ján Tomko 2018-05-21 15:39:13 UTC
Initial version of upstream patches:
https://www.redhat.com/archives/libvir-list/2018-May/msg01517.html

Comment 10 Ján Tomko 2018-05-30 06:39:28 UTC
Pushed upstream as of:
commit b8b42ca036adbfaac1741c8efe389cd1403e220b
Author:     Ján Tomko <jtomko>
AuthorDate: 2018-05-22 15:57:47 +0200
Commit:     Ján Tomko <jtomko>
CommitDate: 2018-05-29 15:42:04 +0200

    qemu: add support for vhost-vsock-pci
    
    Create a new vsock endpoint by opening /dev/vhost-vsock,
    set the requested CID via ioctl (or assign a free one if auto='yes'),
    pass the file descriptor to QEMU and build the command line.
    
    https://bugzilla.redhat.com/show_bug.cgi?id=1291851
    Signed-off-by: Ján Tomko <jtomko>

git describe: v4.3.0-372-gb8b42ca036

Comment 11 Ján Tomko 2018-05-30 07:25:52 UTC
Filed a bug against selinux-policy:
https://bugzilla.redhat.com/show_bug.cgi?id=1584011

Comment 12 Ján Tomko 2018-05-30 14:58:27 UTC
Follow-up series implementing hotplug:
https://www.redhat.com/archives/libvir-list/2018-May/msg02239.html

Comment 13 Ján Tomko 2018-06-01 11:37:14 UTC
And another series changing the element name from <source cid=''> to <cid address=''>:
https://www.redhat.com/archives/libvir-list/2018-June/msg00037.html

Comment 14 Ján Tomko 2018-06-01 12:34:25 UTC
The rename is now pushed:
commit 023ea2a86938a6ecb5323a561f422c4951c8bf39
Author:     Ján Tomko <jtomko>
AuthorDate: 2018-06-01 13:22:56 +0200
Commit:     Ján Tomko <jtomko>
CommitDate: 2018-06-01 14:31:19 +0200

    conf: rename <vsock><source> to <vsock><cid>
    
    To avoid the <source> vs. <target> confusion,
    change <source auto='no' cid='3'/> to:
    <cid auto='no' address='3'/>
    
    Signed-off-by: Ján Tomko <jtomko>
    Suggested-by: Daniel P. Berrangé <berrange>
    Acked-by: Peter Krempa <pkrempa>
    Reviewed-by: Daniel P. Berrangé <berrange>

git describe: v4.4.0-rc1-12-g023ea2a869

Comment 15 Ján Tomko 2018-06-05 06:36:55 UTC
There was one more follow-up change before the release:
commit 8a7003f66944721ec391e13e65bbc5fdfdec3cea
Author:     Ján Tomko <jtomko>
CommitDate: 2018-06-04 21:42:40 +0200

    qemu: check for QEMU_CAPS_DEVICE_VHOST_VSOCK
    
    My commit b8b42ca added support for formatting the vsock
    command line without actually checking if it's supported.
    
    Add it to the per-device validation function.
    
    https://bugzilla.redhat.com/show_bug.cgi?id=1291851
    
    Reported-by: John Ferlan <jferlan>
    Signed-off-by: Ján Tomko <jtomko>
    Reviewed-by: Jiri Denemark <jdenemar>

git describe: v4.4.0-rc2-4-g8a7003f669 contains: v4.4.0~1

Backport of the hotplug series:
http://post-office.corp.redhat.com/archives/rhvirt-patches/2018-June/msg00055.html

Comment 17 yafu 2018-06-14 10:26:55 UTC
Hi, Ján,

An error is reported when I try to hotplug/coldplug a vsock device into a guest with libvirt-4.4.0-2.el7.x86_64. Could you check whether the patch http://post-office.corp.redhat.com/archives/rhvirt-patches/2018-June/msg00055.html is included in libvirt-4.4.0-2.el7.x86_64?
Thanks a lot.

Test steps:
1.Start a guest:
#virsh start iommu1

2.Prepare vsock device xml:
#cat vsock.xml
<vsock model='virtio'>
   <cid auto='yes'/>
</vsock>

3.Hotplug the vsock device to the guest:
#virsh attach-device iommu1 vsock.xml
error: Failed to attach device from /root/vsock.xml
error: Operation not supported: live attach of device 'vsock' is not supported

4.Coldplug the vsock device to the guest:
#virsh attach-device iommu1 vsock.xml --config
error: Failed to attach device from /root/vsock.xml
error: Operation not supported: persistent attach of device 'vsock' is not supported

Comment 18 Ján Tomko 2018-06-14 10:53:28 UTC
Hi,

libvirt-4.4.0-2.el7.x86_64 does not have the additional hotplug patches; it only contains what was picked up by the rebase to 4.4.0.

Sorry for the confusion.

Comment 19 yafu 2018-08-14 11:27:07 UTC
Verified with libvirt-4.5.0-6.el7.x86_64 and qemu-kvm-rhev-2.12.0-10.el7.x86_64.

Test steps:
Scenario 1: Start a guest with vsock device:
1).Load the vhost_vsock module on the host OS:
#modprobe vhost_vsock

2).Start a guest with vsock device:
#virsh start rhel7.6
Domain rhel7.6 started

#virsh dumpxml rhel7.6 | grep -A5 vsock
    <vsock model='virtio'>
      <cid auto='no' address='3'/>
      <alias name='ua-04c3388d-4e33-4023-84de-a2205c777asdfdsf'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0b' function='0x0'/>
    </vsock>

3).Check the QEMU command line:
#ps aux | grep -i cid
-device vhost-vsock-pci,id=ua-04c2decd-4e33-4023-84de-a2205sdfsfdfdsf,guest-cid=3,vhostfd=24,bus=pci.8,addr=0x0

4).Clone nc-vsock on both the guest and the host:
#git clone https://github.com/stefanha/nc-vsock.git

5).Start listening socket inside guest:
(guest)#./nc-vsock -l 1234

6).Connect to the guest CID from the host and input some characters:
(host)#./nc-vsock 3 1234
Test for vsock device

7).The same characters entered in step 6 should appear in the listening terminal started in step 5 (see the sketch after this scenario);

8).Start a guest with an automatically assigned CID (auto='yes') and repeat steps 4-7; it also works well.
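
The guest-side listener used in step 5 can also be reproduced directly against the sockets API; a minimal, illustrative C equivalent of nc-vsock -l 1234 (sketch only; accepts one connection and prints what the host sends):

  /* Sketch only: listen on vsock port 1234 inside the guest and echo
   * whatever the host sends to stdout. */
  #include <stdio.h>
  #include <unistd.h>
  #include <sys/socket.h>
  #include <linux/vm_sockets.h>

  int main(void)
  {
      int fd = socket(AF_VSOCK, SOCK_STREAM, 0);
      struct sockaddr_vm addr = {
          .svm_family = AF_VSOCK,
          .svm_cid    = VMADDR_CID_ANY,   /* accept on the guest's own CID */
          .svm_port   = 1234,
      };

      if (fd < 0 || bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 || listen(fd, 1) < 0) {
          perror("socket/bind/listen");
          return 1;
      }

      int conn = accept(fd, NULL, NULL);
      if (conn < 0) {
          perror("accept");
          return 1;
      }

      char buf[256];
      ssize_t n;
      while ((n = read(conn, buf, sizeof(buf))) > 0)
          write(STDOUT_FILENO, buf, (size_t)n);   /* characters typed on the host appear here */

      close(conn);
      close(fd);
      return 0;
  }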



Scenario 2: Start two guests with the same CID and auto='no':
1)Start a guest with cid=3 and auto='no':
#virsh start rhel7.6
Domain rhel7.6 started

#virsh dumpxml rhel7.6 | grep -A5 vsock
    <vsock model='virtio'>
      <cid auto='no' address='3'/>
      <alias name='ua-04c3388d-4e33-4023-84de-a2205c777asdfdsf'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0b' function='0x0'/>
    </vsock>

2)Start another guest with cid=3 and auto='no':
#virsh edit vm1:
<vsock model='virtio'>
      <cid auto='no' address='3'/>
      <alias name='ua-04c3388d-4e33-4023-84de-a2205c777asdfdsf'/>
</vsock>

#virsh start vm1:
error: failed to set guest cid: Address already in use


Scenario 3: Hotplug/hotunplug vsock device:
1)Prepare a vsock device xml:
#cat /xml/vsock.xml
<vsock model='virtio'>
      <cid auto='no' address='999'/>
   <alias name='ua-04csdfasfdcd-4e33-4023-84de-a2205c777asdfdsf'/>
</vsock>

2)Hotplug the vsock device to a running guest:
#virsh attach-device iommu1 /xml/vsock.xml
Device attached successfully

3)Check the live xml:
#virsh dumpxml iommu1
<vsock model='virtio'>
      <cid auto='no' address='999'/>
      <alias name='ua-04c2decd-4e33-4023-84de-a2205sdfsfdfdsf'/>
      <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
</vsock>

4)Check the vsock device in the guest OS:
(guest)#lspci
04:00.0 Communication controller: Red Hat, Inc. Device 1053 (rev 01)

5)Hotunplug the vsock device:
#virsh detach-device iommu1 /xml/vsock.xml

6)Check the live XML; the vsock device is no longer present;

7)Check the devices in the guest OS; no vsock device found;

8)Repeat steps 2-7 with auto='yes'; it also works well.


Also tested coldplug/coldunplug of the vsock device, starting a guest with two vsock devices, editing a vsock device with an invalid CID, migration with a vsock device, and starting a guest without the vhost_vsock module on the host OS; all results are as expected.

Comment 21 errata-xmlrpc 2018-10-30 09:49:43 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:3113

