Bug 1855789 - RFE: support for externally launched virtiofsd
Summary: RFE: support for externally launched virtiofsd
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux Advanced Virtualization
Classification: Red Hat
Component: libvirt
Version: 8.2
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: medium
Target Milestone: rc
Target Release: 8.3
Assignee: Ján Tomko
QA Contact: yafu
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-07-10 13:53 UTC by Cole Robinson
Modified: 2021-12-06 07:41 UTC
CC List: 17 users

Fixed In Version: libvirt-7.4.0-1.el8
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-11-16 07:50:50 UTC
Type: Feature Request
Target Upstream Version: 7.4.0
Embargoed:




Links
Red Hat Product Errata RHBA-2021:4684 (last updated 2021-11-16 07:51:20 UTC)

Description Cole Robinson 2020-07-10 13:53:51 UTC
From: https://bugzilla.redhat.com/show_bug.cgi?id=1854595#c5

Roman Mohr from comment #5:
> By the way, in our case it would make a lot of sense to also run virtiofsd
> standalone in a separate container and only share the socket with the
> container where libvirt and qemu are running. Would it make sense to allow
> libvirt to connect qemu to a prestarted virtiofsd? While I don't think it is
> too critical from a security perspective, as written before, it would harden
> the virtiofsd setup a little bit more if it did not have to do its own
> unsharing. If we can run it in a separate container, it would run in
> completely separate namespaces, except for the networking namespace.

We already have similar precedent with vhost-user net:

  <interface type='vhostuser'>
    <mac address='52:54:00:3b:83:1a'/>
    <source type='unix' path='/tmp/vhost1.sock' mode='server'/>
    <model type='virtio'/>
  </interface>

We should consider adding the same for virtiofs.
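
For illustration, a virtiofs counterpart could reuse the existing <filesystem> element with a socket source instead of a shared-directory source. A rough sketch follows (the socket path and mount tag are placeholders; the element and attribute names mirror the syntax that eventually landed, see comments 5 and 9):

  <filesystem type='mount'>
    <driver type='virtiofs' queue='1024'/>
    <source socket='/var/run/virtiofsd.sock'/>
    <target dir='mount_tag1'/>
  </filesystem>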

Comment 2 Roman Mohr 2020-07-21 11:03:33 UTC
This would be very helpful for CNV. It would allow us to run the server in its own mount namespace without needing the extra privileges on the user side that virtiofsd currently requires to perform such a transition on its own (which is what it does right now). We would basically start virtiofsd in our Pod with the VM, in its own completely unprivileged namespace where only the mounts we want to pass through are present.

Comment 4 Ján Tomko 2021-03-31 18:56:35 UTC
Proposed upstream patches:
https://listman.redhat.com/archives/libvir-list/2021-March/msg01461.html

Comment 5 Ján Tomko 2021-04-27 17:15:49 UTC
Merged as of:
commit eacf8978e94f73535676f40c70d4be8058d6b839
Author:     Ján Tomko <jtomko>
CommitDate: 2021-04-27 19:08:09 +0200

    docs: virtiofs: add section about externally-launched virtiofsd
    
    Provide an example in a place more visible than formatdomain.html.
    
    Signed-off-by: Ján Tomko <jtomko>
    Reviewed-by: Jonathon Jongsma <jjongsma>

git describe: v7.3.0-rc1-5-geacf8978e9

See https://libvirt.org/kbase/virtiofs.html#externally-launched-virtiofsd for an example.
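
For reference, the external side of that example is simply starting virtiofsd before the guest, so that the socket named in <source socket='...'/> already exists and is accessible to qemu. A minimal sketch, with the socket path and shared directory borrowed from the test steps later in this bug:

  # /usr/libexec/virtiofsd --socket-path=/var/run/vm001-vhost-fs.sock -o source=/var/lib/fs/vm001
  # chown qemu:qemu /var/run/vm001-vhost-fs.sock

The chown mirrors step 2 of comment 14: with an externally-launched virtiofsd, the caller rather than libvirt is responsible for making the socket connectable by the qemu process.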

Comment 9 yafu 2021-05-20 02:38:05 UTC
Hi Ján,

I tested this bug with libvirt-daemon-7.3.0-1.module+el8.5.0+11004+f4810536.x86_64, and the guest failed to start when using an externally-launched virtiofsd. The error is "Unable to find a satisfying virtiofsd".

I also tested this bug with an upstream build before and it works well. The upstream version I used is:
# git describe 
v7.2.0-365-g509d9b5b9f

Test steps:
1. Edit the guest to add an externally-launched virtiofsd filesystem:
#virsh edit vm1
...
<filesystem type='mount'>
      <driver type='virtiofs' queue='1024'/>
      <source socket='/var/run/virtiofsd.sock'/>
      <target dir='mount_tag1'/>
      <address type='pci' domain='0x0000' bus='0x0a' slot='0x00' function='0x0'/>
    </filesystem>
...

2. Start the guest:
# virsh start vm1
error: Failed to start domain 'vm1'
error: operation failed: Unable to find a satisfying virtiofsd

3. In step 2, it should report an error such as: "Failed to connect to '/var/run/virtiofsd.sock': No such file or directory".

4. After starting a virtiofsd process manually and changing the ownership of the socket file, it still reported the error "Unable to find a satisfying virtiofsd" when trying to start the guest.

Would you help to check it please? Thanks.

Comment 10 Ján Tomko 2021-05-20 09:13:14 UTC
It works with the upstream version, because qemu there uses the correct path for the vhost-user json files:
https://bugzilla.redhat.com/show_bug.cgi?id=1804196

However, libvirt should not even be looking for binary paths if it's not going to need them.
Patch sent upstream:
https://listman.redhat.com/archives/libvir-list/2021-May/msg00585.html
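
For context on where the "Unable to find a satisfying virtiofsd" message comes from: when no <binary> is given in the <filesystem> element, libvirt picks a virtiofsd binary from QEMU's vhost-user description files, small JSON files shipped by qemu (on a typical RHEL install something like /usr/share/qemu/vhost-user/50-qemu-virtiofsd.json; the path and contents below are an illustrative assumption, not taken from this bug):

  # cat /usr/share/qemu/vhost-user/50-qemu-virtiofsd.json
  {
    "description": "QEMU virtiofsd vhost-user-fs",
    "type": "fs",
    "binary": "/usr/libexec/virtiofsd"
  }

With the patch linked above, libvirt skips this lookup entirely whenever the domain XML specifies <source socket='...'/>, since it will not be launching any binary itself.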

Comment 12 Ján Tomko 2021-05-27 08:24:13 UTC
Pushed upstream as:
commit 015fe0439f0592ca0b0274b306258a1e7aafe43c
Author:     Ján Tomko <jtomko>
CommitDate: 2021-05-20 16:27:21 +0200

    qemu: fs: do not try to fill binary path if we have a socket
    
    We do not need to look for a suitable binary in the vhost-user
    description files, if we aren't the ones starting it.
    Otherwise startup will fail with:
    
    error: Failed to start domain 'vm1'
    error: operation failed: Unable to find a satisfying virtiofsd
    
    https://bugzilla.redhat.com/show_bug.cgi?id=1855789
    
    Signed-off-by: Ján Tomko <jtomko>
    Reviewed-by: Michal Privoznik <mprivozn>

git describe: v7.3.0-245-g015fe0439f contains: v7.4.0-rc1~68

Comment 14 yafu 2021-06-08 02:26:17 UTC
Test with libvirt-daemon-7.4.0-1.module+el8.5.0+11218+83343022.x86_64.

Test steps:
1. Start a virtiofsd process:
#/usr/libexec/virtiofsd --socket-path=/var/run/vm001-vhost-fs.sock -o source=/var/lib/fs/vm001

2. Change the ownership of the socket file:
#chown qemu:qemu /var/run/vm001-vhost-fs.sock

3. Workaround for the blocker bug (Bug 1966842 - SELinux is preventing virtlogd from 'read, append' accesses on the file system.token):
#setenforce 0
#systemctl restart libvirtd
#systemctl restart virtlogd

4. Edit the guest to add an externally-launched virtiofsd filesystem:
#virsh edit vm1
...
<filesystem type='mount'>
      <driver type='virtiofs' queue='1024'/>
      <source socket='/var/run/vm001-vhost-fs.sock'/>
      <target dir='mount_tag1'/>
      <address type='pci' domain='0x0000' bus='0x0a' slot='0x00' function='0x0'/>
    </filesystem>
...

5. Start the guest:
#virsh start vm1
Domain 'vm1' started

6. Log in to the guest OS and mount the dir:
(guest os)#mount -t virtiofs mount_tag1 /mnt
(guest os)#cd /mnt && touch testfile

7. Editing guest vm1 with a non-existent virtiofs socket, or starting the guest with a socket file it has no permission to access, both report the correct error info.

Comment 15 yafu 2021-06-22 08:57:13 UTC
Verified with libvirt-daemon-7.4.0-1.module+el8.5.0+11218+83343022.x86_64.

Test steps:
1. Set virtd_exec_t on virtiofsd:
   #chcon -t virtd_exec_t /usr/libexec/virtiofsd

2. Create the shared dir:
  #mkdir -p /var/lib/fs/vm001

3. Run virtiofsd using systemd-run:
  #systemd-run /usr/libexec/virtiofsd --socket-path=/vm001-vhost-fs.sock -o source=/var/lib/fs/vm001
   Running as unit: run-r8ac82b1258df4c208ee74f7be3f00f7a.service

4. Relabel the created socket:
  #chcon -t svirt_image_t /vm001-vhost-fs.sock

5. Change the ownership of the socket file:
  #chown qemu:qemu /vm001-vhost-fs.sock

6. Edit the guest to add an externally-launched virtiofsd filesystem:
...
<filesystem type='mount'>
      <driver type='virtiofs' queue='1024'/>
      <source socket='/vm001-vhost-fs.sock'/>
      <target dir='mount_tag1'/>
    </filesystem>
...

7. Start the guest:
#virsh start vm1

8. Log in to the guest OS and mount the dir:
(guest os)#mount -t virtiofs mount_tag1 /mnt
(guest os)#cd /mnt && touch testfile
(guest os)#ls /mnt
 testfile

9. Editing guest vm1 with a non-existent virtiofs socket, or starting the guest with a socket file it has no permission to access, both report the correct error info.

10. Start a second guest with the same externally-launched virtiofsd; the expected error is reported:
# virsh start avocado-vt-vm1 
error: Failed to start domain 'avocado-vt-vm1'
error: internal error: process exited while connecting to monitor: 2021-06-22T08:56:20.699922Z qemu-kvm: -chardev socket,id=chr-vu-fs0,path=/vm001-vhost-fs.sock: Failed to connect to '/vm001-vhost-fs.sock': Connection refused

Comment 17 errata-xmlrpc 2021-11-16 07:50:50 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (virt:av bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:4684

