The virtio_fs.ko driver provides host<->guest shared file system support in Linux. For more information, see the project website at https://virtio-fs.gitlab.io/.

virtio-fs is required in RHEL to enable the following use cases:

1. Sharing data between the host and guest.
2. Booting from a directory tree on the host without the need for disk images.
3. Filesystem-as-a-service for Ceph, so that guests are isolated from the storage backend (no access to the storage network or to distributed storage system configuration details).
4. Access to the container rootfs for Kata Containers.
5. Access to PersistentVolumes in KubeVirt.

Existing virtio-9p domain XML syntax can be reused for virtio-fs. QEMU must be launched with:

  -device vhost-user-fs-pci,chardev=char0,tag=myfs

where char0 is the vhost-user chardev connected to the virtiofsd filesystem daemon. The virtiofsd daemon is launched like this:

  virtiofsd -o vhost_user_socket=path/to/vhost-fs.sock -o source=path/to/shared/dir -o cache=always

virtiofsd is a vhost-user device backend. It must be started as root before QEMU runs. There is one virtiofsd process per QEMU vhost-user-fs-pci device. For more details on how to run virtio-fs, see https://virtio-fs.gitlab.io/howto-qemu.html.
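For completeness: once QEMU exposes the device with the tag given above, the share is mounted from inside the guest using the virtiofs filesystem type. A minimal sketch, assuming the tag 'myfs' from the example above and an arbitrary mount point:

  # inside the guest: mount the share exported under the tag "myfs"
  mkdir -p /mnt/shared
  mount -t virtiofs myfs /mnt/shared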
Stefan, how is this requirement different from bug 1519459, besides that this one has a comprehensive comment 0? Thanks.
(In reply to Jaroslav Suchanek from comment #1)
> how is this requirement different from bug 1519459, besides that this one
> has a comprehensive comment 0?

We can reuse bz#1519459, but it seems to be referring to virtio-9p (old) instead of virtio-fs (new). This BZ is specifically about the new virtio-fs host<->guest file sharing mechanism that is currently being developed and is expected to be supportable in RHEL (virtio-9p is not!). I suggest closing the old BZ.
*** Bug 1519459 has been marked as a duplicate of this bug. ***
Hi Ján, The current code for QEMU and Linux is suitable for developing the libvirt feature even before virtio-fs is available upstream. Please see the virtio-fs website for information on how to launch it (https://virtio-fs.gitlab.io/). virtiofsd is the vhost-user device backend daemon that must run for each file system that each VM wants to access. You can read more about the architecture and security here: https://gitlab.com/virtio-fs/qemu/blob/virtio-fs-dev/contrib/virtiofsd/security.rst
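To illustrate the one-daemon-per-device model described above, here is a minimal sketch of exposing two host directories to a single guest when launching QEMU by hand. The socket paths, source directories, and tags are arbitrary placeholders; the option syntax follows comment 0 and the virtio-fs howto, and vhost-user additionally needs guest RAM backed by a shareable memory object:

  # one virtiofsd instance per shared directory (run as root, before QEMU)
  virtiofsd -o vhost_user_socket=/tmp/vhost-fs1.sock -o source=/srv/share1 -o cache=always &
  virtiofsd -o vhost_user_socket=/tmp/vhost-fs2.sock -o source=/srv/share2 -o cache=always &

  # one vhost-user chardev plus one vhost-user-fs-pci device per daemon;
  # guest memory must be a shareable backend for vhost-user to work
  qemu-system-x86_64 \
    -m 4G \
    -object memory-backend-file,id=mem,size=4G,mem-path=/dev/shm,share=on \
    -numa node,memdev=mem \
    -chardev socket,id=char1,path=/tmp/vhost-fs1.sock \
    -device vhost-user-fs-pci,chardev=char1,tag=share1 \
    -chardev socket,id=char2,path=/tmp/vhost-fs2.sock \
    -device vhost-user-fs-pci,chardev=char2,tag=share2 \
    ... (disk, network, etc.)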
An RFC version has been sent upstream: https://www.redhat.com/archives/libvir-list/2019-November/msg00005.html
Please can you indicate the exact XML syntax to select virtiofs?

Adding

  <filesystem type='mount' accessmode='mapped'>
    <source dir='/mnt/nfstest'/>
    <target dir='/tmp/virtionfstest'/>
    <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
  </filesystem>

leads to the error:

  internal error: qemu unexpectedly closed the monitor: 2019-11-25T21:01:20.052985Z qemu-kvm: -device virtio-9p-pci,id=fs0,fsdev=fsdev-fs0,mount_tag=/tmp/virtionfstest,bus=pci.5,addr=0x0: 'virtio-9p-pci' is not a valid device model name

Thanks,
Eric
https://www.redhat.com/archives/libvir-list/2019-November/msg00011.html

  <filesystem type='mount' accessmode='passthrough'>
    <driver type='virtio-fs'/>
    <source dir='/path'/>
    <target dir='/path'/>
  </filesystem>
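One thing the XML above does not show: the vhost-user-fs device needs the guest memory to be shared with the virtiofsd process, so the domain also has to carry a shared memory backing. A minimal sketch, taken from the hugepage-based configuration used in the verification steps later in this bug (a file- or memfd-backed shared configuration should work as well):

  <memoryBacking>
    <hugepages>
      <page size='2048' unit='KiB'/>
    </hugepages>
    <access mode='shared'/>
  </memoryBacking>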
Jano's update for libvirt status (from Dec 16): v2 still in progress, but it should make it for 8.2.
v2: https://www.redhat.com/archives/libvir-list/2020-January/msg00980.html
v3: https://www.redhat.com/archives/libvir-list/2020-January/msg01401.html
v4: https://www.redhat.com/archives/libvir-list/2020-February/msg00707.html
v5 (already acked, waiting for the upstream feature freeze to end - ETA Mar 2nd): https://www.redhat.com/archives/libvir-list/2020-February/msg01046.html
Pushed upstream (all commits authored and signed off by Ján Tomko <jtomko>; CommitDate: 2020-03-04 12:08:50 +0100):

commit 0627150a56fd53841918d558d8466feceb18552a
    qemu: build vhost-user-fs device command line
    Format the 'vhost-user-fs' device on the QEMU command line. This device provides shared file system access using the FUSE protocol carried over virtio. The actual file server is implemented in an external vhost-user-fs device backend process.
    https://bugzilla.redhat.com/show_bug.cgi?id=1694166
    Reviewed-by: Daniel P. Berrangé <berrange>, Peter Krempa <pkrempa>; Tested-by: Andrea Bolognani <abologna>

commit 071a1ab92fbbd58f68fb4929d004d6155759067e
    qemu: use the vhost-user schemas to find binary
    Look into /usr/share/qemu/vhost-user to see whether we can find a suitable virtiofsd binary, in case the user did not provide one in the domain XML.
    Reviewed-by: Daniel P. Berrangé <berrange>, Peter Krempa <pkrempa>; Tested-by: Andrea Bolognani <abologna>

commit 9de5d69c218faa0e25c5d6a56ab5f6bacbd1a132
    qemu: put virtiofsd in the emulator cgroup
    Wire up the code to put virtiofsd in the emulator cgroup on domain startup.
    Reviewed-by: Peter Krempa <pkrempa>; Tested-by: Andrea Bolognani <abologna>

commit f0f986efa8a8e352fbdce7079ec440a4f3c8f522
    qemu: add code for handling virtiofsd
    Start virtiofsd for each <filesystem> device using it. Pre-create the socket for communication with QEMU and pass it to virtiofsd. Note that virtiofsd needs to run as root.
    https://bugzilla.redhat.com/show_bug.cgi?id=1694166
    Introduced by QEMU commit a43efa34c7d7b628cbf1ec0fe60043e5c91043ea
    Reviewed-by: Peter Krempa <pkrempa>; Tested-by: Andrea Bolognani <abologna>

commit 5c0444a38bb37ddeb7049683ef72d02beab9e617
    qemu: forbid migration with vhost-user-fs device
    This is not yet supported.
    Reviewed-by: Daniel P. Berrangé <berrange>, Peter Krempa <pkrempa>; Tested-by: Andrea Bolognani <abologna>

commit efaf46811c909ee5333360fba1d75ae82352964a
    qemu: validate virtiofs filesystems
    Reject unsupported configurations.
    Reviewed-by: Peter Krempa <pkrempa>, Masayoshi Mizuma <m.mizuma.com>; Tested-by: Andrea Bolognani <abologna>

commit f04319a5449974f1553458e96c2a6ee25e65ab93
    qemu: add virtiofsd_debug to qemu.conf
    Add a 'virtiofsd_debug' option for tuning whether to run virtiofsd in debug mode.
    Reviewed-by: Daniel P. Berrangé <berrange>, Peter Krempa <pkrempa>; Tested-by: Andrea Bolognani <abologna>

commit 66079339847dc942b9b673e3040b56b055a8d8f5
    conf: add virtiofs-related elements and attributes
    Add more elements for tuning the virtiofsd daemon and the vhost-user-fs device:
      <driver type='virtiofs' queue='1024' xattr='on'>
        <binary path='/usr/libexec/virtiofsd'>
          <cache mode='always'/>
          <lock posix='off' flock='off'/>
        </binary>
      </driver>
    Reviewed-by: Daniel P. Berrangé <berrange>, Masayoshi Mizuma <m.mizuma.com>, Peter Krempa <pkrempa>; Tested-by: Andrea Bolognani <abologna>

commit ecc6ad6b90ad674a903c95d2a637f8b1b5833be2
    conf: qemu: add virtiofs fsdriver type
    Introduce a new 'virtiofs' driver type for filesystem.
      <filesystem type='mount' accessmode='passthrough'>
        <driver type='virtiofs'/>
        <source dir='/path'/>
        <target dir='mount_tag'/>
        <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
      </filesystem>
    Reviewed-by: Daniel P. Berrangé <berrange>, Peter Krempa <pkrempa>; Tested-by: Andrea Bolognani <abologna>

commit aecf1f5d702ad710aed99a688f38f05cc304b03a
    docs: add virtiofs kbase
    Add a document describing the usage of virtiofs.
    Reviewed-by: Peter Krempa <pkrempa>; Tested-by: Andrea Bolognani <abologna>

commit d99128a62b9f84e3e5b372d1e6419f4f1d1dffe6
    qemu: add QEMU_CAPS_DEVICE_VHOST_USER_FS
    Introduced by QEMU commit 98fc1ada4cf70af0f1df1a2d7183cf786fc7da05 ("virtio: add vhost-user-fs base device"). Released in QEMU v4.2.0.
    Reviewed-by: Peter Krempa <pkrempa>, Daniel P. Berrangé <berrange>; Acked-by: Stefan Hajnoczi <stefanha>; Tested-by: Andrea Bolognani <abologna>

commit 99dc98db3d3e2381f322120bf00c25ba0501b092
    qemuxml2xmltest: set driver as privileged
    Some validation check might reject unprivileged drivers in the future.
    Reviewed-by: Peter Krempa <pkrempa>; Tested-by: Andrea Bolognani <abologna>

commit 6baf97ef2c7416f3d81bdc6cf20f121b62c8fd4f
    qemu: pass virDomainObjPtr to qemuExtDevicesSetupCgroup
    Reviewed-by: Peter Krempa <pkrempa>; Tested-by: Andrea Bolognani <abologna>

commit b164eac5e1d4ebe17e673f0427b70f862a670f94
    qemuExtDevicesStart: pass logManager
    Pass logManager to qemuExtDevicesStart for future usage.
    Reviewed-by: Daniel P. Berrangé <berrange>; Tested-by: Andrea Bolognani <abologna>

commit 3913abd476cfe663db978d9110daa8bdc6d4e5b6
    schema: wrap fsDriver in a choice group
    Allow adding new groups without changing indentation.
    Reviewed-by: Peter Krempa <pkrempa>, Daniel P. Berrangé <berrange>; Acked-by: Stefan Hajnoczi <stefanha>; Tested-by: Andrea Bolognani <abologna>

git describe: v6.1.0-20-g0627150a56
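Putting the new elements from these commits together, a complete <filesystem> definition could look like the sketch below. The paths, mount tag, and tuning values are illustrative only; xattr is placed on <binary> as in the verification XML later in this bug:

  <filesystem type='mount' accessmode='passthrough'>
    <driver type='virtiofs' queue='1024'/>
    <binary path='/usr/libexec/virtiofsd' xattr='on'>
      <cache mode='always'/>
      <lock posix='on' flock='on'/>
    </binary>
    <source dir='/srv/shared'/>
    <target dir='shared_tag'/>
  </filesystem>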
Hi Ján,

virtiofsd supports the cache modes none/auto/always, but libvirt can only set none/always. Is that the expected behavior?
Verified with:
libvirt-6.0.0-10.el8.x86_64
qemu-kvm-4.2.0-13.module+el8.2.0+5898+fb4bceae.x86_64
kernel-4.18.0-187.el8.x86_64

Test steps:

1. Define a guest with multiple virtiofs filesystems:
#virsh edit vm1
<domain>
  <memoryBacking>
    <hugepages>
      <page size='2048' unit='KiB'/>
    </hugepages>
    <access mode='shared'/>
  </memoryBacking>
  <cpu mode='host-model' check='partial'>
    <numa>
      <cell id='0' cpus='0-7' memory='2097152' unit='KiB' memAccess='shared'/>
    </numa>
  </cpu>
  ...
  <devices>
    <filesystem type='mount' accessmode='passthrough'>
      <driver type='virtiofs' queue='512' iommu='on' ats='on'/>
      <binary path='/usr/libexec/virtiofsd' xattr='on'>
        <cache mode='none'/>
        <lock posix='on' flock='on'/>
      </binary>
      <source dir='/path1'/>
      <target dir='mount_tag'/>
      <alias name='ua-1035e984-8238-46e1-bf56-b546246e1a39'/>
      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
    </filesystem>
    <filesystem type='mount' accessmode='passthrough'>
      <driver type='virtiofs'/>
      <binary path='/usr/libexec/virtiofsd' xattr='on'>
        <cache mode='always'/>
        <lock posix='on' flock='on'/>
      </binary>
      <source dir='/path2'/>
      <target dir='mount_tag1'/>
      <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
    </filesystem>
  </devices>
  ...
</domain>

2. Create the shared dirs on the host:
#mkdir /path1
#mkdir /path2

3. Allocate hugepages:
#virsh allocpages 2M 1024

4. Start the guest:
#virsh start vm1

5. Check the live XML; the virtiofs filesystems are present:
# virsh dumpxml vm1 | grep -A10 filesystem
    <filesystem type='mount' accessmode='passthrough'>
      <driver type='virtiofs' queue='512'/>
      <binary path='/usr/libexec/virtiofsd' xattr='on'>
        <cache mode='none'/>
        <lock posix='on' flock='on'/>
      </binary>
      <source dir='/path1'/>
      <target dir='mount_tag'/>
      <alias name='ua-1035e984-8238-46e1-bf56-b546246e1a39'/>
      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
    </filesystem>
    <filesystem type='mount' accessmode='passthrough'>
      <driver type='virtiofs'/>
      <binary path='/usr/libexec/virtiofsd' xattr='on'>
        <cache mode='always'/>
        <lock posix='on' flock='on'/>
      </binary>
      <source dir='/path2'/>
      <target dir='mount_tag1'/>
      <alias name='fs1'/>
      <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
    </filesystem>

6. Check the virtiofsd processes:
#ps aux | grep -i virtiofs
root  38614  0.3  0.0   68424  5728 ?  S   05:56  0:00 /usr/libexec/virtiofsd --fd=36 -o source=/path1,cache=always,xattr,flock,posix_lock
root  38618  0.3  0.0   68424  5748 ?  S   05:56  0:00 /usr/libexec/virtiofsd --fd=36 -o source=/path2,cache=always,xattr,flock,posix_lock
root  38671  0.0  0.2 4214096 18596 ?  Sl  05:56  0:00 /usr/libexec/virtiofsd --fd=36 -o source=/path1,cache=always,xattr,flock,posix_lock
root  38672  0.0  0.3 4214096 26712 ?  Sl  05:56  0:00 /usr/libexec/virtiofsd --fd=36 -o source=/path2,cache=always,xattr,flock,posix_lock

7. Check the qemu command line:
#ps aux | grep -i qemu-kvm
-object memory-backend-file,id=ram-node0,prealloc=yes,mem-path=/dev/hugepages/libvirt/qemu/6-vm1,share=yes,size=2147483648
-numa node,nodeid=0,cpus=0-7,memdev=ram-node0
...
-chardev socket,id=chr-vu-fs0,path=/var/lib/libvirt/qemu/domain-6-vm1/fs0-virtiofsd.sock
-device vhost-user-fs-pci,chardev=chr-vu-fs0,queue-size=1024,tag=mount_tag,bus=pci.6,addr=0x0
-chardev socket,id=chr-vu-fs1,path=/var/lib/libvirt/qemu/domain-6-vm1/fs1-virtiofsd.sock
-device vhost-user-fs-pci,chardev=chr-vu-fs1,tag=mount_tag1,bus=pci.8,addr=0x0

8. Log into the guest and mount the virtiofs filesystems:
[root@dhcp19-129-43 ~]# mkdir /mount1; mount -t virtiofs mount_tag /mount1
[root@dhcp19-129-43 ~]# mkdir /mount2; mount -t virtiofs mount_tag1 /mount2
[root@dhcp19-129-43 ~]# df -h
Filesystem             Size  Used Avail Use% Mounted on
devtmpfs               1.9G     0  1.9G   0% /dev
tmpfs                  1.9G     0  1.9G   0% /dev/shm
tmpfs                  1.9G   17M  1.9G   1% /run
tmpfs                  1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/mapper/rhel-root  8.0G  1.6G  6.5G  20% /
/dev/vda1             1014M  165M  850M  17% /boot
tmpfs                  117M     0  117M   0% /run/user/0
mount_tag               50G   29G   22G  58% /mount1
mount_tag1              50G   29G   22G  58% /mount2

9. Generate a file on the mountpoint in the guest:
(guest os)# dd if=/dev/random of=testfile bs=1M count=1024
dd: warning: partial read (115 bytes); suggest iflag=fullblock
0+1024 records in
0+1024 records out
81955 bytes (82 kB, 80 KiB) copied, 0.319422 s, 257 kB/s
(guest os)# md5sum /mount1/testfile
7ec0565e8c0e504a1163e0ff358c2c40  /mount1/testfile

10. Check the md5sum value on the host:
(host os)# md5sum /path1/testfile
7ec0565e8c0e504a1163e0ff358c2c40  /path1/testfile

11. Do the same test for mount_tag1.

12. Unmount the mountpoints in the guest:
# umount /mount1
# umount /mount2
# df -h | grep mount
no output

13. Do the lifecycle tests of the guest: start, reboot, suspend/resume, shutdown, destroy.

14. Coldplug and coldunplug a virtiofs filesystem:
#cat fs.xml
<filesystem type='mount' accessmode='passthrough'>
  <driver type='virtiofs'/>
  <binary path='/usr/libexec/virtiofsd'/>
  <source dir='/path3'/>
  <target dir='mount_tag2'/>
</filesystem>
#virsh attach-device vm1 fs.xml --config
Device attached successfully
#virsh dumpxml vm1 | grep -A6 mount
  <filesystem type='mount' accessmode='passthrough'>
    <driver type='virtiofs'/>
    <binary path='/usr/libexec/virtiofsd'/>
    <source dir='/path3'/>
    <target dir='mount_tag2'/>
    <address type='pci' domain='0x0000' bus='0x09' slot='0x00' function='0x0'/>
  </filesystem>
#virsh detach-device vm1 fs.xml --config
Device detached successfully
#virsh dumpxml vm1 | grep -A6 mount
no output

15. Do managedsave:
#virsh managedsave vm1
error: Failed to save domain vm1 state
error: Requested operation is not valid: migration with virtiofs device is not supported
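As a side note on step 8: the guest-side mounts above are manual, so they disappear on guest reboot. If the share should be mounted automatically, an /etc/fstab entry using the virtiofs filesystem type can be used instead; a small sketch, assuming the mount_tag and /mount1 names from the steps above:

  # /etc/fstab inside the guest
  mount_tag  /mount1  virtiofs  defaults  0  0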
------- Comment From hannsj_uhl.com 2020-03-13 11:33 EDT -------
Comment from Leonardo Augusto Guimaraes Garcia 2020-03-13 10:18:36 CDT:

This bug is not related to Power. It has been opened for the generic support of virtio-fs in Red Hat products. Power is broken, and we don't have the patches to fix it accepted upstream yet. In other words, for IBM Power this RHEL 8.2 feature request is not applicable. Thanks.
(In reply to yafu from comment #33)
> Hi Ján,
>
> virtiofsd supports the cache modes none/auto/always, but libvirt can only
> set none/always. Is that the expected behavior?

Yes, I omitted that one on purpose - 'auto' should be the default that is used when you don't specify the cache mode.
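To make the mapping concrete, a sketch of the three resulting configurations (element placement as in the verification XML above; the fallback to 'auto' when <cache> is omitted is the expectation stated here, not something libvirt prints back):

  <!-- explicit: virtiofsd runs with cache=none -->
  <binary path='/usr/libexec/virtiofsd'>
    <cache mode='none'/>
  </binary>

  <!-- explicit: virtiofsd runs with cache=always -->
  <binary path='/usr/libexec/virtiofsd'>
    <cache mode='always'/>
  </binary>

  <!-- no <cache> element: virtiofsd is left at its default (auto) -->
  <binary path='/usr/libexec/virtiofsd'/>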
Tested virtiofsd_debug with:
libvirt-6.0.0-14.module+el8.2.0+6069+78a1cb09.x86_64
qemu-kvm-4.2.0-15.module+el8.2.0+6029+618ef2ec.x86_64

Test steps:

1. Enable virtiofsd_debug in qemu.conf and restart the libvirtd service:
#cat /etc/libvirt/qemu.conf
virtiofsd_debug = 1
#systemctl restart libvirtd

2. Start a guest with a virtiofs filesystem device:
#virsh dumpxml vm1
<domain>
  <memoryBacking>
    <hugepages>
      <page size='2048' unit='KiB'/>
    </hugepages>
    <access mode='shared'/>
  </memoryBacking>
  <cpu mode='host-model' check='partial'>
    <numa>
      <cell id='0' cpus='0-7' memory='2097152' unit='KiB' memAccess='shared'/>
    </numa>
  </cpu>
  ...
  <devices>
    <filesystem type='mount' accessmode='passthrough'>
      <driver type='virtiofs' queue='512' iommu='on' ats='on'/>
      <binary path='/usr/libexec/virtiofsd' xattr='on'>
        <cache mode='none'/>
        <lock posix='on' flock='on'/>
      </binary>
      <source dir='/path1'/>
      <target dir='mount_tag1'/>
      <alias name='fs1'/>
      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
    </filesystem>
    ...
  </devices>
  ...
</domain>

3. Check the virtiofsd log for fs1 of the guest:
#cat /var/log/libvirt/qemu/vm1-fs1-virtiofsd.log
...
[970719817112398] [ID: 00206028] virtio_session_mount: Waiting for vhost-user socket connection...
[970719931941416] [ID: 00206028] virtio_session_mount: Received vhost-user socket connection
[970719941099902] [ID: 00000001] virtio_loop: Entry
[970719941141277] [ID: 00000001] virtio_loop: Waiting for VU event
[970720568113603] [ID: 00000001] virtio_loop: Got VU event
[970720568147020] [ID: 00000001] virtio_loop: Waiting for VU event
[970720568185506] [ID: 00000001] virtio_loop: Got VU event
[970720568195900] [ID: 00000001] virtio_loop: Waiting for VU event
[970720568206913] [ID: 00000001] virtio_loop: Got VU event
[970720568216906] [ID: 00000001] virtio_loop: Waiting for VU event
[970720568219626] [ID: 00000001] virtio_loop: Got VU event
[970720568228800] [ID: 00000001] virtio_loop: Waiting for VU event
[970720568274365] [ID: 00000001] virtio_loop: Got VU event
[970720568285453] [ID: 00000001] virtio_loop: Waiting for VU event
...

4. Log into the guest and mount the virtiofs shared dir:
[guest]#mount -t virtiofs mount_tag1 /mnt

5. Check the virtiofsd log for fs1 of the guest:
[host]#cat /var/log/libvirt/qemu/vm1-fs1-virtiofsd.log
...
[971208276271081] [ID: 00000096] unique: 2, opcode: INIT (26), nodeid: 0, insize: 56, pid: 0
[971208276278599] [ID: 00000096] INIT: 7.31
[971208276281297] [ID: 00000096] flags=0x03fffffb
[971208276283368] [ID: 00000096] max_readahead=0x00020000
[971208276286133] [ID: 00000096] lo_init: activating flock locks
[971208276288316] [ID: 00000096] lo_init: activating posix locks
[971208276290825] [ID: 00000096] INIT: 7.31
[971208276292862] [ID: 00000096] flags=0x0044f43b
[971208276294790] [ID: 00000096] max_readahead=0x00020000
[971208276296717] [ID: 00000096] max_write=0x00100000
[971208276298638] [ID: 00000096] max_background=0
[971208276300822] [ID: 00000096] congestion_threshold=0
[971208276302549] [ID: 00000096] time_gran=1
[971208276304863] [ID: 00000096] unique: 2, success, outsize: 80
[971208276307556] [ID: 00000096] virtio_send_msg: elem 0: with 2 in desc of length 80

6. Disable virtiofsd_debug in qemu.conf and restart libvirtd:
#cat /etc/libvirt/qemu.conf
virtiofsd_debug = 0
#systemctl restart libvirtd

7. Log into the guest, unmount and remount the virtiofs shared dir:
[guest]#umount /mnt
[guest]#mount -t virtiofs mount_tag1 /mnt

8. Check the virtiofsd log for fs1 of the guest: no more output.
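A small convenience when reproducing this, assuming the log path from step 3 (not part of the test plan itself): the per-device debug log can simply be followed on the host while exercising the mount in the guest:

  # on the host: watch virtiofsd debug output live
  tail -f /var/log/libvirt/qemu/vm1-fs1-virtiofsd.log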
Memory leak check for destroy/start of a guest with a virtiofs device.

Test steps:
1. # systemctl stop libvirtd
2. # systemctl stop virtlogd
3. # virtlogd -d
4. # valgrind --leak-check=full --trace-children=no --child-silent-after-fork=yes libvirtd
   Open another terminal to do the following test.
5. # setenforce 0 (because of Bug 1812427 - Failed to start guest with virtiofs filesystem device if starting libvirtd in the foreground)
6. Start a guest with a virtiofs device:
   #virsh start vm1
7. Destroy the guest with the virtiofs device:
   #virsh destroy vm1
8. Check for memory leaks in terminal 1:
==199755== LEAK SUMMARY:
==199755==    definitely lost: 0 bytes in 0 blocks
==199755==    indirectly lost: 0 bytes in 0 blocks
==199755==      possibly lost: 1,360 bytes in 19 blocks
==199755==    still reachable: 799,538 bytes in 11,635 blocks
==199755==                       of which reachable via heuristic:
==199755==                         newarray: 1,536 bytes in 16 blocks
==199755==         suppressed: 0 bytes in 0 blocks
==199755== Reachable blocks (those to which a pointer was found) are not shown.
==199755== To see them, rerun with: --leak-check=full --show-leak-kinds=all
==199755==
==199755== For lists of detected and suppressed errors, rerun with: -s
==199755== ERROR SUMMARY: 19 errors from 19 contexts (suppressed: 0 from 0)
Also did negative tests for XML validation: a queue size larger than an unsigned int, a path that is not absolute, a cache mode other than none/always, etc.
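For example, one such negative case might look like the hypothetical XML below (relative binary path plus an unsupported cache mode); libvirt is expected to reject such a definition rather than pass it through to virtiofsd:

  <filesystem type='mount' accessmode='passthrough'>
    <driver type='virtiofs'/>
    <binary path='usr/libexec/virtiofsd'>
      <cache mode='sometimes'/>
    </binary>
    <source dir='/path1'/>
    <target dir='mount_tag'/>
  </filesystem>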
Hi Ján,

The virtiofsd debug log is named '$vmname-$alias-virtiofsd.log'. If the alias name is 'ua-' followed by a UUID, such as 'ua-1035e984-8238-46e1-bf56-b546246e1a39', the log name is about 50 characters longer than the VM name, which may make the virtiofsd debug log name too long to create, and the guest then fails to start. Could you help check that, please?
Right, I thought the 108-character limit for UNIX socket paths would be a bigger issue, but those are placed in the domain-$id-$shortName directory; only the log files use the long name.

Can you file a separate bug for that? The logging location needs changing anyway:
https://www.redhat.com/archives/libvir-list/2020-March/msg00818.html
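To make the distinction concrete, these are the two locations as they appear earlier in this bug (the domain id, VM name, and alias are specific to that test run):

  # vhost-user socket: placed under the per-domain directory using the short name
  /var/lib/libvirt/qemu/domain-6-vm1/fs0-virtiofsd.sock

  # virtiofsd debug log: built from the VM name plus the full device alias
  /var/log/libvirt/qemu/vm1-fs1-virtiofsd.log

With a 'ua-<uuid>' alias, the log filename grows by roughly the length of the UUID, which is the failure reported in the previous comment.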
(In reply to Ján Tomko from comment #41)
> Right, I thought the 108-character limit for UNIX socket paths would be a
> bigger issue, but those are placed in the domain-$id-$shortName directory;
> only the log files use the long name.
>
> Can you file a separate bug for that? The logging location needs changing
> anyway:
> https://www.redhat.com/archives/libvir-list/2020-March/msg00818.html

Thanks. Filed a bug to track the issue:
https://bugzilla.redhat.com/show_bug.cgi?id=1817401
Based on comment 34 through comment 39, moving the bug to verified.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:2017