Description of problem:
The guest gets stuck when running fio testing on the shared file system (Z:), and the virtio-fs driver cannot be uninstalled.

Version-Release number of selected component (if applicable):
qemu-kvm-5.2.0-9.module+el8.4.0+10182+4161bd91.x86_64
kernel-core-4.18.0-287.el8.dt4.x86_64
seabios-1.14.0-1.module+el8.3.0+7638+07cf13d2.x86_64
virtio-win-prewhql-196

How reproducible:
9/10 (win2019)

Steps to Reproduce:
1. Start the virtiofsd daemon on the host:
# /usr/libexec/virtiofsd --socket-path=/tmp/vhostqemu -o source=/tmp/virtiofs_test -o cache=always
2. Boot a win2019 guest with a virtiofs device:
    -smp 8 \
    -m 4G \
    -object memory-backend-file,id=mem,size=4G,mem-path=/dev/shm,share=on \
    -numa node,memdev=mem \
    -chardev socket,id=char0,path=/tmp/vhostqemu \
    -device vhost-user-fs-pci,queue-size=1024,chardev=char0,tag=myfs \
3. Install the winfsp tool and the virtiofs driver in the guest.
4. Run virtiofs.exe from its directory:
virtiofs.exe -d -1 -D -
5. Open Computer in the guest and see that a new virtioFS (Z:) volume is shown.
6. Run fio testing on Z::
fio.exe --name=stress --filename=Z:/fs_test --ioengine=windowsaio --rw=write --direct=1 --bs=4K --size=1G --iodepth=256 --numjobs=128 --runtime=1800 --thread

Actual results:
The guest gets stuck while running fio on Z:.

Expected results:
The guest can be operated normally.

Additional info:
1. fio-3.1-x64.msi was used for testing.
2. Easier to reproduce on win2019.
3. Not reproducible on qemu-kvm-5.1.0-20.module+el8.3.1+9918+230f5c26.x86_64, so it should be a qemu regression issue.
(In reply to menli from comment #0)
> 3. not reproduce on qemu-kvm-5.1.0-20.module+el8.3.1+9918+230f5c26.x86_64,
> so it should be a qemu regression issue.

Max: any ideas for which patches between 5.1.0-20 and 5.2.0-9 could be the culprit?
(In reply to Ademar Reis from comment #1)
> (In reply to menli from comment #0)
> > 3. not reproduce on qemu-kvm-5.1.0-20.module+el8.3.1+9918+230f5c26.x86_64,
> > so it should be a qemu regression issue.
>
> Max: any ideas for which patches between 5.1.0-20 and 5.2.0-9 could be the
> culprit?

No, unfortunately not. Looking through the commits in that range that concern virtiofsd, I can’t see anything suspicious...
(In reply to Max Reitz from comment #2)
> (In reply to Ademar Reis from comment #1)
> > (In reply to menli from comment #0)
> > > 3. not reproduce on qemu-kvm-5.1.0-20.module+el8.3.1+9918+230f5c26.x86_64,
> > > so it should be a qemu regression issue.
> >
> > Max: any ideas for which patches between 5.1.0-20 and 5.2.0-9 could be the
> > culprit?
>
> No, unfortunately not. Looking through the commits in that range that
> concern virtiofsd, I can’t see anything suspicious...

Can the submitter confirm whether this is still an issue with the latest available versions?

If so, perhaps we could devise a bisect strategy to find the culprit?
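If a bisect does become necessary, `git bisect run` could automate it. Below is a self-contained toy sketch of that workflow: it builds a throwaway repo in which "the bug" appears at commit 7 of 10 and lets git find that commit. The throwaway repo, the `check.sh` script, and the good/bad markers in the `state` file are all illustrative assumptions; against the real qemu tree one would instead mark the known-good and known-bad builds (e.g. the upstream v5.1.0 and v5.2.0 tags) and have the check script rebuild virtiofsd and rerun the fio reproducer on Z:.

```shell
set -e

# Throwaway repo standing in for the qemu tree.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email bisect@example.com
git config user.name bisect

# Ten commits; the regression is "introduced" at commit 7.
for i in $(seq 1 10); do
    if [ "$i" -ge 7 ]; then
        echo "rev $i bad" > state
    else
        echo "rev $i good" > state
    fi
    git add state
    git commit -qm "commit $i"
done

# The per-step test: exit 0 = good, nonzero = bad. For the real bug this
# would rebuild and run the fio reproducer, failing if the guest hangs.
cat > check.sh <<'EOF'
#!/bin/sh
grep -q good state
EOF
chmod +x check.sh

# bad = HEAD (commit 10), good = root commit (commit 1).
git bisect start HEAD "$(git rev-list --max-parents=0 HEAD)" > /dev/null
git bisect run ./check.sh > /dev/null

bad_subject=$(git show -s --format=%s refs/bisect/bad)
echo "first bad: $bad_subject"   # prints: first bad: commit 7
git bisect reset > /dev/null
```

`git bisect run` halves the candidate range at each step, so even the several hundred commits between 5.1.0-20 and 5.2.0-9 would need only around ten build-and-test cycles.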
(In reply to Klaus Heinrich Kiwi from comment #3)
> (In reply to Max Reitz from comment #2)
> > (In reply to Ademar Reis from comment #1)
> > > (In reply to menli from comment #0)
> > > > 3. not reproduce on qemu-kvm-5.1.0-20.module+el8.3.1+9918+230f5c26.x86_64,
> > > > so it should be a qemu regression issue.
> > >
> > > Max: any ideas for which patches between 5.1.0-20 and 5.2.0-9 could be the
> > > culprit?
> >
> > No, unfortunately not. Looking through the commits in that range that
> > concern virtiofsd, I can’t see anything suspicious...
>
> Can the submitter confirm whether this is still an issue with the latest
> available versions?
>
> If so, perhaps we could devise a bisect strategy to find the culprit?

Didn't hit this issue in recent testing.

pkg info:
qemu-kvm-6.0.0-19.module+el8.5.0+11385+6e7d542e.x86_64
qemu-kvm-6.0.0-5.el9.x86_64
I'll assume this is fixed in the current release.