Bug 1935490 - [virtio-fs] guest stuck when doing fio testing in the shared file system
Summary: [virtio-fs] guest stuck when doing fio testing in the shared file system
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux Advanced Virtualization
Classification: Red Hat
Component: qemu-kvm
Version: 8.4
Hardware: x86_64
OS: Windows
Priority: high
Severity: high
Target Milestone: rc
Target Release: 8.4
Assignee: Hanna Czenczek
QA Contact: xiagao
URL:
Whiteboard:
Depends On:
Blocks: 1948358
 
Reported: 2021-03-05 01:41 UTC by menli@redhat.com
Modified: 2021-07-06 13:55 UTC (History)
5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-07-06 13:55:34 UTC
Type: ---
Target Upstream Version:
Embargoed:



Description menli@redhat.com 2021-03-05 01:41:47 UTC
Description of problem:
The guest gets stuck when running fio testing in the shared file system (Z:), and the virtio-fs driver cannot be uninstalled.

Version-Release number of selected component (if applicable):
qemu-kvm-5.2.0-9.module+el8.4.0+10182+4161bd91.x86_64
kernel-core-4.18.0-287.el8.dt4.x86_64
seabios-1.14.0-1.module+el8.3.0+7638+07cf13d2.x86_64
virtio-win-prewhql-196

How reproducible:
9/10 (win2019)

Steps to Reproduce:
1. Start the virtiofsd daemon on the host:
#/usr/libexec/virtiofsd --socket-path=/tmp/vhostqemu -o source=/tmp/virtiofs_test -o cache=always

2. Boot up a win2019 guest with a virtiofs device:
-smp 8 \ 
-m 4G \ 
-object memory-backend-file,id=mem,size=4G,mem-path=/dev/shm,share=on \ 
-numa node,memdev=mem \ 
-chardev socket,id=char0,path=/tmp/vhostqemu \ 
-device vhost-user-fs-pci,queue-size=1024,chardev=char0,tag=myfs \ 

3. Install the WinFsp tool and the virtio-fs driver in the guest.
4. Run virtiofs.exe in the directory:
virtiofs.exe -d -1 -D -
5. Open Computer in the guest and confirm that a new virtioFS (Z:) volume is shown.
6. Run fio on the Z: volume:
fio.exe --name=stress --filename=Z:/fs_test --ioengine=windowsaio --rw=write --direct=1 --bs=4K --size=1G --iodepth=256 --numjobs=128 --runtime=1800 --thread
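For reference, the host-side commands above can be assembled into one script. The qemu-kvm binary path, machine type, and guest image name below are assumptions for illustration, not taken from the report; the device fragment in step 2 is reproduced as given.

```shell
#!/bin/sh
# Host side: export /tmp/virtiofs_test via virtiofsd (step 1),
# then launch the guest (step 2). Image name is illustrative.
mkdir -p /tmp/virtiofs_test
/usr/libexec/virtiofsd --socket-path=/tmp/vhostqemu \
    -o source=/tmp/virtiofs_test -o cache=always &

# vhost-user-fs needs shared guest RAM, hence the
# memory-backend-file on /dev/shm with share=on and the NUMA binding.
/usr/libexec/qemu-kvm \
    -machine q35,accel=kvm \
    -smp 8 \
    -m 4G \
    -object memory-backend-file,id=mem,size=4G,mem-path=/dev/shm,share=on \
    -numa node,memdev=mem \
    -chardev socket,id=char0,path=/tmp/vhostqemu \
    -device vhost-user-fs-pci,queue-size=1024,chardev=char0,tag=myfs \
    -drive file=win2019.qcow2,format=qcow2,if=virtio
```

Inside the guest, the export becomes visible as the Z: volume once virtiofs.exe maps the tag myfs, as steps 3-5 describe.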

Actual results:
The guest gets stuck when running fio on Z:.

Expected results:
The guest can be operated normally.

Additional info:
1. Used fio-3.1-x64.msi for testing.
2. Easier to reproduce on win2019.
3. Not reproducible on qemu-kvm-5.1.0-20.module+el8.3.1+9918+230f5c26.x86_64, so it appears to be a QEMU regression.

Comment 1 Ademar Reis 2021-03-15 18:00:27 UTC
(In reply to menli from comment #0)
> 3. Not reproducible on qemu-kvm-5.1.0-20.module+el8.3.1+9918+230f5c26.x86_64,
> so it appears to be a QEMU regression.

Max: any ideas for which patches between 5.1.0-20 and 5.2.0-9 could be the culprit?

Comment 2 Hanna Czenczek 2021-03-16 10:47:54 UTC
(In reply to Ademar Reis from comment #1)
> (In reply to menli from comment #0)
> > 3. Not reproducible on qemu-kvm-5.1.0-20.module+el8.3.1+9918+230f5c26.x86_64,
> > so it appears to be a QEMU regression.
> 
> Max: any ideas for which patches between 5.1.0-20 and 5.2.0-9 could be the
> culprit?

No, unfortunately not.  Looking through the commits in that range that concern virtiofsd, I can’t see anything suspicious...

Comment 3 Klaus Heinrich Kiwi 2021-06-15 15:41:53 UTC
(In reply to Max Reitz from comment #2)
> (In reply to Ademar Reis from comment #1)
> > (In reply to menli from comment #0)
> > > 3. Not reproducible on qemu-kvm-5.1.0-20.module+el8.3.1+9918+230f5c26.x86_64,
> > > so it appears to be a QEMU regression.
> > 
> > Max: any ideas for which patches between 5.1.0-20 and 5.2.0-9 could be the
> > culprit?
> 
> No, unfortunately not.  Looking through the commits in that range that
> concern virtiofsd, I can’t see anything suspicious...

Can the submitter confirm whether this is still an issue with the latest available versions?

If so, perhaps we could devise a bisect strategy to find the culprit?

Comment 4 xiagao 2021-06-16 07:57:19 UTC
(In reply to Klaus Heinrich Kiwi from comment #3)
> (In reply to Max Reitz from comment #2)
> > (In reply to Ademar Reis from comment #1)
> > > (In reply to menli from comment #0)
> > > > 3. Not reproducible on qemu-kvm-5.1.0-20.module+el8.3.1+9918+230f5c26.x86_64,
> > > > so it appears to be a QEMU regression.
> > > 
> > > Max: any ideas for which patches between 5.1.0-20 and 5.2.0-9 could be the
> > > culprit?
> > 
> > No, unfortunately not.  Looking through the commits in that range that
> > concern virtiofsd, I can’t see anything suspicious...
> 
> Can the submitter confirm whether this is still an issue with the latest
> available versions?
> 
> If so, perhaps we could devise a bisect strategy to find the culprit?

Didn't hit this issue in recent testing.

pkg info:
qemu-kvm-6.0.0-19.module+el8.5.0+11385+6e7d542e.x86_64
qemu-kvm-6.0.0-5.el9.x86_64

Comment 5 Klaus Heinrich Kiwi 2021-07-06 13:55:34 UTC
I'll assume this is fixed in the current release.

