Bug 2222217
| Summary: | virtiofsd stops responding after pausing and resuming VM | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 9 | Reporter: | German Maglione <gmaglione> |
| Component: | virtiofsd | Assignee: | German Maglione <gmaglione> |
| Status: | VERIFIED | QA Contact: | xiagao |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 9.3 | CC: | jinzhao, juzhang, virt-maint, xiagao |
| Target Milestone: | rc | Keywords: | Triaged |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | virtiofsd-1.7.0-1.el9 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 2222221 | | |
| Bug Blocks: | | | |
Description
German Maglione
2023-07-12 09:39:53 UTC
Windows driver also hit.

I didn't reproduce this issue on the following env.

Guest: rhel9.3, 5.14.0-333.el9.x86_64
Host: rhel9.3, 5.14.0-324.el9.x86_64
qemu-kvm-8.0.0-5.el9.x86_64
virtio-win-prewhql-0.1-239
kernel-5.14.0-324.el9.x86_64
edk2-ovmf-20230301gitf80f052277c8-5.el9.noarch

Steps like comment 0:
1. Create a VM with a virtiofs device:
# /usr/libexec/virtiofsd --shared-dir /home/test --socket-path /tmp/sock1 --log-level debug
2. Boot the VM and mount virtiofs:
-chardev socket,id=char_virtiofs_fs,path=/tmp/sock1 \
-device vhost-user-fs-pci,id=vufs_virtiofs_fs,chardev=char_virtiofs_fs,tag=myfs,bus=pcie-root-port-3,addr=0x0 \
3. Stop and continue the VM:
(qemu) stop
(qemu) cont
4. Check virtiofs in the guest.

Results: it works well; I can read and write in virtiofs inside the guest.

Hi German, could you help to check the steps above if you're available? Thanks.

(In reply to xiagao from comment #3)
> Hi German, could you help to check the steps above if you're available, thanks.

The steps are OK, but if you check the debug output of virtiofsd, you will see that virtiofsd repeats the first operation in the VQ until it "catches up" with the entry in the guest. In my tests the unique value is incremented with each operation, but here it keeps using the number 20:

[2023-07-17T14:49:21Z DEBUG virtiofsd] QUEUE_EVENT
[2023-07-17T14:49:21Z DEBUG virtiofsd::server] Received request: opcode=Getattr (3), inode=1, unique=20, pid=847
[2023-07-17T14:49:21Z DEBUG virtiofsd::server] Replying OK, header: OutHeader { len: 120, error: 0, unique: 20 }
[2023-07-17T14:49:21Z DEBUG virtiofsd::server] Received request: opcode=Getattr (3), inode=1, unique=20, pid=847
[2023-07-17T14:49:21Z DEBUG virtiofsd::server] Replying OK, header: OutHeader { len: 120, error: 0, unique: 20 }
[2023-07-17T14:49:21Z DEBUG virtiofsd::server] Received request: opcode=Getattr (3), inode=1, unique=20, pid=847
...
[2023-07-17T14:49:21Z DEBUG virtiofsd::server] Replying OK, header: OutHeader { len: 120, error: 0, unique: 20 }
[2023-07-17T14:49:21Z DEBUG virtiofsd::server] Received request: opcode=Getattr (3), inode=1, unique=20, pid=847
[2023-07-17T14:49:21Z DEBUG virtiofsd::server] Replying OK, header: OutHeader { len: 120, error: 0, unique: 20 }

To make it fail, you should set a small queue-size in qemu, like:
-device vhost-user-fs-pci,queue-size=16,... \

(In reply to German Maglione from comment #4)
> To make it fail, you should set a small queue-size in qemu, like:
> -device vhost-user-fs-pci,queue-size=16,... \

Thank you. With queue-size=16, I can reproduce the problem.

Pre-verify this bz, as it works with the virtiofsd-1.7 version.

It works with virtiofsd-1.7, so verifying it.
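The stuck-queue symptom described in comment 4 (the same FUSE `unique` id replayed for every request after stop/cont) can be spotted mechanically in a saved virtiofsd debug log. The following is a minimal sketch, not part of the bug report; the log format is an assumption taken from the excerpts above, and the sample file path and `find_repeats` helper name are made up for illustration:

```shell
#!/bin/sh
# Spot the stuck-queue symptom in a virtiofsd debug log: count how many
# "Received request" lines share each FUSE "unique" id. A healthy log shows
# each id once; after the pause/resume bug, one id repeats for every
# replayed request.

# find_repeats LOGFILE -> prints each repeated id with its count
find_repeats() {
    # "Received request" lines carry "unique=<n>"; the "Replying OK" lines
    # use "unique: <n>" and are deliberately not matched here.
    grep -o 'unique=[0-9]*' "$1" | sort | uniq -c |
        awk '$1 > 1 { print $2, $1 }'
}

# Demo on a small sample shaped like the excerpts in comment 4
# (timestamps shortened; the exact format is an assumption).
cat > /tmp/virtiofsd-sample.log <<'EOF'
DEBUG virtiofsd::server] Received request: opcode=Getattr (3), inode=1, unique=20, pid=847
DEBUG virtiofsd::server] Replying OK, header: OutHeader { len: 120, error: 0, unique: 20 }
DEBUG virtiofsd::server] Received request: opcode=Getattr (3), inode=1, unique=20, pid=847
DEBUG virtiofsd::server] Replying OK, header: OutHeader { len: 120, error: 0, unique: 20 }
DEBUG virtiofsd::server] Received request: opcode=Lookup (1), inode=1, unique=22, pid=847
EOF

find_repeats /tmp/virtiofsd-sample.log   # prints: unique=20 2
```

On a log captured from a healthy run, `find_repeats` prints nothing; any output indicates the replay behavior German describes, even before the daemon stops responding entirely.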