Bug 2057252 - [virtiofsd] Can't access the shared directory on a Windows guest with the new virtiofsd (Rust)
Summary: [virtiofsd] Can't access the shared directory on a Windows guest with the new virtiofsd (Rust)
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: virtiofsd
Version: 9.0
Hardware: x86_64
OS: Windows
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Sergio Lopez
QA Contact: xiagao
URL:
Whiteboard:
Depends On:
Blocks: 2062572 2063722 2065173
 
Reported: 2022-02-23 04:30 UTC by xiagao
Modified: 2022-05-17 15:35 UTC
CC List: 20 users

Fixed In Version: virtiofsd-1.1.0-4.el9_0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 2062572, 2065173
Environment:
Last Closed: 2022-05-17 15:35:09 UTC
Type: ---
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHELPLAN-113170 0 None None None 2022-02-23 04:34:30 UTC
Red Hat Product Errata RHBA-2022:3888 0 None None None 2022-05-17 15:35:21 UTC

Description xiagao 2022-02-23 04:30:06 UTC
Description of problem:
Share a directory with the guest via virtiofsd, start virtiofs.exe in the guest, and try to access the mounted volume (Z:); access fails.

Version-Release number of selected component (if applicable):
qemu-kvm-6.2.0-8.el9.x86_64
virtiofsd-1.1.0-3.el9.x86_64
kernel-5.14.0-54.kpq0.el9.x86_64
RHEL-9.0.0-20220205.d.3
virtio-win-prewhql-215/216

How reproducible:
100%

Steps to Reproduce:
1. Start virtiofsd with the '--no-killpriv-v2' option:
# /usr/libexec/virtiofsd --socket-path=/tmp/sock1 -o source=/home/test -o cache=none --log-level debug --no-killpriv-v2
[2022-02-22T15:19:51Z INFO  virtiofsd] Waiting for vhost-user socket connection...

2. Start the win2019 guest with the following virtiofs-related QEMU options:
-chardev socket,id=char0,path=/tmp/sock1 \
-device vhost-user-fs-pci,chardev=char0,tag=myfs,queue-size=1024 \
-object memory-backend-file,id=mem,size=4G,mem-path=/dev/shm,share=on \
-numa node,memdev=mem \

3. Start virtiofs.exe in the guest; the Z: drive is assigned.
virtiofs.exe -d -1 -D C:\log2.txt

4. Try to access this volume.


Actual results:
Can't enter Z:; the shared volume doesn't work (see attachment).

From virtiofsd log:
[2022-02-22T15:19:54Z INFO  virtiofsd] Client connected, servicing requests
[2022-02-22T15:20:02Z DEBUG virtiofsd] HIPRIO_QUEUE_EVENT
[2022-02-22T15:20:02Z DEBUG virtiofsd] HIPRIO_QUEUE_EVENT
[2022-02-22T15:21:15Z DEBUG virtiofsd] HIPRIO_QUEUE_EVENT
[2022-02-22T15:21:15Z DEBUG virtiofsd::server] Received request: 26
[2022-02-22T15:21:15Z DEBUG virtiofsd::server] Replying OK, header: OutHeader { len: 80, error: 0, unique: 2 }
[2022-02-22T15:21:35Z DEBUG virtiofsd] HIPRIO_QUEUE_EVENT
[2022-02-22T15:21:35Z DEBUG virtiofsd::server] Received request: 26
[2022-02-22T15:21:35Z DEBUG virtiofsd::server] Replying OK, header: OutHeader { len: 80, error: 0, unique: 2 }
[2022-02-22T15:21:38Z DEBUG virtiofsd] HIPRIO_QUEUE_EVENT
[2022-02-22T15:21:38Z DEBUG virtiofsd::server] Received request: 17
[2022-02-22T15:21:38Z DEBUG virtiofsd::server] Replying OK, header: OutHeader { len: 96, error: 0, unique: 3 }
[2022-02-22T15:21:38Z DEBUG virtiofsd] HIPRIO_QUEUE_EVENT
[2022-02-22T15:21:38Z DEBUG virtiofsd::server] Received request: 1
[2022-02-22T15:21:38Z DEBUG virtiofsd::server] Replying ERROR, header: OutHeader { len: 16, error: -2, unique: 4 }
[2022-02-22T15:21:38Z DEBUG virtiofsd] HIPRIO_QUEUE_EVENT
[2022-02-22T15:21:38Z DEBUG virtiofsd::server] Received request: 1
[2022-02-22T15:21:38Z DEBUG virtiofsd::server] Replying ERROR, header: OutHeader { len: 16, error: -2, unique: 5 }
[2022-02-22T15:21:38Z DEBUG virtiofsd] HIPRIO_QUEUE_EVENT
[2022-02-22T15:21:38Z DEBUG virtiofsd::server] Received request: 14
[2022-02-22T15:21:38Z DEBUG virtiofsd::server] Replying ERROR, header: OutHeader { len: 16, error: -9, unique: 6 }
[2022-02-22T15:21:38Z DEBUG virtiofsd] HIPRIO_QUEUE_EVENT
[2022-02-22T15:21:38Z DEBUG virtiofsd::server] Received request: 18
[2022-02-22T15:21:38Z DEBUG virtiofsd::server] Replying ERROR, header: OutHeader { len: 16, error: -9, unique: 7 }
[2022-02-22T15:21:38Z DEBUG virtiofsd] HIPRIO_QUEUE_EVENT
[2022-02-22T15:21:38Z DEBUG virtiofsd::server] Received request: 1
[2022-02-22T15:21:38Z DEBUG virtiofsd::server] Replying ERROR, header: OutHeader { len: 16, error: -2, unique: 8 }
[2022-02-22T15:21:38Z DEBUG virtiofsd] HIPRIO_QUEUE_EVENT
[2022-02-22T15:21:38Z DEBUG virtiofsd::server] Received request: 1
[2022-02-22T15:21:38Z DEBUG virtiofsd::server] Replying ERROR, header: OutHeader { len: 16, error: -2, unique: 9 }
[2022-02-22T15:21:38Z DEBUG virtiofsd] HIPRIO_QUEUE_EVENT
[2022-02-22T15:21:38Z DEBUG virtiofsd::server] Received request: 14
[2022-02-22T15:21:38Z DEBUG virtiofsd::server] Replying ERROR, header: OutHeader { len: 16, error: -9, unique: 10 }
[2022-02-22T15:21:38Z DEBUG virtiofsd] HIPRIO_QUEUE_EVENT
[2022-02-22T15:21:38Z DEBUG virtiofsd::server] Received request: 18
[2022-02-22T15:21:38Z DEBUG virtiofsd::server] Replying ERROR, header: OutHeader { len: 16, error: -9, unique: 11 }
[2022-02-22T15:21:38Z DEBUG virtiofsd] HIPRIO_QUEUE_EVENT
[2022-02-22T15:21:38Z DEBUG virtiofsd::server] Received request: 1
[2022-02-22T15:21:38Z DEBUG virtiofsd::server] Replying ERROR, header: OutHeader { len: 16, error: -2, unique: 12 }
[2022-02-22T15:21:38Z DEBUG virtiofsd] HIPRIO_QUEUE_EVENT
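
For readers decoding the log above, here is a quick sketch. It assumes the request numbers follow the standard FUSE opcode numbering from linux/fuse.h and that the negative error values are negated errnos; neither assumption is stated in this bug.

fn opcode_name(op: u32) -> &'static str {
    match op {
        1 => "FUSE_LOOKUP",
        14 => "FUSE_OPEN",
        17 => "FUSE_STATFS",
        18 => "FUSE_RELEASE",
        26 => "FUSE_INIT",
        _ => "other",
    }
}

fn main() {
    // Under those assumptions, "Received request: 1 ... error: -2" is a FUSE_LOOKUP
    // failing with -ENOENT, and "Received request: 14 ... error: -9" is a FUSE_OPEN
    // failing with -EBADF.
    for (op, err) in [(26i32, 0i32), (17, 0), (1, -2), (14, -9), (18, -9)] {
        println!("{} -> error {}", opcode_name(op as u32), err);
    }
}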

The virtiofs.exe log is in the attachment.

Expected results:
The shared directory works as expected.

Additional info:

Comment 5 xiagao 2022-03-07 02:59:02 UTC
I'm setting this as a test blocker, as it blocks functional testing on Windows guests with the new virtiofsd.

Thanks,
Xiaoling

Comment 9 Sergio Lopez 2022-03-10 07:03:00 UTC
I've tested the Windows driver with virtiofsd (Rust) and identified the following issues:

1. The Windows driver relies on the "len" field of used descriptors. As the Linux driver doesn't, virtiofsd doesn't bother to set it up. This needs to be fixed in virtiofsd (see the sketch after this list).

2. The Windows driver, under some circumstances, calls to OPENDIR with O_RDWR. This is invalid and should be fixed in the Windows driver. To speed things up, I can make a downstream-only patch to work around this issue in virtiofsd.

3. The Windows driver pushes all requests through HIPRIO_QUEUE. This queue should only be used for FUSE_INTERRUPT, FUSE_FORGET, and FUSE_BATCH_FORGET. While this works with the current (v1.1.0) version of virtiofsd, it will not work with future versions. This needs to be fixed in the Windows driver.
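
To illustrate issue (1), a minimal, self-contained sketch of the idea, not virtiofsd's actual code: VirtqUsedElem mirrors the used-ring element from the virtio spec, and complete_request is a hypothetical helper showing where the "len" value has to come from.

#[repr(C)]
#[derive(Debug)]
struct VirtqUsedElem {
    id: u32,   // index of the head descriptor of the chain
    len: u32,  // total bytes the device wrote into the chain's writable buffers
}

// Hypothetical helper: copy a FUSE reply into the device-writable buffer and
// report how many bytes were written, so a guest driver that trusts "len"
// (as the Windows one does) sees the real reply size instead of 0.
fn complete_request(head_index: u16, reply: &[u8], writable: &mut [u8]) -> VirtqUsedElem {
    let written = reply.len().min(writable.len());
    writable[..written].copy_from_slice(&reply[..written]);
    VirtqUsedElem { id: head_index as u32, len: written as u32 }
}

fn main() {
    let mut buf = [0u8; 96];
    let elem = complete_request(0, &[0u8; 80], &mut buf);
    println!("{:?}", elem); // VirtqUsedElem { id: 0, len: 80 }
}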

I suggest the following Action Plan:

1. Reassign this BZ (or a clone of it) to virtiofsd so I can make a new release of virtiofsd with a fix for (1) and a workaround for (2).

2. Create BZs for virtio-win to address (2) and (3), targeting RHEL 9.1.

BTW, there's no need to pass "--no-killpriv-v2" to the daemon. If the client is found to not support KILLPRIV_V2, the feature is automatically disabled. The message in the log is just a warning.

Sergio.
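
A minimal sketch of the negotiation behaviour described in the last paragraph above, assuming a FUSE_INIT-style flags handshake; the flag bit is taken from linux/fuse.h and the negotiate function is a hypothetical stand-in, not virtiofsd's actual code.

const FUSE_HANDLE_KILLPRIV_V2: u64 = 1 << 28; // flag bit as defined in linux/fuse.h

// The daemon only enables KILLPRIV_V2 if the client advertised it in FUSE_INIT;
// otherwise the feature is dropped and only a warning is logged, which is why
// passing --no-killpriv-v2 is unnecessary.
fn negotiate(client_flags: u64, daemon_wants: u64) -> u64 {
    let enabled = client_flags & daemon_wants;
    if daemon_wants & FUSE_HANDLE_KILLPRIV_V2 != 0
        && enabled & FUSE_HANDLE_KILLPRIV_V2 == 0
    {
        eprintln!("warning: client does not support KILLPRIV_V2, disabling it");
    }
    enabled
}

fn main() {
    // A client (e.g. a Windows guest) that does not advertise KILLPRIV_V2:
    let negotiated = negotiate(0, FUSE_HANDLE_KILLPRIV_V2);
    assert_eq!(negotiated & FUSE_HANDLE_KILLPRIV_V2, 0);
}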

Comment 10 xiagao 2022-03-10 07:42:50 UTC
(In reply to Sergio Lopez from comment #9)
> I've tested the Windows driver with virtiofsd (Rust) and identified the
> following issues:
> 
> 1. The Windows driver relies on the "len" field of used descriptors. As the
> Linux driver doesn't, virtiofsd doesn't bother to set it up. This needs to
> be fixed in virtiofsd.
> 
> 2. The Windows driver, under some circumstances, calls to OPENDIR with
> O_RDWR. This is invalid and should be fixed in the Windows driver. To speed
> things up, I can make a downstream-only patch to work around this issue in
> virtiofsd.
> 
> 3. The Windows driver pushes all requests through HIPRIO_QUEUE. This queue
> should only be used for FUSE_INTERRUPT, FUSE_FORGET, and FUSE_BATCH_FORGET.
> While this works with the current (v1.1.0) version of virtiofsd, it will not
> work with future versions. This needs to be fixed in the Windows driver.
> 
> I suggest the following Action Plan:
> 
> 1. Reassign this BZ (or a clone of it) to virtiofsd so I can make a new
> release of virtiofsd with a fix for (1) and a workaround for (2).

Thanks Sergio.
I changed the component of this BZ to virtiofsd.


> 
> 2. Create BZs for virtio-win to address (2) and (3), targeting RHEL 9.1.

Cloned a new BZ for virtio-win:
https://bugzilla.redhat.com/show_bug.cgi?id=2062572 


> 
> BTW, there's no need to pass "--no-killpriv-v2" to the daemon. If the client
> is found to not support KILLPRIV_V2, the feature is automatically disabled.
> The message in the log is just a warning.
> 
> Sergio.

Comment 11 Sergio Lopez 2022-03-10 11:31:59 UTC
PR created upstream:

https://gitlab.com/virtio-fs/virtiofsd/-/merge_requests/101

Comment 16 xiagao 2022-03-21 08:24:38 UTC
(In reply to Sergio Lopez from comment #11)
> PR created upstream:
> 
> https://gitlab.com/virtio-fs/virtiofsd/-/merge_requests/101

Hi Sergio, may I ask about the progress of this patch?
Thank you.

Comment 17 Sergio Lopez 2022-03-21 08:34:05 UTC
(In reply to xiagao from comment #16)
> (In reply to Sergio Lopez from comment #11)
> > PR created upstream:
> > 
> > https://gitlab.com/virtio-fs/virtiofsd/-/merge_requests/101
> 
> Hi Sergio, may I ask about the progress of this patch?
> Thank you.

Hi, the patch has been merged upstream:

https://gitlab.com/virtio-fs/virtiofsd/-/commit/7975bc32c637a4d471d8fefffd132eb3e63f3aba

Comment 21 xiagao 2022-03-23 12:58:52 UTC
Ran a test loop on win2019; the results are good.

pkg:
virtiofsd-1.1.0-4.el9_0.x86_64
kernel-5.14.0-70.el9.x86_64
qemu-kvm-6.2.0-12.el9.x86_64
virtio-win-prewhql-217

Comment 27 errata-xmlrpc 2022-05-17 15:35:09 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (new packages: virtiofsd), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:3888

