Bug 2018072 - [virtiofs]NFS/xfstests generic/650 failure -.nfs000000003005d939002af3d4': Device or resource busy
Summary: [virtiofs]NFS/xfstests generic/650 failure -.nfs000000003005d939002af3d4': De...
Keywords:
Status: CLOSED CANTFIX
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: qemu-kvm
Version: 9.0
Hardware: All
OS: Linux
Priority: medium
Severity: low
Target Milestone: rc
Target Release: ---
Assignee: German Maglione
QA Contact: xiagao
URL:
Whiteboard:
Depends On:
Blocks: 1948374
 
Reported: 2021-10-28 07:35 UTC by xiagao
Modified: 2022-10-12 08:12 UTC (History)
CC List: 11 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-10-11 09:29:00 UTC
Type: ---
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Gitlab virtio-fs virtiofsd issues 61 0 None opened [tracking] Underlay NFS filesystem and silly rename 2022-10-11 09:24:45 UTC
Red Hat Issue Tracker RHELPLAN-100904 0 None None None 2021-10-28 07:39:28 UTC

Description xiagao 2021-10-28 07:35:51 UTC
Description of problem:
When I run the generic/650 test on virtiofs (NFS backend), I hit the error below:
[root@bootp-73-75-110 xfstests-dev]# date;./check -virtiofs generic/650;date
Thu Oct 28 14:13:08 CST 2021
FSTYP         -- virtiofs
PLATFORM      -- Linux/x86_64 bootp-73-75-110 5.14.0-8.el9.x86_64 #1 SMP Tue Oct 19 03:58:57 EDT 2021
MKFS_OPTIONS  -- myfs2
MOUNT_OPTIONS -- -o context=system_u:object_r:root_t:s0 myfs2 /mnt/myfs2

generic/650	
- output mismatch (see /home/xfstests-dev/results//generic/650.out.bad)
    --- tests/generic/650.out	2021-10-27 11:50:52.593093368 +0800
    +++ /home/xfstests-dev/results//generic/650.out.bad	2021-10-28 14:54:09.317778271 +0800
    @@ -1,2 +1,1597 @@
     QA output created by 650
     Silence is golden.
    +rm: cannot remove '/mnt/myfs1/650/p0/d5/d17/d36/d93/d21/d122/d7db/dc68/d225': Directory not empty
    +rm: cannot remove '/mnt/myfs1/650/p0/d5/d17/d36/d93/d21/d122/d7db/dc68/d333/d89b/d1c8/d2cc/d494/daba': Directory not empty
    +rm: cannot remove '/mnt/myfs1/650/p0/d5/d17/d36/d93/d21/d122/d7db/dc68/d333/d89b/d1c8/.nfs000000003005d939002af3d4': Device or resource busy
    +rm: cannot remove '/mnt/myfs1/650/p0/d5/d17/d36/d93/d21/d122/d7db/dc68/d333/d89b/d8ab/dc4e': Directory not empty
    +rm: cannot remove '/mnt/myfs1/650/p0/d5/d17/d36/d93/d21/d122/d7db/dc68/d333/d89b/d8ab/d1457': Directory not empty
    ...
    (Run 'diff -u /home/xfstests-dev/tests/generic/650.out /home/xfstests-dev/results//generic/650.out.bad'  to see the entire diff)
Ran: generic/650
Failures: generic/650
Failed 1 of 1 tests
Thu Oct 28 14:54:09 CST 2021


Version-Release number of selected component (if applicable):
kernel-5.14.0-8.el9.x86_64(guest)
kernel-5.14.0-5.el9.x86_64(host)
qemu-kvm-6.1.0-6.el9.x86_64
qemu-virtiofsd-6.1.0-6.el9.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Set up a local NFS export and mount it:
127.0.0.1:/home/virtio_fs1_test_nfs on /home/virtio_fs1_test type nfs4 (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=127.0.0.1,local_lock=none,addr=127.0.0.1)
127.0.0.1:/home/virtio_fs2_test_nfs on /home/virtio_fs2_test type nfs4 (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=127.0.0.1,local_lock=none,addr=127.0.0.1)
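The mount output above implies an NFS server on the host exporting the two directories. A hypothetical sketch of step 1 (the export paths come from the mount output above; the export options and server setup commands are assumptions, not shown in this report):

```shell
# Export two directories over NFS on the host and loopback-mount them.
# Run as root; export options are illustrative assumptions.
mkdir -p /home/virtio_fs1_test_nfs /home/virtio_fs2_test_nfs
mkdir -p /home/virtio_fs1_test /home/virtio_fs2_test
cat >> /etc/exports <<'EOF'
/home/virtio_fs1_test_nfs 127.0.0.1(rw,no_root_squash)
/home/virtio_fs2_test_nfs 127.0.0.1(rw,no_root_squash)
EOF
systemctl enable --now nfs-server
exportfs -ra
mount -t nfs4 -o vers=4.2 127.0.0.1:/home/virtio_fs1_test_nfs /home/virtio_fs1_test
mount -t nfs4 -o vers=4.2 127.0.0.1:/home/virtio_fs2_test_nfs /home/virtio_fs2_test
```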

2. Boot the guest with virtiofs:
    -m 32768 \
    -object memory-backend-file,size=32G,mem-path=/dev/shm,share=yes,id=mem-mem1  \
    -smp 24,maxcpus=24,cores=12,threads=1,dies=1,sockets=2  \
    -numa node,memdev=mem-mem1,nodeid=0  \
    -chardev socket,id=char_virtiofs_fs1,path=/tmp/sock1 \
    -device vhost-user-fs-pci,id=vufs_virtiofs_fs1,chardev=char_virtiofs_fs1,tag=myfs1,queue-size=1024,bus=pcie.0,addr=0x3 \
    -chardev socket,id=char_virtiofs_fs2,path=/tmp/sock2 \
    -device vhost-user-fs-pci,id=vufs_virtiofs_fs2,chardev=char_virtiofs_fs2,tag=myfs2,queue-size=1024,bus=pcie.0,addr=0x4 \
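The QEMU options above connect to virtiofsd sockets at /tmp/sock1 and /tmp/sock2, but the daemon command lines are not shown in this report. A hedged sketch of what they might look like (source directories taken from step 1; the cache option and daemon path are assumptions):

```shell
# Start one virtiofsd per shared directory before launching QEMU; each
# daemon serves one NFS-backed directory over its vhost-user socket.
/usr/libexec/virtiofsd --socket-path=/tmp/sock1 \
    -o source=/home/virtio_fs1_test -o cache=auto &
/usr/libexec/virtiofsd --socket-path=/tmp/sock2 \
    -o source=/home/virtio_fs2_test -o cache=auto &
```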

3. Prepare the xfstests environment:
  # cd /home && rm -rf xfstests-dev && git clone https://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git

  # yum install -y git acl attr automake bc e2fsprogs fio gawk gcc libtool lvm2 make psmisc quota sed xfsdump xfsprogs libacl-devel libattr-devel libaio-devel libuuid-devel xfsprogs-devel python3 sqlite

  # cd /home/xfstests-dev/ && make -j && make install

  # export TEST_DEV=myfs1 && export TEST_DIR=/mnt/myfs1 && export SCRATCH_DEV=myfs2 && export SCRATCH_MNT=/mnt/myfs2 && export FSTYP=virtiofs && export FSX_AVOID="-E"

  # useradd fsgqa && useradd 123456-fsgqa && useradd fsgqa2

4. submit generic/006 test
date;./check -virtiofs generic/650 ;date
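Before running `check`, the virtiofs tags have to be mounted in the guest at the TEST_DIR and SCRATCH_MNT paths set in step 3. The report does not show this mount step; a sketch of what it would look like (tag names come from the vhost-user-fs-pci devices in step 2):

```shell
# Inside the guest, as root: mount the virtiofs tags exported by the host.
mkdir -p /mnt/myfs1 /mnt/myfs2
mount -t virtiofs myfs1 /mnt/myfs1
mount -t virtiofs myfs2 /mnt/myfs2
```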


Actual results:
(Same output as shown in the Description of problem above.)

Expected results:
The test case should pass.

Additional info:

Comment 1 Klaus Heinrich Kiwi 2021-10-28 11:51:54 UTC
German,

I'll 'soft-assign' this to you, but it's medium priority, so take your time exploring and learning.

Hanna is cc'ed FYI only.

Comment 2 xiagao 2021-11-02 01:35:29 UTC
Also hit this issue on RHEL 8.6.0 (host + guest).
pkg:
qemu-kvm-6.1.0-4.module+el8.6.0+13039+4b81a1dc.x86_64
kernel-4.18.0-348.4.el8.x86_64(host and guest)

Comment 3 German Maglione 2021-11-11 13:36:15 UTC
I was not able to reproduce the error:

host: Fedora 34
    Linux fedora 5.14.13-200.fc34.x86_64 #1 SMP Mon Oct 18 12:39:31 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

guest: Fedora 35 Server
    Linux ibm-p8-kvm-03-guest-02.virt.pnr.lab.eng.rdu2.redhat.com 5.14.10-300.fc35.x86_64 #1 SMP Thu Oct 7 20:48:44 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

(I had to increase the limit of open files in both the guest and the host to avoid "Too many open files" error)

with both:
    QEMU emulator version 6.1.50 (v6.1.0-1605-gafc9fcde55)
    QEMU emulator version 5.2.0 (qemu-5.2.0-8.fc34)

qemu options:    
    -blockdev file,node-name=hdd,filename=fedora-srv.raw \
    -device virtio-blk,drive=hdd \
    -accel kvm -m 4G \
    -chardev socket,id=char_virtiofs_fs1,path=socket_path1 \
    -device vhost-user-fs-pci,id=vufs_virtiofs_fs1,chardev=char_virtiofs_fs1,tag=myfs1,queue-size=1024 \
    -chardev socket,id=char_virtiofs_fs2,path=socket_path2 \
    -device vhost-user-fs-pci,id=vufs_virtiofs_fs2,chardev=char_virtiofs_fs2,tag=myfs2,queue-size=1024 \
    -object memory-backend-file,id=mem,size=4G,mem-path=/dev/shm,share=on \
    -numa node,memdev=mem
    

using virtiofsd-rs: https://gitlab.com/virtio-fs/virtiofsd-rs
with combinations of the following parameters (with and without each):
    --cache always --xattr --announce-submounts --inode-file-handles
    and with both --sandbox namespace and --sandbox none

Results:
FSTYP         -- virtiofs
PLATFORM      -- Linux/x86_64 ibm-p8-kvm-03-guest-02 5.14.10-300.fc35.x86_64 #1 SMP Thu Oct 7 20:48:44 UTC 2021
MKFS_OPTIONS  -- myfs1
MOUNT_OPTIONS -- myfs1 /mnt/myfs1

generic/006 6s ...  6s
Ran: generic/006
Passed all 1 tests

(the time varies between 6 and 7 s)

Xiagao, could you re-run your tests using virtiofsd-rs? https://gitlab.com/virtio-fs/virtiofsd-rs

Comment 7 Hanna Czenczek 2021-11-16 17:01:48 UTC
Hi,

I took a look and here’s what I found:

First, as shown by the `date` invocations, the reproducer takes very long.  In comment 6, it takes over one hour, in comment 0 it takes 40 minutes.  I find it’s reproducible much more quickly by drastically reducing the number of virtual cores from 24 to 2, by using `-smp 2,maxcpus=2,cores=1,threads=1,dies=1,sockets=2`.

Second, the reproducing steps in comment 0 say to run `./check -virtiofs generic/006` (step 4), but this BZ is actually about generic/650, as shown under “Actual results”.  Consequentially, German tested generic/006 in comment 3, but it should be generic/650.

Third, I can reproduce the problem (as stated above already with two CPUs on two sockets with one core each), and I believe it’s NFS-related.  I remember that NFS creates temporary lock files for each open FD, and these cannot be deleted, hence the failure output.  I’d have to dig what our approach was to this, but AFAIR we treated it mostly as an unfortunate but not directly fixable problem.

However, this problem can be circumvented by not opening FDs for lookups, and that’s precisely what the `--inode-file-handles` option of virtiofsd-rs is for.  With this option, the test does indeed pass for me.  It fails without this option.  I don’t know why the problem persists in comment 6 despite passing the option, but it’s notable that it’s a best-effort option: If file handles don’t work for a filesystem, then we fall back to using FDs for lookups.  I don’t know why file handles wouldn’t work here, though.  Perhaps it’s the underlying filesystem on the NFS host that doesn’t support file handles, but I was under the impression that NFS requires file handles – I may well be wrong, though.

Vivek once suggested that with `--inode-file-handles` we should verify whether the shared directory’s root filesystem supports file handles and return an error if it doesn’t.  I’m not sure I agree because it still wouldn’t fix the problem; virtiofsd-rs would just refuse to start up, and then you’d drop the option, and then the problem of these NFS hidden files would persist.

Hanna

Comment 8 xiagao 2021-11-17 02:51:15 UTC
(In reply to Hanna Reitz from comment #7)
> Hi,
> 
> I took a look and here’s what I found:
> 
> First, as shown by the `date` invocations, the reproducer takes very long. 
> In comment 6, it takes over one hour, in comment 0 it takes 40 minutes.  I
> find it’s reproducible much more quickly by drastically reducing the number
> of virtual cores from 24 to 2, by using `-smp
> 2,maxcpus=2,cores=1,threads=1,dies=1,sockets=2`.

Yes, generic/650 tests vcpu hotplugging, so the running time depends on
the number of vcpus.
This is the test case description:
# Run an all-writes fsstress run with multiple threads while exercising CPU
# hotplugging to shake out bugs in the write path.


> 
> Second, the reproducing steps in comment 0 say to run `./check -virtiofs
> generic/006` (step 4), but this BZ is actually about generic/650, as shown
> under “Actual results”.  Consequentially, German tested generic/006 in
> comment 3, but it should be generic/650.
That's my mistake; I'm sorry, I pasted the wrong test case ID. It should be generic/650.

> 
> Third, I can reproduce the problem (as stated above already with two CPUs on
> two sockets with one core each), and I believe it’s NFS-related.  I remember
> that NFS creates temporary lock files for each open FD, and these cannot be
> deleted, hence the failure output.  I’d have to dig what our approach was to
> this, but AFAIR we treated it mostly as an unfortunate but not directly
> fixable problem.
> 
> However, this problem can be circumvented by not opening FDs for lookups,
> and that’s precisely what the `--inode-file-handles` option of virtiofsd-rs
> is for.  With this option, the test does indeed pass for me.  It fails
> without this option.  I don’t know why the problem persists in comment 6
> despite passing the option, but it’s notable that it’s a best-effort option:
> If file handles don’t work for a filesystem, then we fall back to using FDs
> for lookups.  I don’t know why file handles wouldn’t work here, though. 
> Perhaps it’s the underlying filesystem on the NFS host that doesn’t support
> file handles, but I was under the impression that NFS requires file handles
> – I may well be wrong, though.

I tested it again according to comment 6, and it passed.
[root@bootp-73-75-204 xfstests-dev]# date; ./check -virtiofs generic/650; date
Wed Nov 17 10:37:31 AM CST 2021
FSTYP         -- virtiofs
PLATFORM      -- Linux/x86_64 bootp-73-75-204 5.14.0-13.el9.x86_64 #1 SMP Mon Nov 8 20:07:06 EST 2021
MKFS_OPTIONS  -- myfs2
MOUNT_OPTIONS -- -o context=system_u:object_r:root_t:s0 myfs2 /mnt/myfs2

generic/650 192s ...  190s
Ran: generic/650
Passed all 1 tests

Wed Nov 17 10:40:41 AM CST 2021


Thanks.

> 
> Vivek once suggested that with `--inode-file-handles` we should verify
> whether the shared directory’s root filesystem supports file handles and
> return an error if it doesn’t.  I’m not sure I agree because it still
> wouldn’t fix the problem; virtiofsd-rs would just refuse to start up, and
> then you’d drop the option, and then the problem of these NFS hidden files
> would persist.
> 
> Hanna

Comment 9 German Maglione 2021-11-17 14:42:18 UTC
(In reply to Hanna Reitz from comment #7)
> Third, I can reproduce the problem (as stated above already with two CPUs on
> two sockets with one core each), and I believe it’s NFS-related.  I remember
> that NFS creates temporary lock files for each open FD, and these cannot be
> deleted, hence the failure output.  I’d have to dig what our approach was to
> this, but AFAIR we treated it mostly as an unfortunate but not directly
> fixable problem.

Yes, you are right (I don't know how I missed those .nfsXXXX files); it is an issue with the NFS client.
This happens when an open file is unlinked; it is called a "silly rename"
(I thought it was resolved in NFSv4).
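The POSIX semantics that force NFS into silly renames can be contrasted with a local filesystem in a small shell sketch (an illustration, not taken from this report): a local filesystem keeps an unlinked-but-open file's data alive until the last fd closes, with no on-disk placeholder, while an NFS client has to emulate this by renaming the file to a hidden .nfsXXXX entry — exactly the files that make the `rm -r` in generic/650 fail with "Device or resource busy".

```shell
# On a local filesystem, unlinking an open file leaves no placeholder
# behind, and the directory can be removed immediately afterwards.
tmpdir=$(mktemp -d)
echo "still readable" > "$tmpdir/f"
exec 3< "$tmpdir/f"          # keep the file open on fd 3
rm "$tmpdir/f"               # unlink while open: fine on a local fs
leftover=$(ls -A "$tmpdir")  # empty -- no .nfsXXXX placeholder here
content=$(cat <&3)           # the open fd still reads the data
exec 3<&-
rmdir "$tmpdir"              # succeeds: nothing blocks the directory
```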

Comment 10 Vivek Goyal 2022-01-04 20:21:42 UTC
The following is another open bug that I think is similar. They are probably all related to NFS creating temporary .nfsXXXX files and the unlink races that come with them.

https://bugzilla.redhat.com/show_bug.cgi?id=1908490

For the time being, I think the issue can be mitigated by using file handles (--inode-file-handles) in the Rust version of virtiofsd.

Longer term, we need to look into the idea of synchronous forgets (I think David Gilbert suggested that). Somebody needs to dive deeper and see whether that is feasible.

Comment 11 Vivek Goyal 2022-03-16 12:20:28 UTC
Ideally, NFS would fix this and move away from these ".nfsXXXX" files; that would resolve the issue.

Comment 12 xiagao 2022-09-08 07:09:06 UTC
Hi Vivek, do we have any plan for this bug?

Comment 13 xiagao 2022-09-08 09:07:40 UTC
Also, I just tested it with --inode-file-handles, and the result is a pass.

2022-09-08 03:53:38: generic/650
2022-09-08 04:36:30:  2572s

virtiofsd cmd:
/usr/libexec/virtiofsd --socket-path=/var/tmp/avocado_4psduw_6/avocado-vt-vm1-fs2-virtiofsd.sock -o source=/tmp/virtio_fs2_test --no-killpriv-v2 -o cache=auto --inode-file-handles=prefer


virtiofsd version:
virtiofsd-1.4.0-1.el9.x86_64

Comment 14 Vivek Goyal 2022-09-20 19:37:58 UTC
(In reply to xiagao from comment #12)
> Hi Vivek, do we have any plan for this bug?

No updates on this. I don't think NFS has fixed this issue. So what we have currently is just the workaround of using --inode-file-handles.

BTW, I noticed that in another bug it was claimed that tests passed on NFSv4 (but failed on NFSv3). Does that mean the issue does not happen on NFSv4 and is limited to NFSv3? That sounds kind of strange.

Comment 15 xiagao 2022-09-21 07:39:00 UTC
(In reply to Vivek Goyal from comment #14)
> (In reply to xiagao from comment #12)
> > Hi Vivek, do we have any plan for this bug?
> 
> No updates on this. I don't think NFS has fixed this issue. So what we have
> currently is just the workaround of using --inode-file-handles.
Yes, I just ran a test with and without "--inode-file-handles"; the results show that it passes with "--inode-file-handles" and fails without it.

> 
> BTW, I noticed that in another bug, it was claimed that tests passed on
> nfsv4 (but failed on nfsv3). Does that mean issue does not happen on nfsv4
> and is limited to nfsv3 only? That kind of sounds strange.
I thought you meant bz1908490.
From my recent test results, generic/245 always passed on RHEL 9 on top of NFS version 4.2, and when I just tried it on top of NFS version 3, it also passed!

my nfs mount info:
127.0.0.1:/mnt/virtio_fs1_test_nfs on /tmp/virtio_fs1_test type nfs (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=127.0.0.1,mountvers=3,mountport=20048,mountproto=udp,local_lock=none,addr=127.0.0.1)
127.0.0.1:/mnt/virtio_fs2_test_nfs on /tmp/virtio_fs2_test type nfs (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=127.0.0.1,mountvers=3,mountport=20048,mountproto=udp,local_lock=none,addr=127.0.0.1)

xfstest generic/245 results: Pass

pkg:
KVM version: qemu-kvm-7.1.0-1.el9.x86_64
kernel version: 5.14.0-163.el9.x86_64(host && guest)

Comment 16 German Maglione 2022-09-28 13:05:58 UTC
(In reply to xiagao from comment #15)
> (In reply to Vivek Goyal from comment #14)
> > (In reply to xiagao from comment #12)
> > > Hi Vivek, do we have any plan for this bug?
> > 
> > No updates on this. I don't think NFS has fixed this issue. So what we have
> > currently is just the workaround of using --inode-file-handles.
> Yes, I just have a test with and without "--inode-file-handles", the results
> show that it can pass with "--inode-file-handles" and will fail without this
> option.
> 
> > 
> > BTW, I noticed that in another bug, it was claimed that tests passed on
> > nfsv4 (but failed on nfsv3). Does that mean issue does not happen on nfsv4
> > and is limited to nfsv3 only? That kind of sounds strange.

Indeed, silly rename is still present in the latest NFS.
The required option (OPEN4_RESULT_PRESERVE_UNLINKED) was added to the NFSv4.1 spec,
but it is currently not implemented. I checked the kernel source code (and tested, just in case).


> I thought you mean bz1908490.
> From my test results recently, generic/245 always passed on RHEL9 on top of
> nfs version4.2,and I just try it on top of nfs version3, it also passed!
> 
> my nfs mount info:
> 127.0.0.1:/mnt/virtio_fs1_test_nfs on /tmp/virtio_fs1_test type nfs
> (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,
> timeo=600,retrans=2,sec=sys,mountaddr=127.0.0.1,mountvers=3,mountport=20048,
> mountproto=udp,local_lock=none,addr=127.0.0.1)
> 127.0.0.1:/mnt/virtio_fs2_test_nfs on /tmp/virtio_fs2_test type nfs
> (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,
> timeo=600,retrans=2,sec=sys,mountaddr=127.0.0.1,mountvers=3,mountport=20048,
> mountproto=udp,local_lock=none,addr=127.0.0.1)
> 
> xfstest generic/245 results: Pass
> 
> pkg:
> KVM version: qemu-kvm-7.1.0-1.el9.x86_64
> kernel version: 5.14.0-163.el9.x86_64(host && guest)

Comment 17 German Maglione 2022-10-11 09:28:31 UTC
This cannot be solved in virtiofsd. The solution is either:

- the NFS server implementing OPEN4_RESULT_PRESERVE_UNLINKED or
  server-side silly rename, or

- adding FUSE support for synchronous forgets.

The current workaround is to run virtiofsd with `--inode-file-handles=mandatory`,
but this has its own limitations, for instance requiring `CAP_DAC_READ_SEARCH`.

I have opened an upstream issue to keep track of this bug.  
https://gitlab.com/virtio-fs/virtiofsd/-/issues/61  

More info about silly rename:
- [Server-side silly rename](https://linux-nfs.org/wiki/index.php/Server-side_silly_rename)
- [Linux NFS FAQ](https://nfs.sourceforge.net/)
  D2. What is a "silly rename"? Why do these .nfsXXXXX files keep showing up?

