Bug 2217658 - NFSv4.0 client hangs when server reboot while client had outstanding lock request to the server
Summary: NFSv4.0 client hangs when server reboot while client had outstanding lock req...
Keywords:
Status: VERIFIED
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: kernel
Version: 8.8
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Assignee: Benjamin Coddington
QA Contact: Zhi Li
URL:
Whiteboard:
Depends On:
Blocks:
TreeView+ depends on / blocked
 
Reported: 2023-06-26 21:08 UTC by Olga Kornieskaia
Modified: 2023-08-10 09:48 UTC (History)
7 users (show)

Fixed In Version: kernel-4.18.0-507.el8
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 2217659 (view as bug list)
Environment:
Last Closed:
Type: Bug
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)


Links
System ID Private Priority Status Summary Last Updated
Gitlab redhat/rhel/src/kernel rhel-8 merge_requests 5114 0 None None None 2023-07-28 11:23:06 UTC
Red Hat Issue Tracker RHELPLAN-160838 0 None None None 2023-06-26 21:09:08 UTC

Description Olga Kornieskaia 2023-06-26 21:08:33 UTC
Description of problem:

Netapp QA discovered hung clients while testing NFSv4.0 mounts and server reboots under an application load that opened, locked, and did IO to the files.

Specific steps are:
1. Client sends a LOCK request to the server but hasn't gotten a reply yet.
2. Server reboots.
3. Once the server starts responding with errors indicating that it rebooted (e.g., RENEW gets NFS4ERR_STALE_CLIENTID), the LOCK is resent and gets NFS4ERR_STALE_STATEID.

The client then hangs. Specifically, two threads deadlock: (1) the state manager thread doing open state recovery waits forever to take the open seqid lock, and (2) the lock thread holds the seqid lock while waiting for recovery to complete.
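The circular wait described above can be modeled with a minimal Python sketch (hypothetical stand-ins, not the kernel code): one lock represents the NFSv4 open seqid lock, and an event represents "state recovery finished". The flock thread holds the lock while waiting on recovery, so the state manager can never acquire it and recovery never finishes.

```python
import threading

# Hypothetical model of the deadlock; names are illustrative, not kernel code.
seqid_lock = threading.Lock()        # stands in for the NFSv4 open seqid lock
recovery_done = threading.Event()    # stands in for "state recovery finished"

# The flock thread: takes the seqid lock, then waits for recovery to complete.
seqid_lock.acquire()

# The state manager thread: needs the seqid lock to reclaim open state,
# but cannot get it, so recovery_done is never set -> circular wait.
got_it = seqid_lock.acquire(blocking=False)
print("state manager got seqid lock:", got_it)
print("recovery finished:", recovery_done.is_set())
```

Neither side can make progress, which matches the two D-state stacks below: both threads are parked in rpc_wait_for_completion_task.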

This is state of threads during the hang:
Jun 23 14:08:48 localhost kernel: task:flock           state:D stack:0 pid:3223  ppid:3116   flags:0x00000204
Jun 23 14:08:48 localhost kernel: Call trace:
Jun 23 14:08:48 localhost kernel: __switch_to+0xc8/0x110
Jun 23 14:08:48 localhost kernel: __schedule+0x1d8/0x524
Jun 23 14:08:48 localhost kernel: schedule+0x60/0xfc
Jun 23 14:08:48 localhost kernel: rpc_wait_bit_killable+0x1c/0x7c [sunrpc]
Jun 23 14:08:48 localhost kernel: __wait_on_bit+0x54/0x190
Jun 23 14:08:48 localhost kernel: out_of_line_wait_on_bit+0x84/0xb0
Jun 23 14:08:48 localhost kernel: rpc_wait_for_completion_task+0x28/0x30 [sunrpc]
Jun 23 14:08:48 localhost kernel: _nfs4_do_setlk+0x210/0x410 [nfsv4]
Jun 23 14:08:48 localhost kernel: nfs4_proc_setlk+0xcc/0x170 [nfsv4]
Jun 23 14:08:48 localhost kernel: nfs4_retry_setlk+0x188/0x1d0 [nfsv4]
Jun 23 14:08:48 localhost kernel: nfs4_proc_lock+0xa4/0x1dc [nfsv4]
Jun 23 14:08:48 localhost kernel: do_setlk+0x68/0xf0 [nfs]
Jun 23 14:08:48 localhost kernel: nfs_flock+0x6c/0xb4 [nfs]
Jun 23 14:08:48 localhost kernel: __do_sys_flock+0x108/0x1c0
Jun 23 14:08:48 localhost kernel: __arm64_sys_flock+0x20/0x30
Jun 23 14:08:48 localhost kernel: invoke_syscall.constprop.0+0x7c/0xd0
Jun 23 14:08:48 localhost kernel: el0_svc_common.constprop.0+0x144/0x160
Jun 23 14:08:48 localhost kernel: do_el0_svc+0x2c/0xc0
Jun 23 14:08:48 localhost kernel: el0_svc+0x3c/0x1a0
Jun 23 14:08:48 localhost kernel: el0t_64_sync_handler+0xb4/0x130
Jun 23 14:08:48 localhost kernel: el0t_64_sync+0x174/0x178
Jun 23 14:08:48 localhost kernel: task:192.168.1.106-m state:D stack:0 pid:3225  ppid:2      flags:0x00000208
Jun 23 14:08:48 localhost kernel: Call trace:
Jun 23 14:08:48 localhost kernel: __switch_to+0xc8/0x110
Jun 23 14:08:48 localhost kernel: __schedule+0x1d8/0x524
Jun 23 14:08:48 localhost kernel: schedule+0x60/0xfc
Jun 23 14:08:48 localhost kernel: rpc_wait_bit_killable+0x1c/0x7c [sunrpc]
Jun 23 14:08:48 localhost kernel: __wait_on_bit+0x54/0x190
Jun 23 14:08:48 localhost kernel: out_of_line_wait_on_bit+0x84/0xb0
Jun 23 14:08:48 localhost kernel: rpc_wait_for_completion_task+0x28/0x30 [sunrpc]
Jun 23 14:08:48 localhost kernel: nfs4_run_open_task+0x12c/0x1f0 [nfsv4]
Jun 23 14:08:48 localhost kernel: nfs4_open_recover_helper.part.0+0xa0/0x13c [nfsv4]
Jun 23 14:08:48 localhost kernel: nfs4_open_recover+0x34/0x130 [nfsv4]
Jun 23 14:08:48 localhost kernel: nfs4_do_open_reclaim+0xf4/0x280 [nfsv4]
Jun 23 14:08:48 localhost kernel: nfs4_open_reclaim+0x58/0xe0 [nfsv4]
Jun 23 14:08:48 localhost kernel: __nfs4_reclaim_open_state+0x38/0x158 [nfsv4]
Jun 23 14:08:48 localhost kernel: nfs4_reclaim_open_state+0x114/0x304 [nfsv4]
Jun 23 14:08:48 localhost kernel: nfs4_do_reclaim+0x150/0x25c [nfsv4]
Jun 23 14:08:48 localhost kernel: nfs4_state_manager+0x550/0x884 [nfsv4]
Jun 23 14:08:48 localhost kernel: nfs4_run_state_manager+0xa4/0x1c0 [nfsv4]
Jun 23 14:08:48 localhost kernel: kthread+0xf0/0xf4
Jun 23 14:08:48 localhost kernel: ret_from_fork+0x10/0x20

The problem was introduced by the following commit:
commit f5ea16137a3fa2858620dc9084466491c128535f "NFSv4: Retry LOCK on OLD_STATEID during delegation return"

The solution to the problem is to revert that commit.

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 1 Zhi Li 2023-07-03 03:02:52 UTC
Hi Olga,

Could you give us a reproducer for this? It is really hard
for us to perform these steps based on the abstract description.

Thanks a lot

Comment 2 Olga Kornieskaia 2023-07-03 13:51:37 UTC
(In reply to Zhi Li from comment #1)
> Hi Olga,
> 
> Could you give us a reproducer for this? It is really hard
> for us to perform these steps based on the abstract description.
> 
> Thanks a lot

Client does
mount -o vers=4.0 <server>:/<volume> /mnt
flock -x /mnt/foobar sleep 60

Now you need a way to make the server reboot before replying to the lock operation:
option 1: hack the server to delay the lock reply so that a reboot can be forced
option 2: use an nfs4proxy to delay the LOCK operation (in either direction).
Once the lock reply is delayed, reboot the server.
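The client-side part of the reproducer can be sketched in Python: fcntl.flock on a file under the vers=4.0 mount issues the same LOCK operation as the flock(1) command. The path here is a local placeholder, since the real test needs the NFS mount plus a server whose LOCK reply is delayed.

```python
import fcntl
import os

# Placeholder path: the real reproducer uses a file on the vers=4.0 mount,
# e.g. /mnt/foobar, with the server rebooted while the LOCK is in flight.
path = "/tmp/foobar"
fd = os.open(path, os.O_CREAT | os.O_RDWR, 0o644)

fcntl.flock(fd, fcntl.LOCK_EX)   # exclusive lock, like `flock -x`
# ... the application would hold the lock and do IO here ("sleep 60") ...
fcntl.flock(fd, fcntl.LOCK_UN)
os.close(fd)
print("flock cycle completed")
```

On an affected kernel, the LOCK_EX call is the point that hangs in D state once the server reboots before replying.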

Comment 3 Dave Wysochanski 2023-07-03 14:56:25 UTC
Olga asked for a z-stream for this in an email, because GA cannot be qualified without it.

Comment 4 Jeff Layton 2023-07-05 15:15:41 UTC
I believe this patch is intended to fix the issue:

https://lore.kernel.org/linux-nfs/374ab3fe691e938cda4e239748dcb6b743705a3f.1688221596.git.bcodding@redhat.com/T/#u

Comment 5 Benjamin Coddington 2023-07-11 15:29:27 UTC
(In reply to Jeff Layton from comment #4)
> I believe this patch is intended to fix the issue:
> 
> https://lore.kernel.org/linux-nfs/374ab3fe691e938cda4e239748dcb6b743705a3f.1688221596.git.bcodding@redhat.com/T/#u

Yes!  I am hoping we see that in a -fixes PR for v6.5-rc2, but if the week ends without it I'll set up MRs with just the revert.

Comment 14 Zhi Li 2023-08-10 09:48:36 UTC
VERIFIED this bug according to Comment#13 with SanityOnly.

