Bug 2217659 - NFSv4.0 client hangs when server reboots while client had outstanding lock request to the server
Summary: NFSv4.0 client hangs when server reboots while client had outstanding lock req...
Keywords:
Status: VERIFIED
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: kernel
Version: 9.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Assignee: Benjamin Coddington
QA Contact: Zhi Li
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-06-26 21:09 UTC by Olga Kornieskaia
Modified: 2023-08-14 03:59 UTC (History)
6 users

Fixed In Version: kernel-5.14.0-349.el9
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 2217658
Environment:
Last Closed:
Type: Bug
Target Upstream Version:
Embargoed:




Links
Gitlab redhat/centos-stream/src/kernel centos-stream-9 merge request 2862: Revert "NFSv4: Retry LOCK on OLD_STATEID during delegation return" (opened, last updated 2023-07-28 11:27:25 UTC)
Red Hat Issue Tracker RHELPLAN-160839 (last updated 2023-06-26 21:12:50 UTC)

Description Olga Kornieskaia 2023-06-26 21:09:29 UTC
+++ This bug was initially created as a clone of Bug #2217658 +++

Description of problem:

NetApp QA discovered hung clients while testing NFSv4.0 mounts and server reboots under an application load that opened, locked, and did I/O to the files.

The specific steps are:
1. The client sends a LOCK request to the server but has not yet received a reply.
2. The server reboots.
3. The server starts responding with errors indicating that it rebooted (e.g. RENEW gets NFS4ERR_STALE_CLIENTID); the LOCK is then resent and gets NFS4ERR_STALE_STATEID.

The client then hangs. Specifically, two threads are stuck: (1) one doing open state recovery, trying to get the seqid lock and waiting forever, and (2) the lock thread, which holds the seqid and is waiting for recovery to complete.
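For context, a minimal sketch of the kind of workload that exercises this path is below: it repeatedly flock()s and writes to a file on an NFSv4.0 mount while the server is rebooted out of band. The mount point and file name are hypothetical, and this is not the exact NetApp QA application; with the problematic commit applied, the flock() call can end up stuck in _nfs4_do_setlk() as in the traces below.

#!/usr/bin/env python3
# Hedged reproducer sketch (assumed paths, not the exact NetApp QA workload):
# open, lock, and do I/O to a file on an NFSv4.0 mount while the server reboots.
import fcntl
import os
import time

MOUNT = "/mnt/nfs4"                       # assumed: mounted with -o vers=4.0
PATH = os.path.join(MOUNT, "testfile")    # hypothetical test file

with open(PATH, "a+b") as f:
    while True:
        # If the server reboots while this LOCK is outstanding, affected
        # kernels can leave this thread waiting forever in the setlk path.
        fcntl.flock(f, fcntl.LOCK_EX)
        f.write(b"x" * 4096)
        f.flush()
        os.fsync(f.fileno())
        fcntl.flock(f, fcntl.LOCK_UN)
        time.sleep(0.1)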

This is the state of the threads during the hang:
Jun 23 14:08:48 localhost kernel: task:flock           state:D stack:0 pid:3223  ppid:3116   flags:0x00000204
Jun 23 14:08:48 localhost kernel: Call trace:
Jun 23 14:08:48 localhost kernel: __switch_to+0xc8/0x110
Jun 23 14:08:48 localhost kernel: __schedule+0x1d8/0x524
Jun 23 14:08:48 localhost kernel: schedule+0x60/0xfc
Jun 23 14:08:48 localhost kernel: rpc_wait_bit_killable+0x1c/0x7c [sunrpc]
Jun 23 14:08:48 localhost kernel: __wait_on_bit+0x54/0x190
Jun 23 14:08:48 localhost kernel: out_of_line_wait_on_bit+0x84/0xb0
Jun 23 14:08:48 localhost kernel: rpc_wait_for_completion_task+0x28/0x30 [sunrpc]
Jun 23 14:08:48 localhost kernel: _nfs4_do_setlk+0x210/0x410 [nfsv4]
Jun 23 14:08:48 localhost kernel: nfs4_proc_setlk+0xcc/0x170 [nfsv4]
Jun 23 14:08:48 localhost kernel: nfs4_retry_setlk+0x188/0x1d0 [nfsv4]
Jun 23 14:08:48 localhost kernel: nfs4_proc_lock+0xa4/0x1dc [nfsv4]
Jun 23 14:08:48 localhost kernel: do_setlk+0x68/0xf0 [nfs]
Jun 23 14:08:48 localhost kernel: nfs_flock+0x6c/0xb4 [nfs]
Jun 23 14:08:48 localhost kernel: __do_sys_flock+0x108/0x1c0
Jun 23 14:08:48 localhost kernel: __arm64_sys_flock+0x20/0x30
Jun 23 14:08:48 localhost kernel: invoke_syscall.constprop.0+0x7c/0xd0
Jun 23 14:08:48 localhost kernel: el0_svc_common.constprop.0+0x144/0x160
Jun 23 14:08:48 localhost kernel: do_el0_svc+0x2c/0xc0
Jun 23 14:08:48 localhost kernel: el0_svc+0x3c/0x1a0
Jun 23 14:08:48 localhost kernel: el0t_64_sync_handler+0xb4/0x130
Jun 23 14:08:48 localhost kernel: el0t_64_sync+0x174/0x178
Jun 23 14:08:48 localhost kernel: task:192.168.1.106-m state:D stack:0 pid:3225  ppid:2      flags:0x00000208
Jun 23 14:08:48 localhost kernel: Call trace:
Jun 23 14:08:48 localhost kernel: __switch_to+0xc8/0x110
Jun 23 14:08:48 localhost kernel: __schedule+0x1d8/0x524
Jun 23 14:08:48 localhost kernel: schedule+0x60/0xfc
Jun 23 14:08:48 localhost kernel: rpc_wait_bit_killable+0x1c/0x7c [sunrpc]
Jun 23 14:08:48 localhost kernel: __wait_on_bit+0x54/0x190
Jun 23 14:08:48 localhost kernel: out_of_line_wait_on_bit+0x84/0xb0
Jun 23 14:08:48 localhost kernel: rpc_wait_for_completion_task+0x28/0x30 [sunrpc]
Jun 23 14:08:48 localhost kernel: nfs4_run_open_task+0x12c/0x1f0 [nfsv4]
Jun 23 14:08:48 localhost kernel: nfs4_open_recover_helper.part.0+0xa0/0x13c [nfsv4]
Jun 23 14:08:48 localhost kernel: nfs4_open_recover+0x34/0x130 [nfsv4]
Jun 23 14:08:48 localhost kernel: nfs4_do_open_reclaim+0xf4/0x280 [nfsv4]
Jun 23 14:08:48 localhost kernel: nfs4_open_reclaim+0x58/0xe0 [nfsv4]
Jun 23 14:08:48 localhost kernel: __nfs4_reclaim_open_state+0x38/0x158 [nfsv4]
Jun 23 14:08:48 localhost kernel: nfs4_reclaim_open_state+0x114/0x304 [nfsv4]
Jun 23 14:08:48 localhost kernel: nfs4_do_reclaim+0x150/0x25c [nfsv4]
Jun 23 14:08:48 localhost kernel: nfs4_state_manager+0x550/0x884 [nfsv4]
Jun 23 14:08:48 localhost kernel: nfs4_run_state_manager+0xa4/0x1c0 [nfsv4]
Jun 23 14:08:48 localhost kernel: kthread+0xf0/0xf4
Jun 23 14:08:48 localhost kernel: ret_from_fork+0x10/0x20

The problem was introduced by the following commit:
commit f5ea16137a3fa2858620dc9084466491c128535f "NFSv4: Retry LOCK on OLD_STATEID during delegation return"

The solution to the problem is to revert that commit.

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 1 Dave Wysochanski 2023-07-03 14:56:32 UTC
Olga asked in an email for a z-stream fix for this, because GA cannot be qualified with this bug present.

Comment 14 Zhi Li 2023-08-14 03:59:26 UTC
Moving to VERIFIED according to comment #13, with SanityOnly testing.

