Bug 2217658
| Summary: | NFSv4.0 client hangs when the server reboots while the client has an outstanding lock request to the server | |||
|---|---|---|---|---|
| Product: | Red Hat Enterprise Linux 8 | Reporter: | Olga Kornievskaia <kolga> | |
| Component: | kernel | Assignee: | Benjamin Coddington <bcodding> | |
| kernel sub component: | NFS | QA Contact: | Zhi Li <yieli> | |
| Status: | CLOSED ERRATA | Docs Contact: | ||
| Severity: | unspecified | |||
| Priority: | unspecified | CC: | ajmitchell, bcodding, dwysocha, jlayton, nfs-team, xzhou, yoyang | |
| Version: | 8.8 | Keywords: | Triaged, ZStream | |
| Target Milestone: | rc | Flags: | pm-rhel: mirror+ | |
| Target Release: | --- | |||
| Hardware: | Unspecified | |||
| OS: | Unspecified | |||
| Whiteboard: | ||||
| Fixed In Version: | kernel-4.18.0-507.el8 | Doc Type: | If docs needed, set a value | |
| Doc Text: | Story Points: | --- | ||
| Clone Of: | ||||
| : | 2217659 2237840 | Environment: | |
| Last Closed: | 2023-11-14 15:45:12 UTC | Type: | Bug | |
| Regression: | --- | Mount Type: | --- | |
| Documentation: | --- | CRM: | ||
| Verified Versions: | Category: | --- | ||
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | ||
| Cloudforms Team: | --- | Target Upstream Version: | ||
| Embargoed: | ||||
| Bug Depends On: | ||||
| Bug Blocks: | 2237840 | |||
Hi Olga,

Could you give us a reproducer for this? It is really hard for us to perform these steps based on the abstract description.

Thanks a lot

(In reply to Zhi Li from comment #1)
> Hi Olga,
>
> Could you give us a reproducer on these subjects, it is really hard
> for us to perform these steps based on the abstract description.
>
> Thanks a lot

The client does:

mount -o vers=4.0 <server>:/<volume> /mnt
flock -x /mnt/foobar sleep 60

Now you need a way to make the server reboot before it replies to the LOCK operation:

option 1: hack the server to delay the LOCK so that a reboot can be forced
option 2: use an nfs4proxy to delay the LOCK operation (either or from).

Once a lock reply is delayed, reboot the server. (A hedged shell sketch of these steps follows this comment thread.)

Olga asks in an email for a z-stream for this, since GA cannot be qualified otherwise.

I believe this patch is intended to fix the issue:

https://lore.kernel.org/linux-nfs/374ab3fe691e938cda4e239748dcb6b743705a3f.1688221596.git.bcodding@redhat.com/T/#u

(In reply to Jeff Layton from comment #4)
> I believe this patch is intended to fix the issue:
>
> https://lore.kernel.org/linux-nfs/374ab3fe691e938cda4e239748dcb6b743705a3f.
> 1688221596.git.bcodding/T/#u

Yes! I am hoping we see that in a -fixes PR for v6.5-rc2, but if the week winds out without it I'll set up MRs with just the revert.

VERIFIED this bug according to Comment#13 with sanityonly.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Important: kernel security, bug fix, and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:7077
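For convenience, here is the reproducer from comment #2 collected into a minimal shell sketch. The server name, export path, lock file name, and the mechanism used to delay the LOCK reply are assumptions for illustration; only the mount options and the flock invocation come from the comment above.

```sh
#!/bin/sh
# Hedged reproducer sketch. "nfs-server" and "/export" are hypothetical; the
# server-side delay of the LOCK reply (hacked server or nfs4 proxy) is assumed
# to be arranged separately and is not part of this script.

# NFSv4.0 mount, as in the reported configuration.
mount -o vers=4.0 nfs-server:/export /mnt

# Take an exclusive flock on a test file and hold it for 60 seconds
# (the same flock command as in comment #2), in the background so the
# server can be rebooted while the LOCK request is still outstanding.
touch /mnt/foobar
flock -x /mnt/foobar sleep 60 &

# While the LOCK reply is being delayed on the server, reboot the server.
# After it comes back up, the flock process should hang; the blocked-task
# stacks (as seen in the description below) can be dumped with, e.g.:
#   echo w > /proc/sysrq-trigger && dmesg
```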
Description of problem:

Netapp QA discovered hung clients while testing NFSv4.0 mounts and server reboots under an application load that opened, locked, and did IO to the files. The specific steps are:

1. The client sends a LOCK request to the server but has not yet received a reply.
2. The server reboots.
3. Once the server starts responding with errors indicating that it rebooted (i.e. RENEW gets NFS4ERR_STALE_CLIENTID), the LOCK is resent and gets NFS4ERR_STALE_STATEID.

The client then hangs. Specifically, two threads deadlock: (1) the open state recovery thread waits forever to acquire the seqid, and (2) the lock thread holds the seqid while waiting for recovery to complete.

This is the state of the threads during the hang:

Jun 23 14:08:48 localhost kernel: task:flock state:D stack:0 pid:3223 ppid:3116 flags:0x00000204
Jun 23 14:08:48 localhost kernel: Call trace:
Jun 23 14:08:48 localhost kernel: __switch_to+0xc8/0x110
Jun 23 14:08:48 localhost kernel: __schedule+0x1d8/0x524
Jun 23 14:08:48 localhost kernel: schedule+0x60/0xfc
Jun 23 14:08:48 localhost kernel: rpc_wait_bit_killable+0x1c/0x7c [sunrpc]
Jun 23 14:08:48 localhost kernel: __wait_on_bit+0x54/0x190
Jun 23 14:08:48 localhost kernel: out_of_line_wait_on_bit+0x84/0xb0
Jun 23 14:08:48 localhost kernel: rpc_wait_for_completion_task+0x28/0x30 [sunrpc]
Jun 23 14:08:48 localhost kernel: _nfs4_do_setlk+0x210/0x410 [nfsv4]
Jun 23 14:08:48 localhost kernel: nfs4_proc_setlk+0xcc/0x170 [nfsv4]
Jun 23 14:08:48 localhost kernel: nfs4_retry_setlk+0x188/0x1d0 [nfsv4]
Jun 23 14:08:48 localhost kernel: nfs4_proc_lock+0xa4/0x1dc [nfsv4]
Jun 23 14:08:48 localhost kernel: do_setlk+0x68/0xf0 [nfs]
Jun 23 14:08:48 localhost kernel: nfs_flock+0x6c/0xb4 [nfs]
Jun 23 14:08:48 localhost kernel: __do_sys_flock+0x108/0x1c0
Jun 23 14:08:48 localhost kernel: __arm64_sys_flock+0x20/0x30
Jun 23 14:08:48 localhost kernel: invoke_syscall.constprop.0+0x7c/0xd0
Jun 23 14:08:48 localhost kernel: el0_svc_common.constprop.0+0x144/0x160
Jun 23 14:08:48 localhost kernel: do_el0_svc+0x2c/0xc0
Jun 23 14:08:48 localhost kernel: el0_svc+0x3c/0x1a0
Jun 23 14:08:48 localhost kernel: el0t_64_sync_handler+0xb4/0x130
Jun 23 14:08:48 localhost kernel: el0t_64_sync+0x174/0x178

Jun 23 14:08:48 localhost kernel: task:192.168.1.106-m state:D stack:0 pid:3225 ppid:2 flags:0x00000208
Jun 23 14:08:48 localhost kernel: Call trace:
Jun 23 14:08:48 localhost kernel: __switch_to+0xc8/0x110
Jun 23 14:08:48 localhost kernel: __schedule+0x1d8/0x524
Jun 23 14:08:48 localhost kernel: schedule+0x60/0xfc
Jun 23 14:08:48 localhost kernel: rpc_wait_bit_killable+0x1c/0x7c [sunrpc]
Jun 23 14:08:48 localhost kernel: __wait_on_bit+0x54/0x190
Jun 23 14:08:48 localhost kernel: out_of_line_wait_on_bit+0x84/0xb0
Jun 23 14:08:48 localhost kernel: rpc_wait_for_completion_task+0x28/0x30 [sunrpc]
Jun 23 14:08:48 localhost kernel: nfs4_run_open_task+0x12c/0x1f0 [nfsv4]
Jun 23 14:08:48 localhost kernel: nfs4_open_recover_helper.part.0+0xa0/0x13c [nfsv4]
Jun 23 14:08:48 localhost kernel: nfs4_open_recover+0x34/0x130 [nfsv4]
Jun 23 14:08:48 localhost kernel: nfs4_do_open_reclaim+0xf4/0x280 [nfsv4]
Jun 23 14:08:48 localhost kernel: nfs4_open_reclaim+0x58/0xe0 [nfsv4]
Jun 23 14:08:48 localhost kernel: __nfs4_reclaim_open_state+0x38/0x158 [nfsv4]
Jun 23 14:08:48 localhost kernel: nfs4_reclaim_open_state+0x114/0x304 [nfsv4]
Jun 23 14:08:48 localhost kernel: nfs4_do_reclaim+0x150/0x25c [nfsv4]
Jun 23 14:08:48 localhost kernel: nfs4_state_manager+0x550/0x884 [nfsv4]
Jun 23 14:08:48 localhost kernel: nfs4_run_state_manager+0xa4/0x1c0 [nfsv4]
Jun 23 14:08:48 localhost kernel: kthread+0xf0/0xf4
Jun 23 14:08:48 localhost kernel: ret_from_fork+0x10/0x20

The problem was introduced by the following commit:

commit f5ea16137a3fa2858620dc9084466491c128535f
"NFSv4: Retry LOCK on OLD_STATEID during delegation return"

The solution to the problem is to revert that commit.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
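For reference, reverting the offending commit in a local kernel git tree would look roughly like the sketch below. The checkout path is hypothetical, and the shipped RHEL 8 fix (kernel-4.18.0-507.el8) goes through the normal kernel build and errata process rather than this manual revert.

```sh
# Hedged sketch only: back out the commit identified in the description from a
# local kernel source tree. ~/src/linux is a hypothetical checkout path.
cd ~/src/linux

# Revert "NFSv4: Retry LOCK on OLD_STATEID during delegation return".
git revert f5ea16137a3fa2858620dc9084466491c128535f

# Rebuild and install the kernel, then re-run the reproducer from comment #2 to
# confirm the client reclaims its lock after a server reboot instead of hanging.
```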