Bug 2080590

Summary: NFSv3 remote mounts fail with "Mount process finished, but there is no mount." when a daemon-reload happens while mounting the remote mount
Product: Red Hat Enterprise Linux 8
Reporter: Renaud Métrich <rmetrich>
Component: systemd
Assignee: systemd maint <systemd-maint>
Status: NEW
QA Contact: Frantisek Sumsal <fsumsal>
Severity: high
Docs Contact:
Priority: high
Version: 8.5
CC: dbodnarc, dtardon, fj-lsoft-rh-dump, mknutson, rwright, systemd-maint-list, trinh.dao
Target Milestone: rc
Keywords: Reopened
Target Release: ---
Hardware: All
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2022-05-02 05:27:41 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 2099869, 1733835, 1780017, 2024217
Attachments:
Strace of the "mount /mnt/1" command (successful)
Strace of systemd showing /proc/1/mountinfo being read but mount state failing anyway

Description Renaud Métrich 2022-04-30 10:22:27 UTC
Description of problem:

When a daemon-reload happens while systemd is mounting NFSv3 mounts, the mounts themselves succeed but the corresponding mount units can fail with the following error:
-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------
Apr 30 11:32:33 vm-remotemounts8 systemd[1]: Mounting /mnt/1...
Apr 30 11:32:36 vm-remotemounts8 systemd[1]: mnt-1.mount: Mount process finished, but there is no mount.
Apr 30 11:32:36 vm-remotemounts8 systemd[1]: mnt-1.mount: Failed with result 'protocol'.
Apr 30 11:32:36 vm-remotemounts8 systemd[1]: Failed to mount /mnt/1.
-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------

Reading the code and analyzing the corresponding systemd strace, it looks like the mount process exited but the unit's internal mount state was wrong, even though the mount was already visible in /proc/1/mountinfo, so the correct state should have been picked up:
-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------
1236 static void mount_sigchld_event(Unit *u, pid_t pid, int code, int status) {
 :
1260         (void) mount_process_proc_self_mountinfo(u->manager);
 :
1290         /* Note that due to the io event priority logic, we can be sure the new mountinfo is loaded
1291          * before we process the SIGCHLD for the mount command. */
1292 
1293         switch (m->state) {
1294 
1295         case MOUNT_MOUNTING:
1296                 /* Our mount point has not appeared in mountinfo.  Something went wrong. */
1297  ---> WE DIE HERE
1298                 if (f == MOUNT_SUCCESS) {
1299                         /* Either /bin/mount has an unexpected definition of success,
1300                          * or someone raced us and we lost. */
1301                         log_unit_warning(UNIT(m), "Mount process finished, but there is no mount.");
1302                         f = MOUNT_FAILURE_PROTOCOL;
1303                 }
1304                 mount_enter_dead(m, f);
1305                 break;
1306 
1307         case MOUNT_MOUNTING_DONE:
1308                 mount_enter_mounted(m, f);
1309                 break;
-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------
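
For what it's worth, the mismatch can be observed directly on an affected system: the unit reports failure while the kernel still shows the mount. A minimal check, as a sketch, assuming /mnt/1 is one of the affected mount points:
-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------
# Sketch, assuming /mnt/1 is one of the affected mount points.
# The unit reports the 'protocol' failure...
systemctl status mnt-1.mount --no-pager
# ...while the mount is present in PID 1's mount namespace:
grep ' /mnt/1 ' /proc/1/mountinfo
findmnt /mnt/1
-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------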

The strace shows the reload, then /proc/1/mountinfo showing /mnt/1, then the wait on the mount process (PID 1133):
-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------
(LINE NUMBER) (PID) (TIME + SYSCALL)
71183 1     11:32:35.075694 sendmsg(45<UNIX:[20271->11750]>, {msg_name=NULL, msg_namelen=0, msg_iov=[{iov_base="PRIORITY=6\nSYSLOG_FACILITY=3\nCODE_FILE=../src/core/main.c\nCODE_LINE=1677\nCODE_FUNC=invoke_main_loop\nSYSLOG_IDENTIFIER=systemd\n", iov_len=126}, {iov_base="MESSAGE=", iov_len=8}, {iov_base="Reloading.", iov_len=10}, {iov_base="\n", iov_len=1}], msg_iovlen=4, msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL <unfinished ...>
 :
92178 1     11:32:36.889019 openat(AT_FDCWD, "/proc/self/mountinfo", O_RDONLY|O_CLOEXEC <unfinished ...>
 :
92187 1     11:32:36.889841 <... read resumed>"357 97 0:47 / /mnt/4 rw,relatime shared:194 - nfs vm-nfs7-server:/mnt/4 rw,vers=3,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.122.186,mountvers=3,mountport=20048,mountproto=tcp,local_lock=none,addr=192.168.122.186\n366 97 0:47 / /mnt/3 rw,relatime shared:199 - nfs vm-nfs7-server:/mnt/3 rw,vers=3,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.122.186,mountvers=3,mountport=20048,mountproto=tcp,local_lock=none,addr=192.168.122.186\n375 97 0:47 / /mnt/7 rw,relatime shared:204 - nfs vm-nfs7-server:/mnt/7 rw,vers=3,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.122.186,mountvers=3,mountport=20048,mountproto=tcp,local_lock=none,addr=192.168.122.186\n384 97 0:47 / /mnt/1 rw,relatime shared:209 - nfs vm-nfs7-server:/mnt/1 rw,vers=3,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.122.186,mountvers=3,mountport=20048,moun"..., 12534) = 1876 <0.000046>

--> The "/mnt/1" mount is visible in /proc/self/mountinfo above, and it also shows up in /run/mount/utab:

 :
92206 1     11:32:36.890567 read(24</run/mount/utab>, "SRC=vm-nfs7-server:/mnt/8 TARGET=/mnt/8 ROOT=/ ATTRS=hard,nfsvers=3,proto=tcp,addr=192.168.122.186 \nSRC=vm-nfs7-server:/mnt/5 TARGET=/mnt/5 ROOT=/ ATTRS=hard,nfsvers=3,proto=tcp,addr=192.168.122.186 \nSRC=vm-nfs7-server:/mnt/4 TARGET=/mnt/4 ROOT=/ ATTRS=hard,nfsvers=3,proto=tcp,addr=192.168.122.186 \nSRC=vm-nfs7-server:/mnt/3 TARGET=/mnt/3 ROOT=/ ATTRS=hard,nfsvers=3,proto=tcp,addr=192.168.122.186 \nSRC=vm-nfs7-server:/mnt/7 TARGET=/mnt/7 ROOT=/ ATTRS=hard,nfsvers=3,proto=tcp,addr=192.168.122.186 \nSRC=vm-nfs7-server:/mnt/1 TARGET=/mnt/1 ROOT=/ ATTRS=hard,nfsvers=3,proto=tcp,addr=192.168.122.186 \nSRC=vm-nfs7-server:/mnt/6 TARGET=/mnt/6 ROOT=/ ATTRS=hard,nfsvers=3,proto=tcp,addr=192.168.122.186 \nSRC=vm-nfs7-server:/mnt/9 TARGET=/mnt/9 ROOT=/ ATTRS=hard,nfsvers=3,proto=tcp,addr=192.168.122.186 \nSRC=vm-nfs7-server:/mnt/2 TARGET=/mnt/2 ROOT=/ ATTRS=hard,nfsvers=3,proto=tcp,addr=192.168.122.186 \n", 4096) = 900 <0.000008>
 :

92367 1     11:32:36.907043 waitid(P_ALL, 0,  <unfinished ...>
92368 1     11:32:36.907104 <... waitid resumed>{si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=1133, si_uid=0, si_status=0, si_utime=0, si_stime=0}, WNOHANG|WEXITED|WNOWAIT, NULL) = 0 <0.000055>
 :

92389 1     11:32:36.909425 sendmsg(45<UNIX:[20271->11750]>, {msg_name=NULL, msg_namelen=0, msg_iov=[{iov_base="PRIORITY=4\nSYSLOG_FACILITY=3\nCODE_FILE=../src/core/unit.c\nCODE_LINE=5552\nCODE_FUNC=unit_log_failure\nSYSLOG_IDENTIFIER=systemd\n", iov_len=126}, {iov_base="MESSAGE_ID=d9b373ed55a64feb8242e02dbe79a49c", iov_len=43}, {iov_base="\n", iov_len=1}, {iov_base="UNIT=mnt-1.mount", iov_len=16}, {iov_base="\n", iov_len=1}, {iov_base="INVOCATION_ID=72dd55ec127f4f96a97d6d5549223188", iov_len=46}, {iov_base="\n", iov_len=1}, {iov_base="MESSAGE=mnt-1.mount: Failed with result 'protocol'.", iov_len=51}, {iov_base="\n", iov_len=1}, {iov_base="UNIT_RESULT=protocol", iov_len=20}, {iov_base="\n", iov_len=1}], msg_iovlen=11, msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL <unfinished ...>
-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------

Version-Release number of selected component (if applicable):

All systemd-239 builds, including systemd-239-51.el8_5.5.x86_64

How reproducible:

Always

Steps to Reproduce:
1. Setup 9 NFSv3 mounts in /etc/fstab

-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------
vm-nfs7-server:/mnt/1	/mnt/1	nfs	rw,hard,nfsvers=3,proto=tcp	0	0
vm-nfs7-server:/mnt/2	/mnt/2	nfs	rw,hard,nfsvers=3,proto=tcp	0	0
vm-nfs7-server:/mnt/3	/mnt/3	nfs	rw,hard,nfsvers=3,proto=tcp	0	0
vm-nfs7-server:/mnt/4	/mnt/4	nfs	rw,hard,nfsvers=3,proto=tcp	0	0
vm-nfs7-server:/mnt/5	/mnt/5	nfs	rw,hard,nfsvers=3,proto=tcp	0	0
vm-nfs7-server:/mnt/6	/mnt/6	nfs	rw,hard,nfsvers=3,proto=tcp	0	0
vm-nfs7-server:/mnt/7	/mnt/7	nfs	rw,hard,nfsvers=3,proto=tcp	0	0
vm-nfs7-server:/mnt/8	/mnt/8	nfs	rw,hard,nfsvers=3,proto=tcp	0	0
vm-nfs7-server:/mnt/9	/mnt/9	nfs	rw,hard,nfsvers=3,proto=tcp	0	0
-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------
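
For convenience, the mount points and fstab entries above can be generated with a small loop. This is only a sketch for the client side; vm-nfs7-server is the reproducer's NFS server and the matching exports are assumed to exist already:
-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------
#!/bin/sh
# Sketch: create the nine mount points and append the NFSv3 fstab entries above.
# "vm-nfs7-server" is the NFS server used in this reproducer; adjust as needed.
# The server-side exports /mnt/1 .. /mnt/9 are assumed to exist already.
for i in 1 2 3 4 5 6 7 8 9; do
    mkdir -p "/mnt/$i"
    printf 'vm-nfs7-server:/mnt/%s\t/mnt/%s\tnfs\trw,hard,nfsvers=3,proto=tcp\t0\t0\n' "$i" "$i" >> /etc/fstab
done
-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------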

2. Create and enable a service that reloads systemd just after rpc-statd.service has finished (/etc/systemd/system/reload-systemd.service)

-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------
[Unit]
After=network-online.target

[Service]
Type=oneshot
ExecStart=/bin/sh -c '/bin/sleep 0.4; /bin/systemctl daemon-reload'

[Install]
WantedBy=multi-user.target
-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------

# systemctl daemon-reload
# systemctl enable reload-systemd

3. Reboot and check the journal to "tune" the delay before executing the reload

The delay must be adjusted so that the reload executes just after rpc-statd.service has finished, e.g.:

-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------
# journalctl -b -o short-precise | egrep "(/mnt/|Reloading|reload-systemd|status monitor|protocol)"
[...]
Apr 30 12:16:54.080946 vm-remotemounts8 systemd[1]: Starting reload-systemd.service...
Apr 30 12:16:54.096402 vm-remotemounts8 systemd[1]: Mounting /mnt/4...
Apr 30 12:16:54.126040 vm-remotemounts8 systemd[1]: Mounting /mnt/5...
Apr 30 12:16:54.128793 vm-remotemounts8 systemd[1]: Mounting /mnt/9...
Apr 30 12:16:54.157714 vm-remotemounts8 systemd[1]: Mounting /mnt/6...
Apr 30 12:16:54.232126 vm-remotemounts8 systemd[1]: Starting NFS status monitor for NFSv2/3 locking....
Apr 30 12:16:54.346662 vm-remotemounts8 systemd[1]: Started NFS status monitor for NFSv2/3 locking..

---> Proper timing (0.4 seconds above)

Apr 30 12:16:54.576942 vm-remotemounts8 systemd[1]: Reloading.
Apr 30 12:16:54.838234 vm-remotemounts8 systemd[1]: mnt-2.mount: Failed with result 'protocol'.
Apr 30 12:16:54.838448 vm-remotemounts8 systemd[1]: Failed to mount /mnt/2.
[...]
-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------

Actual results:

Some mount units fail with result 'protocol', even though the mounts themselves are performed.
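
A quick way to see this discrepancy after boot (sketch; plain systemctl/findmnt, nothing specific to this reproducer):
-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------
# Sketch: the mount units show up as failed...
systemctl --failed --type=mount
# ...while the NFS mounts are actually in place:
findmnt -t nfs
-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------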

Expected results:

No failure

Additional info:

In order to reproduce while stracing systemd, the service needs to be adjusted so that it executes right after rpc-statd.service has finished:
-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------
[Unit]
DefaultDependencies=no
Conflicts=umount.target
After=rpc-statd.service
Requires=rpc-statd.service

[Service]
Type=oneshot
ExecStart=/bin/sh -c '/bin/systemctl daemon-reload'

[Install]
WantedBy=multi-user.target
-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------

Stracing systemd early during boot using a generator (/usr/lib/systemd/system-generators/strace-systemd):
-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------
#!/bin/sh
nohup timeout 3m strace -fttTvyy -s 1024 -o /run/systemd.strace.$(date +%s) -p 1 &
-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------

# chmod +x /usr/lib/systemd/system-generators/strace-systemd
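
Once a capture has been collected, the interesting window can be located roughly like this (sketch; the exact file name /run/systemd.strace.<timestamp> depends on when the generator ran):
-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------
# Sketch: find the reload, the mountinfo re-reads and the failure in the capture.
# The file name (/run/systemd.strace.<timestamp>) depends on the boot time.
grep -n -e 'Reloading' -e '/proc/self/mountinfo' -e 'there is no mount' -e "result 'protocol'" /run/systemd.strace.*
-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------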

Comment 1 Renaud Métrich 2022-04-30 10:24:57 UTC
Created attachment 1876158 [details]
Strace of the "mount /mnt/1" command (successful)

Comment 2 Renaud Métrich 2022-04-30 10:25:40 UTC
Created attachment 1876159 [details]
Strace of systemd showing /proc/1/mountinfo being read but mount state failing anyway

Comment 3 Renaud Métrich 2022-04-30 11:14:23 UTC
Note: I don't think this is actually specific to NFSv3; I would expect it to happen with other mount types as well. It's just that the reproducer uses NFSv3.

Comment 4 David Tardon 2022-05-02 05:27:41 UTC

*** This bug has been marked as a duplicate of bug 1777530 ***

Comment 9 David Tardon 2022-11-15 12:52:04 UTC
*** Bug 1778980 has been marked as a duplicate of this bug. ***

Comment 10 David Tardon 2022-11-15 12:53:49 UTC
There's a reproducer that doesn't require NFS in bug 1778980 comment 1.

Comment 11 David Tardon 2022-11-15 12:54:42 UTC
(In reply to David Tardon from comment #10)
> There's a reproducer that doesn't require NFS in bug 1778980 comment 1.

... bug 1778980 comment 0.