Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.
RHEL Engineering is moving the tracking of its product development work on RHEL 6 through RHEL 9 to Red Hat Jira (issues.redhat.com). If you're a Red Hat customer, please continue to file support cases via the Red Hat customer portal. If you're not, please head to the "RHEL project" in Red Hat Jira and file new tickets there.

Individual Bugzilla bugs in the statuses "NEW", "ASSIGNED", and "POST" are being migrated throughout September 2023. Bugs of Red Hat partners with an assigned Engineering Partner Manager (EPM) are migrated in late September as per pre-agreed dates. Bugs against the components "kernel", "kernel-rt", and "kpatch" are only migrated if still in "NEW" or "ASSIGNED".

If you cannot log in to RH Jira, please consult article #7032570. Failing that, please send an e-mail to the RH Jira admins at rh-issues@redhat.com to troubleshoot your issue as a user management inquiry; the e-mail creates a ServiceNow ticket with Red Hat.

Individual Bugzilla bugs that are migrated will be moved to status "CLOSED", resolution "MIGRATED", and set with "MigratedToJIRA" in "Keywords". The link to the successor Jira issue will be found under "Links", have a little "two-footprint" icon next to it, and direct you to the "RHEL project" in Red Hat Jira (issue links are of the form "https://issues.redhat.com/browse/RHEL-XXXX", where "X" is a digit). The same link will also be available in a blue banner at the top of the page informing you that the bug has been migrated.

Bug 2080590

Summary: NFSv3 remote mounts fail with "Mount process finished, but there is no mount." when a daemon-reload happens while mounting the remote mount
Product: Red Hat Enterprise Linux 8
Reporter: Renaud Métrich <rmetrich>
Component: systemd
Assignee: systemd maint <systemd-maint>
Status: CLOSED MIGRATED
QA Contact: Frantisek Sumsal <fsumsal>
Severity: high
Docs Contact:
Priority: high
Version: 8.5
CC: 248877168, chorn, darrin, dbodnarc, dtardon, feyang, fj-lsoft-rh-dump, mknutson, rwright, systemd-maint-list, trinh.dao
Target Milestone: rc
Keywords: MigratedToJIRA, Reopened
Target Release: ---
Flags: pm-rhel: mirror+
Hardware: All
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2023-09-21 11:34:51 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1733835, 1780017, 2024217, 2099869
Attachments:
  Strace of the "mount /mnt/1" command (successful)  (Flags: none)
  Strace of systemd showing /proc/1/mountinfo being read but mount state failing anyway  (Flags: none)

Description Renaud Métrich 2022-04-30 10:22:27 UTC
Description of problem:

When a daemon-reload happens while systemd is mounting NFSv3 mounts, the mounts are performed but systemd units can fail with the following error:
-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------
Apr 30 11:32:33 vm-remotemounts8 systemd[1]: Mounting /mnt/1...
Apr 30 11:32:36 vm-remotemounts8 systemd[1]: mnt-1.mount: Mount process finished, but there is no mount.
Apr 30 11:32:36 vm-remotemounts8 systemd[1]: mnt-1.mount: Failed with result 'protocol'.
Apr 30 11:32:36 vm-remotemounts8 systemd[1]: Failed to mount /mnt/1.
-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------
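
The mismatch can be checked directly on an affected system: the filesystem is mounted from the kernel's point of view while systemd considers the unit failed. A minimal verification sketch (the unit and mount point names are taken from the log excerpt above):
-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------
# The kernel sees the mount...
findmnt /mnt/1
# ...while systemd reports the corresponding unit as failed with result 'protocol'
systemctl show -p ActiveState -p SubState -p Result mnt-1.mount
-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------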

Reading the code and analyzing the corresponding systemd strace, it looks like the mount process exited but the internal mount state was not updated accordingly, even though the mount was effectively visible in /proc/1/mountinfo, so the correct state should have been available:
-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------
1236 static void mount_sigchld_event(Unit *u, pid_t pid, int code, int status) {
 :
1260         (void) mount_process_proc_self_mountinfo(u->manager);
 :
1290         /* Note that due to the io event priority logic, we can be sure the new mountinfo is loaded
1291          * before we process the SIGCHLD for the mount command. */
1292 
1293         switch (m->state) {
1294 
1295         case MOUNT_MOUNTING:
1296                 /* Our mount point has not appeared in mountinfo.  Something went wrong. */
1297  ---> WE DIE HERE
1298                 if (f == MOUNT_SUCCESS) {
1299                         /* Either /bin/mount has an unexpected definition of success,
1300                          * or someone raced us and we lost. */
1301                         log_unit_warning(UNIT(m), "Mount process finished, but there is no mount.");
1302                         f = MOUNT_FAILURE_PROTOCOL;
1303                 }
1304                 mount_enter_dead(m, f);
1305                 break;
1306 
1307         case MOUNT_MOUNTING_DONE:
1308                 mount_enter_mounted(m, f);
1309                 break;
-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------

The strace shows the reload, then /proc/1/mountinfo showing /mnt/1, then the wait on the mount process (PID 1133):
-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------
(LINE NUMBER) (PID) (TIME + SYSCALL)
71183 1     11:32:35.075694 sendmsg(45<UNIX:[20271->11750]>, {msg_name=NULL, msg_namelen=0, msg_iov=[{iov_base="PRIORITY=6\nSYSLOG_FACILITY=3\nCODE_FILE=../src/core/main.c\nCODE_LINE=1677\nCODE_FUNC=invoke_main_loop\nSYSLOG_IDENTIFIER=systemd\n", iov_len=126}, {iov_base="MESSAGE=", iov_len=8}, {iov_base="Reloading.", iov_len=10}, {iov_base="\n", iov_len=1}], msg_iovlen=4, msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL <unfinished ...>
 :
92178 1     11:32:36.889019 openat(AT_FDCWD, "/proc/self/mountinfo", O_RDONLY|O_CLOEXEC <unfinished ...>
 :
92187 1     11:32:36.889841 <... read resumed>"357 97 0:47 / /mnt/4 rw,relatime shared:194 - nfs vm-nfs7-server:/mnt/4 rw,vers=3,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.122.186,mountvers=3,mountport=20048,mountproto=tcp,local_lock=none,addr=192.168.122.186\n366 97 0:47 / /mnt/3 rw,relatime shared:199 - nfs vm-nfs7-server:/mnt/3 rw,vers=3,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.122.186,mountvers=3,mountport=20048,mountproto=tcp,local_lock=none,addr=192.168.122.186\n375 97 0:47 / /mnt/7 rw,relatime shared:204 - nfs vm-nfs7-server:/mnt/7 rw,vers=3,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.122.186,mountvers=3,mountport=20048,mountproto=tcp,local_lock=none,addr=192.168.122.186\n384 97 0:47 / /mnt/1 rw,relatime shared:209 - nfs vm-nfs7-server:/mnt/1 rw,vers=3,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.122.186,mountvers=3,mountport=20048,moun"..., 12534) = 1876 <0.000046>

--> ABOVE, THE mountinfo READ ALREADY CONTAINS THE "/mnt/1" MOUNT; IT ALSO APPEARS IN /run/mount/utab:

 :
92206 1     11:32:36.890567 read(24</run/mount/utab>, "SRC=vm-nfs7-server:/mnt/8 TARGET=/mnt/8 ROOT=/ ATTRS=hard,nfsvers=3,proto=tcp,addr=192.168.122.186 \nSRC=vm-nfs7-server:/mnt/5 TARGET=/mnt/5 ROOT=/ ATTRS=hard,nfsvers=3,proto=tcp,addr=192.168.122.186 \nSRC=vm-nfs7-server:/mnt/4 TARGET=/mnt/4 ROOT=/ ATTRS=hard,nfsvers=3,proto=tcp,addr=192.168.122.186 \nSRC=vm-nfs7-server:/mnt/3 TARGET=/mnt/3 ROOT=/ ATTRS=hard,nfsvers=3,proto=tcp,addr=192.168.122.186 \nSRC=vm-nfs7-server:/mnt/7 TARGET=/mnt/7 ROOT=/ ATTRS=hard,nfsvers=3,proto=tcp,addr=192.168.122.186 \nSRC=vm-nfs7-server:/mnt/1 TARGET=/mnt/1 ROOT=/ ATTRS=hard,nfsvers=3,proto=tcp,addr=192.168.122.186 \nSRC=vm-nfs7-server:/mnt/6 TARGET=/mnt/6 ROOT=/ ATTRS=hard,nfsvers=3,proto=tcp,addr=192.168.122.186 \nSRC=vm-nfs7-server:/mnt/9 TARGET=/mnt/9 ROOT=/ ATTRS=hard,nfsvers=3,proto=tcp,addr=192.168.122.186 \nSRC=vm-nfs7-server:/mnt/2 TARGET=/mnt/2 ROOT=/ ATTRS=hard,nfsvers=3,proto=tcp,addr=192.168.122.186 \n", 4096) = 900 <0.000008>
 :

92367 1     11:32:36.907043 waitid(P_ALL, 0,  <unfinished ...>
92368 1     11:32:36.907104 <... waitid resumed>{si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=1133, si_uid=0, si_status=0, si_utime=0, si_stime=0}, WNOHANG|WEXITED|WNOWAIT, NULL) = 0 <0.000055>
 :

92389 1     11:32:36.909425 sendmsg(45<UNIX:[20271->11750]>, {msg_name=NULL, msg_namelen=0, msg_iov=[{iov_base="PRIORITY=4\nSYSLOG_FACILITY=3\nCODE_FILE=../src/core/unit.c\nCODE_LINE=5552\nCODE_FUNC=unit_log_failure\nSYSLOG_IDENTIFIER=systemd\n", iov_len=126}, {iov_base="MESSAGE_ID=d9b373ed55a64feb8242e02dbe79a49c", iov_len=43}, {iov_base="\n", iov_len=1}, {iov_base="UNIT=mnt-1.mount", iov_len=16}, {iov_base="\n", iov_len=1}, {iov_base="INVOCATION_ID=72dd55ec127f4f96a97d6d5549223188", iov_len=46}, {iov_base="\n", iov_len=1}, {iov_base="MESSAGE=mnt-1.mount: Failed with result 'protocol'.", iov_len=51}, {iov_base="\n", iov_len=1}, {iov_base="UNIT_RESULT=protocol", iov_len=20}, {iov_base="\n", iov_len=1}], msg_iovlen=11, msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL <unfinished ...>
-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------

Version-Release number of selected component (if applicable):

All systemd-239 builds, including systemd-239-51.el8_5.5.x86_64

How reproducible:

Always

Steps to Reproduce:
1. Set up 9 NFSv3 mounts in /etc/fstab (the mount points must exist; see the sketch after the fstab excerpt)

-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------
vm-nfs7-server:/mnt/1	/mnt/1	nfs	rw,hard,nfsvers=3,proto=tcp	0	0
vm-nfs7-server:/mnt/2	/mnt/2	nfs	rw,hard,nfsvers=3,proto=tcp	0	0
vm-nfs7-server:/mnt/3	/mnt/3	nfs	rw,hard,nfsvers=3,proto=tcp	0	0
vm-nfs7-server:/mnt/4	/mnt/4	nfs	rw,hard,nfsvers=3,proto=tcp	0	0
vm-nfs7-server:/mnt/5	/mnt/5	nfs	rw,hard,nfsvers=3,proto=tcp	0	0
vm-nfs7-server:/mnt/6	/mnt/6	nfs	rw,hard,nfsvers=3,proto=tcp	0	0
vm-nfs7-server:/mnt/7	/mnt/7	nfs	rw,hard,nfsvers=3,proto=tcp	0	0
vm-nfs7-server:/mnt/8	/mnt/8	nfs	rw,hard,nfsvers=3,proto=tcp	0	0
vm-nfs7-server:/mnt/9	/mnt/9	nfs	rw,hard,nfsvers=3,proto=tcp	0	0
-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------
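
The nine mount points referenced above must exist before the mount units can start; a minimal way to create them, assuming the layout from the fstab excerpt, is:

-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------
# Create the nine mount points used by the fstab entries above
mkdir -p /mnt/{1..9}
-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------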

2. Create and enable a service that reloads systemd just after rpc-statd.service has finished (/etc/systemd/system/reload-systemd.service)

-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------
[Unit]
After=network-online.target

[Service]
Type=oneshot
ExecStart=/bin/sh -c '/bin/sleep 0.4; /bin/systemctl daemon-reload'

[Install]
WantedBy=multi-user.target
-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------

# systemctl daemon-reload
# systemctl enable reload-systemd

3. Reboot and check the journal to "tune" the delay before executing the reload

The delay must be adjusted so that the reload executes just after rpc-statd.service has finished (one way to adjust it is sketched after the journal excerpt below), for example:

-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------
# journalctl -b -o short-precise | egrep "(/mnt/|Reloading|reload-systemd|status monitor|protocol)"
[...]
Apr 30 12:16:54.080946 vm-remotemounts8 systemd[1]: Starting reload-systemd.service...
Apr 30 12:16:54.096402 vm-remotemounts8 systemd[1]: Mounting /mnt/4...
Apr 30 12:16:54.126040 vm-remotemounts8 systemd[1]: Mounting /mnt/5...
Apr 30 12:16:54.128793 vm-remotemounts8 systemd[1]: Mounting /mnt/9...
Apr 30 12:16:54.157714 vm-remotemounts8 systemd[1]: Mounting /mnt/6...
Apr 30 12:16:54.232126 vm-remotemounts8 systemd[1]: Starting NFS status monitor for NFSv2/3 locking....
Apr 30 12:16:54.346662 vm-remotemounts8 systemd[1]: Started NFS status monitor for NFSv2/3 locking..

---> Proper timing (0.4 seconds above)

Apr 30 12:16:54.576942 vm-remotemounts8 systemd[1]: Reloading.
Apr 30 12:16:54.838234 vm-remotemounts8 systemd[1]: mnt-2.mount: Failed with result 'protocol'.
Apr 30 12:16:54.838448 vm-remotemounts8 systemd[1]: Failed to mount /mnt/2.
[...]
-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------
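
One possible way to adjust the delay, assuming the reload-systemd.service file from step 2, is to edit the sleep value in its ExecStart line and re-read the unit files:

-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------
# Example only: bump the delay from 0.4 s to 0.5 s, then reload the unit files
sed -i 's|/bin/sleep 0.4|/bin/sleep 0.5|' /etc/systemd/system/reload-systemd.service
systemctl daemon-reload
-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------

The exact value is machine-dependent; the goal is for the reload to land while the NFS mounts are still in progress, as in the journal excerpt above.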

Actual results:

Some mount units fail, even though the mounts themselves are performed.

Expected results:

No failure

Additional info:

To reproduce while stracing systemd, the service needs to be adjusted so that it executes just after rpc-statd.service has finished:
-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------
[Unit]
DefaultDependencies=no
Conflicts=umount.target
After=rpc-statd.service
Requires=rpc-statd.service

[Service]
Type=oneshot
ExecStart=/bin/sh -c '/bin/systemctl daemon-reload'

[Install]
WantedBy=multi-user.target
-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------

Stracing systemd early during boot using a generator (/usr/lib/systemd/system-generators/strace-systemd):
-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------
#!/bin/sh
nohup timeout 3m strace -fttTvyy -s 1024 -o /run/systemd.strace.$(date +%s) -p 1 &
-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------

# chmod +x /usr/lib/systemd/system-generators/strace-systemd
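
Once a trace has been captured, the failure can be located in the resulting file by searching for the message that systemd sends to the journal (the path below assumes the -o template used by the generator script above):

-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------
grep -n "Failed with result 'protocol'" /run/systemd.strace.*
-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------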

Comment 1 Renaud Métrich 2022-04-30 10:24:57 UTC
Created attachment 1876158 [details]
Strace of the "mount /mnt/1" command (successful)

Comment 2 Renaud Métrich 2022-04-30 10:25:40 UTC
Created attachment 1876159 [details]
Strace of systemd showing /proc/1/mountinfo being read but mount state failing anyway

Comment 3 Renaud Métrich 2022-04-30 11:14:23 UTC
Note: I don't think this is actually specific to NFSv3; I would expect it can happen with other mounts as well.
It's just that the reproducer uses NFSv3.

Comment 4 David Tardon 2022-05-02 05:27:41 UTC

*** This bug has been marked as a duplicate of bug 1777530 ***

Comment 9 David Tardon 2022-11-15 12:52:04 UTC
*** Bug 1778980 has been marked as a duplicate of this bug. ***

Comment 10 David Tardon 2022-11-15 12:53:49 UTC
There's a reproducer that doesn't require NFS in bug 1778980 comment 1.

Comment 11 David Tardon 2022-11-15 12:54:42 UTC
(In reply to David Tardon from comment #10)
> There's a reproducer that doesn't require NFS in bug 1778980 comment 1.

... bug 1778980 comment 0.

Comment 14 Darrin Smart 2023-08-29 17:45:41 UTC
I don't have access to bug 1778980, but I suspect this could be fixed upstream:

https://github.com/systemd/systemd/issues/23796

There is a simple test script in that bug report, which can be used to reproduce the issue on current el8 systemd versions.

In my case the "Mount process finished, but there is no mount" error was happening with /dev/md0 in fstab, with perhaps a 5-10% probability during system startup.

One thing I did find was that using systemd 253.7-1.4.hs+fb.el8 from the CentOS Hyperscale SIG seemed to fix the bug. Hence the suggestion it is fixed upstream.

Comment 17 RHEL Program Management 2023-09-21 11:24:46 UTC
Issue migration from Bugzilla to Jira is in process at this time. This will be the last message in Jira copied from the Bugzilla bug.

Comment 18 RHEL Program Management 2023-09-21 11:34:51 UTC
This BZ has been automatically migrated to the issues.redhat.com Red Hat Issue Tracker. All future work related to this report will be managed there.

Due to differences in account names between systems, some fields were not replicated. Be sure to add yourself to the Jira issue's "Watchers" field to continue receiving updates and add others to the "Need Info From" field to continue requesting information.

To find the migrated issue, look in the "Links" section for a direct link to the new issue location. The issue key will have an icon of 2 footprints next to it, and begin with "RHEL-" followed by an integer.  You can also find this issue by visiting https://issues.redhat.com/issues/?jql= and searching the "Bugzilla Bug" field for this BZ's number, e.g. a search like:

"Bugzilla Bug" = 1234567

In the event you have trouble locating or viewing this issue, you can file an issue by sending mail to rh-issues. You can also visit https://access.redhat.com/articles/7032570 for general account information.

Comment 19 潘鑫 2024-08-08 06:58:14 UTC
I don't have access to bug 1778980. Has the patch been merged into the systemd component?

Comment 20 潘鑫 2024-08-08 07:12:10 UTC
(In reply to David Tardon from comment #11)
> (In reply to David Tardon from comment #10)
> > There's a reproducer that doesn't require NFS in bug 1778980 comment 1.
> 
> ... bug 1778980 comment 0.


I don't have access to bug 1778980. Has this issue been fixed?

Comment 21 Christian Horn 2024-08-08 07:15:24 UTC
(In reply to 潘鑫 from comment #20)
> I don't have access to Partnerbug 1778980, and has this issue been fixed?

- bz1778980 got closed as a duplicate of this bz
- this bz is continued in https://issues.redhat.com/browse/RHEL-5907
- the state of https://issues.redhat.com/browse/RHEL-5907 is "new", so it seems no
  code has made it into the rhel8 tree yet to fix this

Comment 22 潘鑫 2024-08-08 07:45:48 UTC
(In reply to Christian Horn from comment #21)
> (In reply to 潘鑫 from comment #20)
> > I don't have access to Partnerbug 1778980, and has this issue been fixed?
> 
> - bz1778980 got closed as a duplicate of this bz
> - this bz is continued in
> https://issues.redhat.com/browse/RHEL-5907
> - the state of https://issues.redhat.com/browse/RHEL-5907 is "new", so it seems no
>   code has made it into the rhel8 tree yet to fix this

     In the latest comments on https://issues.redhat.com/browse/RHEL-5907, two merge requests were mentioned. How can I view these two requests?

Comment 23 Christian Horn 2024-08-08 07:52:34 UTC
(In reply to 潘鑫 from comment #22)
>      In the latest comments on https://issues.redhat.com/browse/RHEL-5907,
> two merge requests were mentioned. How can I view these two requests?

    mount: mark an existing "mounting" unit from /proc/self/mountinfo as "just_mounted"
    https://github.com/systemd/systemd/commit/1d086a6e59729635396204fc05234f1d3caa0847

    core/mount: adjust deserialized state based on /proc/self/mountinfo
    https://github.com/systemd/systemd/commit/01400460ae16c6522d11d08dd9a4b0928e7980d9