Bug 2015057 - Backport "mount-util: fix fd_is_mount_point() when both the parent and directory are network fs"
Summary: Backport "mount-util: fix fd_is_mount_point() when both the parent and directory are network fs"
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: systemd
Version: 8.4
Hardware: All
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: David Tardon
QA Contact: Frantisek Sumsal
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-10-18 10:16 UTC by Renaud Métrich
Modified: 2022-07-29 10:07 UTC
CC List: 4 users

Fixed In Version: systemd-239-53.el8
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-05-10 15:25:48 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Github redhat-plumbers systemd-rhel8 pull 231 0 None open mount-util: fix fd_is_mount_point() when both the parent and director… 2021-11-04 11:45:24 UTC
Github systemd systemd pull 20896 0 None Merged mount-util: fix fd_is_mount_point() when both the parent and director… 2021-10-18 10:47:27 UTC
Red Hat Issue Tracker RHELPLAN-100111 0 None None None 2021-10-18 10:19:50 UTC
Red Hat Knowledge Base (Solution) 6435221 0 None None None 2021-10-23 18:41:17 UTC
Red Hat Product Errata RHBA-2022:2069 0 None None None 2022-05-10 15:26:32 UTC

Description Renaud Métrich 2021-10-18 10:16:52 UTC
Description of problem:

We have a customer who cannot restart any service that uses "ProtectSystem=..." because an NFS mount point hosts a CIFS mount point underneath it.
This causes name_to_handle_at_loop() on the NFS mount point to return EINVAL when checking whether the CIFS mount is a mount point. Because the current RHEL 8 code treats this as a fatal error, the service fails to spawn with the "status=226/NAMESPACE" error.
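For illustration only (the unit name and paths below are made up), any service with a ProtectSystem= setting like the following exercises the affected code path: the option makes systemd build a per-service mount namespace, and on this NFS/CIFS layout that namespace setup is what fails.

# /etc/systemd/system/example-protected.service   (hypothetical example unit)
[Unit]
Description=Example service whose sandboxing triggers the mount-point checks

[Service]
# Any ProtectSystem= value makes systemd assemble a mount namespace for the
# service; while doing so it checks which paths are mount points, and that
# check is what fails here, so the unit exits with status=226/NAMESPACE.
ProtectSystem=full
ExecStart=/usr/bin/sleep infinity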

Please backport the following commit ASAP:

commit 964ccab8286a7e75d7e9107f574f5cb23752bd5d
Author: Franck Bui <fbui>
Date:   Thu Sep 30 14:05:36 2021 +0200

    mount-util: fix fd_is_mount_point() when both the parent and directory are network fs
    
    The second call to name_to_handle_at_loop() didn't check for the specific
    errors that can happen when the parent dir is mounted by nfs and instead of
    falling back like it's done for the child dir, fd_is_mount_point() failed in
    this case.
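
For context, here is a minimal sketch in plain C of the behaviour the commit describes; it is not systemd's actual code and all names in it are made up. The point is that the file-handle errors are tolerated for the parent directory the same way they already are for the child, with a weaker st_dev comparison as the fallback.

#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <stdbool.h>
#include <stdlib.h>
#include <sys/stat.h>

#ifndef MAX_HANDLE_SZ
#define MAX_HANDLE_SZ 128
#endif

/* Errors that mean "file handles are unusable here", not a real failure. */
static bool fhandle_unsupported(int err) {
        return err == EOPNOTSUPP || err == ENOSYS || err == EOVERFLOW || err == EINVAL;
}

/* Returns 1 if 'name' (relative to the directory fd 'dirfd') looks like a
 * mount point, 0 if not, or a negative errno on real errors. */
int is_mount_point_sketch(int dirfd, const char *name) {
        struct file_handle *child, *parent;
        int child_mnt_id, parent_mnt_id, r;

        child = calloc(1, sizeof(*child) + MAX_HANDLE_SZ);
        parent = calloc(1, sizeof(*parent) + MAX_HANDLE_SZ);
        if (!child || !parent) {
                r = -ENOMEM;
                goto finish;
        }
        child->handle_bytes = parent->handle_bytes = MAX_HANDLE_SZ;

        if (name_to_handle_at(dirfd, name, child, &child_mnt_id, 0) < 0) {
                if (!fhandle_unsupported(errno)) {
                        r = -errno;
                        goto finish;
                }
                goto fallback;              /* child dir: already tolerated */
        }

        if (name_to_handle_at(dirfd, "", parent, &parent_mnt_id, AT_EMPTY_PATH) < 0) {
                /* The fix: tolerate the same errors for the parent dir, too,
                 * instead of propagating them to the caller. */
                if (!fhandle_unsupported(errno)) {
                        r = -errno;
                        goto finish;
                }
                goto fallback;
        }

        /* Different mount IDs => 'name' is a mount point. */
        r = child_mnt_id != parent_mnt_id;
        goto finish;

fallback:
        {
                /* Weaker check: a mount point usually sits on a different st_dev. */
                struct stat a, b;
                if (fstatat(dirfd, name, &a, 0) < 0 || fstat(dirfd, &b) < 0)
                        r = -errno;
                else
                        r = a.st_dev != b.st_dev;
        }

finish:
        free(child);
        free(parent);
        return r;
}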


Version-Release number of selected component (if applicable):

systemd-239-45.el8_4.3


How reproducible:

Always on the customer's system; reproducing it requires NFS mounts that return file handles larger than the supported maximum.

Additional info:

The customer confirmed that a test package I built with the above commit resolves the issue.

Comment 1 Frank Sorenson 2021-10-21 22:58:13 UTC
The following should reproduce the issue (or at least create mounts that return the same errors):

# mkdir -p /exports/EOPNOTSUPP_DIR /mnt
# echo "/exports *(rw,no_root_squash)" >/etc/exports
# systemctl restart nfs-server.service

# mount -o vers=4,sec=sys 127.0.0.1:/ /mnt
# mount --bind /var/lib/nfs/rpc_pipefs /mnt/EOPNOTSUPP_DIR


# cat << 'EOFEOFEOF' > /tmp/break_nfs_filehandles.stp
%{
#include <linux/exportfs.h>
#ifndef XDR_QUADLEN
#define XDR_QUADLEN(l)  (((l) + 3) >> 2)
#endif
%}
probe module("nfs").function("nfs_encode_fh").return {
        max_len_addr = @entry($max_len)
        set_kernel_int(max_len_addr, %{ XDR_QUADLEN(MAX_HANDLE_SZ + 16) %})
        $return = %{ FILEID_INVALID %}
}
EOFEOFEOF

# stap -g /tmp/break_nfs_filehandles.stp


nfs mounts (i.e. /tmp) will now return EOVERFLOW with handle_bytes of 16 bytes larger than the maximum supported (so they will currently return 144 bytes).
the rpc_pipefs mounted at /tmp/EOPNOTSUPP_DIR doesn't support file handles, so it will return EOPNOTSUPP (easier than setting up a Samba server & mount).
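
To confirm that the two mounts really produce those errors, a small throwaway C program (illustrative only, not part of the reproducer) can call name_to_handle_at() directly on both paths:

/* check_handles.c - print the error name_to_handle_at() reports for each path.
 * Build: gcc -o check_handles check_handles.c
 * Usage: ./check_handles /mnt /mnt/EOPNOTSUPP_DIR
 * With the stap script above running, /mnt should report EOVERFLOW with a
 * required handle size of 144 bytes, and the rpc_pipefs bind mount should
 * report EOPNOTSUPP. */
#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv) {
        for (int i = 1; i < argc; i++) {
                struct file_handle fh;
                int mount_id;

                /* handle_bytes = 0 asks the kernel to report the size it
                 * would need (returned alongside EOVERFLOW). */
                memset(&fh, 0, sizeof(fh));

                if (name_to_handle_at(AT_FDCWD, argv[i], &fh, &mount_id, 0) < 0)
                        printf("%s: errno=%d (%s), reported handle_bytes=%u\n",
                               argv[i], errno, strerror(errno), fh.handle_bytes);
                else
                        printf("%s: name_to_handle_at() unexpectedly succeeded\n", argv[i]);
        }
        return 0;
}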

Comment 2 Frank Sorenson 2021-10-22 01:03:13 UTC
(In reply to Frank Sorenson from comment #1)
> nfs mounts (i.e. /tmp) will now return EOVERFLOW with handle_bytes of 16

make that '/mnt'

Comment 4 Plumber Bot 2021-11-19 14:40:25 UTC
fix merged to github master branch -> https://github.com/redhat-plumbers/systemd-rhel8/pull/231

Comment 10 errata-xmlrpc 2022-05-10 15:25:48 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (systemd bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:2069

