Bug 1615303

Summary: When nfs-utils happens to be installed in a container image, running systemd in the container shows degraded status

Product: Red Hat Enterprise Linux 7
Reporter: Jan Pazdziora (Red Hat) <jpazdziora>
Component: nfs-utils
Assignee: Steve Dickson <steved>
Status: CLOSED WONTFIX
QA Contact: Yongcheng Yang <yoyang>
Severity: unspecified
Priority: unspecified
Version: 7.5
CC: bfields, extras-qa, jlayton, jpazdziora, steved, xzhou, yoyang
Target Milestone: rc
Hardware: Unspecified
OS: Unspecified
Clone Of: 1615101
Bug Depends On: 1615101
Type: Bug
Last Closed: 2021-02-15 07:41:29 UTC

Description Jan Pazdziora (Red Hat) 2018-08-13 09:34:48 UTC
+++ This bug was initially created as a clone of Bug #1615101 +++

Description of problem:

When nfs-utils happens to be installed in a container image, running systemd in the container shows degraded status due to the failed var-lib-nfs-rpc_pipefs.mount unit.

Version-Release number of selected component (if applicable):

nfs-utils-2.3.2-1.rc3.fc28.x86_64

How reproducible:

Deterministic.

Steps to Reproduce:
1. Have Dockerfile

FROM registry.fedoraproject.org/fedora:28
RUN dnf install -y nfs-utils && dnf clean all
ENV container docker
ENTRYPOINT [ "/usr/sbin/init" ]

2. Build image: docker build -t systemd .
3. Run the systemd container: docker run --name systemd --rm -d systemd
4. Check things in the container: docker exec systemd systemctl status | grep State
5. Check var-lib-nfs-rpc_pipefs.mount specifically: docker exec systemd systemctl status var-lib-nfs-rpc_pipefs.mount
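6. Optionally (a diagnostic sketch, not part of the original report), list the failed unit behind the degraded state and pull its journal directly:

docker exec systemd systemctl --failed
docker exec systemd journalctl -u var-lib-nfs-rpc_pipefs.mount --no-pager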

Actual results:

    State: degraded

● var-lib-nfs-rpc_pipefs.mount - RPC Pipe File System
   Loaded: loaded (/usr/lib/systemd/system/var-lib-nfs-rpc_pipefs.mount; static; vendor preset: disabled)
   Active: failed (Result: exit-code) since Sun 2018-08-12 10:31:10 UTC; 56s ago
    Where: /var/lib/nfs/rpc_pipefs
     What: sunrpc
  Process: 29 ExecMount=/usr/bin/mount sunrpc /var/lib/nfs/rpc_pipefs -t rpc_pipefs (code=exited, status=32)

Aug 12 10:31:10 0a43512bc7f9 mount[29]: mount: /var/lib/nfs/rpc_pipefs: permission denied.
Aug 12 10:31:10 0a43512bc7f9 systemd[1]: Mounting RPC Pipe File System...
Aug 12 10:31:10 0a43512bc7f9 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Mount process exited, code=exited status=32
Aug 12 10:31:10 0a43512bc7f9 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Failed with result 'exit-code'.
Aug 12 10:31:10 0a43512bc7f9 systemd[1]: Failed to mount RPC Pipe File System.

Expected results:

State: running

The var-lib-nfs-rpc_pipefs.mount unit should either not be loaded or not fail.

Additional info:

Basically the same happy output that I get when I comment out the "RUN dnf ..." line from the Dockerfile.
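
As an aside (an untested workaround sketch, not a fix confirmed anywhere in this bug): if the goal is only to keep the image usable with systemd while nfs-utils is installed, the static mount unit can be masked at build time so systemd never attempts it:

FROM registry.fedoraproject.org/fedora:28
RUN dnf install -y nfs-utils && dnf clean all
# Workaround sketch: masking links the unit to /dev/null, so systemd skips it.
RUN systemctl mask var-lib-nfs-rpc_pipefs.mount
ENV container docker
ENTRYPOINT [ "/usr/sbin/init" ]

This only hides the symptom in the image; it does not change anything in the nfs-utils packaging itself.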

Comment 2 Jan Pazdziora (Red Hat) 2018-08-13 09:36:16 UTC
I see the same issue on RHEL 7.5 -- docker on RHEL 7.5 and

FROM registry.access.redhat.com/rhel7

and nfs-utils-1.3.0-0.54.el7.x86_64 in the container.

Comment 4 J. Bruce Fields 2018-08-13 16:36:26 UTC
var-lib-nfs-rpc_pipefs.mount: Mount process exited, code=exited status=32

32 == EPIPE.

If that's what was actually returned from the mount system call, that's surprising.  (Maybe an strace to confirm this would help?)  I'd expect EPIPE on a read or write to a pipefs file but I wonder what it could mean on mount.

Would it be possible to explain (to someone ignorant of docker) what this container looks like from the kernel's point of view?  E.g. which kinds of namespaces are the mount process running in?  Is it still running as a process that's root in the init namespace?
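
For reference, a few commands that could answer this from outside and inside the running container (a diagnostic sketch; strace is not in the image by default and would need to be installed first):

# Namespace inodes of the container's init vs. the host's init;
# matching inode numbers mean the namespace is shared.
docker exec systemd ls -l /proc/1/ns/
ls -l /proc/1/ns/

# Capability sets of the container's init (PID 1 inside the container).
docker exec systemd grep Cap /proc/1/status

# Re-run the failing mount under strace to capture the raw errno.
docker exec systemd strace -f -e trace=mount \
    mount -t rpc_pipefs sunrpc /var/lib/nfs/rpc_pipefs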

Comment 5 J. Bruce Fields 2018-08-13 16:45:29 UTC
Oh, sorry, I see, it was actually EACCES: mount: /var/lib/nfs/rpc_pipefs: permission denied.  That makes more sense.

Comment 8 RHEL Program Management 2021-02-15 07:41:29 UTC
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release.  Therefore, it is being closed.  If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.