Bug 1615303
| Summary: | When nfs-utils happens to be installed in container image, running systemd in the container shows degraded status | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Jan Pazdziora (Red Hat) <jpazdziora> |
| Component: | nfs-utils | Assignee: | Steve Dickson <steved> |
| Status: | CLOSED WONTFIX | QA Contact: | Yongcheng Yang <yoyang> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 7.5 | CC: | bfields, extras-qa, jlayton, jpazdziora, steved, xzhou, yoyang |
| Target Milestone: | rc | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | 1615101 | Environment: | |
| Last Closed: | 2021-02-15 07:41:29 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1615101 | | |
| Bug Blocks: | | | |
Description
Jan Pazdziora (Red Hat)
2018-08-13 09:34:48 UTC
I see the same issue on RHEL 7.5: docker on RHEL 7.5, a container built FROM registry.access.redhat.com/rhel7, and nfs-utils-1.3.0-0.54.el7.x86_64 installed in the container. systemd in the container reports:

    var-lib-nfs-rpc_pipefs.mount: Mount process exited, code=exited status=32

32 == EPIPE. If that's what was actually returned from the mount system call, that's surprising. (Maybe an strace to confirm this would help?) I'd expect EPIPE on a read or write to a pipefs file, but I wonder what it could mean on mount.

Would it be possible to explain (to someone ignorant of docker) what this container looks like from the kernel's point of view? E.g. which kinds of namespaces is the mount process running in? Is it still running as a process that's root in the init namespace?

Oh, sorry, I see, it was actually EACCES:

    mount: /var/lib/nfs/rpc_pipefs: permission denied

That makes more sense.

After evaluating this issue, there are no plans to address it further or fix it in an upcoming release. Therefore, it is being closed. If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.
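Given the WONTFIX resolution, one common way to keep such a container out of the degraded state is to confirm which unit failed and then mask it, so systemd stops attempting the rpc_pipefs mount it is not privileged to perform. The following is a minimal sketch, not taken from the report: the container name nfs-test is an assumption, and masking the unit is a generic workaround rather than anything recommended in this bug.

```
# Confirm the overall state and the failing unit (run from the docker host).
docker exec nfs-test systemctl is-system-running                   # prints "degraded"
docker exec nfs-test systemctl --failed                            # lists var-lib-nfs-rpc_pipefs.mount
docker exec nfs-test systemctl status var-lib-nfs-rpc_pipefs.mount

# Possible workaround: mask the mount unit shipped by nfs-utils so systemd
# skips the privileged rpc_pipefs mount, then clear the recorded failure.
docker exec nfs-test systemctl mask var-lib-nfs-rpc_pipefs.mount
docker exec nfs-test systemctl reset-failed var-lib-nfs-rpc_pipefs.mount
docker exec nfs-test systemctl is-system-running                   # should now report "running"
```

The same mask can be baked into the image at build time (for example with a RUN systemctl mask var-lib-nfs-rpc_pipefs.mount step), at the cost of rpc_pipefs never being mounted even if the container is later run with the privileges NFS would need.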