Bug 1758509 - Cannot run systemd-container with SCL service due to RHSA-2019:2091 fix
Summary: Cannot run systemd-container with SCL service due to RHSA-2019:2091 fix
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: podman
Version: 7.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: 7.8
Assignee: Jindrich Novy
QA Contact: Martin Jenner
URL:
Whiteboard:
Depends On:
Blocks: 1186913 1744505
 
Reported: 2019-10-04 11:52 UTC by Christoffer Reijer
Modified: 2020-10-29 14:05 UTC
CC List: 15 users

Fixed In Version: podman 1.6.4-7.el7_8
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-04-01 00:25:22 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System                             ID              Private  Priority  Status  Summary  Last Updated
Red Hat Knowledge Base (Solution)  4420581         0        None      None    None     2019-10-04 11:58:08 UTC
Red Hat Product Errata             RHSA-2020:1227  0        None      None    None     2020-04-01 00:26:12 UTC

Description Christoffer Reijer 2019-10-04 11:52:31 UTC
Description of problem:

I cannot start services that use SCL inside a systemd container (like docker.io/centos/systemd or registry.redhat.io/ubi7-init) after updating to 7.7.


Version-Release number of selected component (if applicable):

systemd-219-67.el7_7.1.x86_64
podman-1.4.4-4.el7.x86_64


How reproducible:
Every time

Steps to Reproduce:

$ cat <<EOF > Dockerfile
FROM centos/systemd
RUN yum -y update && \
    yum -y install centos-release-scl && \
    yum -y install rh-mongodb34-mongodb-server && \
    systemctl enable rh-mongodb34-mongod.service
EOF
$ sudo podman build -t test .
$ sudo podman run --systemd=true --name test -d --privileged test
$ sudo podman exec -it test systemctl status rh-mongodb34-mongod

Actual results:

> New main PID 193 does not belong to service, and PID file is not owned by root. Refusing.

Expected results:

Service should be running.

Additional info:

Filed a bug with libpod as well: https://github.com/containers/libpod/issues/4191

Comment 2 Renaud Métrich 2019-10-04 11:57:11 UTC
This is not a systemd bug, but an application bug.

See https://access.redhat.com/solutions/4420581

In a nutshell, the service unit is executing a program that does not run in the expected cgroup (system.slice -> <service>.service) but probably in a user session, due to using "sudo" or "su".
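
A quick way to confirm this on an affected system is to compare the unit's expected control group with the cgroup the flagged PID actually sits in (unit name and PID taken from this report; adjust to your case):

$ systemctl status rh-mongodb34-mongod    # the "CGroup:" line shows where systemd expects the service to live
$ cat /proc/193/cgroup                    # shows the cgroup(s) the flagged PID actually belongs to
$ systemd-cgls                            # full cgroup tree, to spot the process sitting in a user session instead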

Comment 3 Christoffer Reijer 2019-10-04 12:26:57 UTC
I am aware of the KB and the reason why it fails, but I can't do much about it since that's the way the mongodb service is packaged.

Where should I file a bug report? I have tried in Foreman (where I initially found the issue), then libpod (since I use podman to start the container as a systemd container) and was then asked to file it here.

Another thing I noted was that, running on a RHEL server, it fails when using the CentOS container given above, but when I changed to ubi7-init and used `yum-config-manager --enable rhel-server-rhscl-7-rpms`, all other commands equal, it works.

It is the same version of the RPM file for mongodb and systemd inside the container. Not sure really what's changed, but it seems that it only happens with the CentOS SCL and not the Red Hat SCL.
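
For reference, the ubi7-init variant was the Dockerfile from the description with only the base image and repository swapped, roughly like this (sketch only; it assumes yum-config-manager is available in the base image, otherwise install yum-utils first):

$ cat <<EOF > Dockerfile
FROM registry.redhat.io/ubi7-init
RUN yum -y update && \
    yum-config-manager --enable rhel-server-rhscl-7-rpms && \
    yum -y install rh-mongodb34-mongodb-server && \
    systemctl enable rh-mongodb34-mongod.service
EOF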

Where should I file a bug report to get this fixed?

Thanks!

Comment 4 Renaud Métrich 2019-10-04 12:30:38 UTC
I guess you should file against centos-scl then.

Comment 5 Ramesh Sahoo 2019-10-08 13:49:45 UTC
The problem looks to be with podman, not with systemd.

<snip>

# podman start mongob
Error: unable to start container "mongob": container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \"rootfs_linux.go:58: mounting \\\"/sys/fs/cgroup/systemd/machine.slice/libpod-d02aaf40cc0142ec5d2a7dba2e29aa756bc919eca9eb20a89bb69bd477e4e707.scope\\\" to rootfs \\\"/var/lib/containers/storage/overlay/9858159dbbcb30fdab3ba11ee8ecaf6ae1d2397dc5e68b8f3859ceb5611c70b2/merged\\\" at \\\"/sys/fs/cgroup/systemd\\\" caused \\\"stat /sys/fs/cgroup/systemd/machine.slice/libpod-d02aaf40cc0142ec5d2a7dba2e29aa756bc919eca9eb20a89bb69bd477e4e707.scope: no such file or directory\\\"\""
: OCI runtime error

<snip>

I checked that with docker-1.13.1-103.git7f2769b.el7.x86_64 I am able to run mongodb in a CentOS container.

# systemctl status rh-mongodb34-mongod
● rh-mongodb34-mongod.service - High-performance, schema-free document-oriented database
   Loaded: loaded (/usr/lib/systemd/system/rh-mongodb34-mongod.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2019-10-08 13:21:27 UTC; 24min ago
  Process: 68 ExecStart=/opt/rh/rh-mongodb34/root/usr/libexec/mongodb-scl-helper enable $RH_MONGODB34_SCLS_ENABLED -- /opt/rh/rh-mongodb34/root/usr/bin/mongod $OPTIONS run (code=exited, status=0/SUCCESS)
 Main PID: 74 (mongod)
   CGroup: /system.slice/docker-c91cdf7b18086164b1e055c80b432a2923bbfed4e4a8ff4aa36efd9dcb745c6f.scope/system.slice/rh-mongodb34-mongod.service
           └─74 /opt/rh/rh-mongodb34/root/usr/bin/mongod -f /etc/opt/rh/rh-mongodb34/mongod.conf run

Oct 08 13:21:27 c91cdf7b1808 systemd[1]: Starting High-performance, schema-free document-oriented database...
Oct 08 13:21:27 c91cdf7b1808 mongodb-scl-helper[68]: about to fork child process, waiting until server is ready for connections.
Oct 08 13:21:27 c91cdf7b1808 mongodb-scl-helper[68]: forked process: 74
Oct 08 13:21:27 c91cdf7b1808 systemd[1]: Started High-performance, schema-free document-oriented database.


$ docker version 
Client:
 Version:         1.13.1
 API version:     1.26
 Package version: docker-1.13.1-103.git7f2769b.el7.x86_64
 Go version:      go1.10.8
 Git commit:      7f2769b/1.13.1
 Built:           Fri Aug  2 10:19:53 2019
 OS/Arch:         linux/amd64

Server:
 Version:         1.13.1
 API version:     1.26 (minimum version 1.12)
 Package version: docker-1.13.1-103.git7f2769b.el7.x86_64
 Go version:      go1.10.8
 Git commit:      7f2769b/1.13.1
 Built:           Fri Aug  2 10:19:53 2019
 OS/Arch:         linux/amd64
 Experimental:    false


So filing a bug with podman will address the issue.

Comment 6 Christoffer Reijer 2019-10-08 14:22:37 UTC
As per the description, I did file a bug with podman, but they redirected me to Bugzilla and speculated that the issue is with systemd. However, I suspect that it is with podman not properly taking https://access.redhat.com/solutions/4420581 into account when using `--systemd`.

Also, I am able to start the container with podman, not getting the OCI error that Ramesh is seeing. My issue is that the service inside the container cannot start, due to systemd not trusting the PID file. My guess is that there is some sort of mismatch between the cgroups on host vs container, but I don't know enough about systemd to know exactly what's going on here.

Perhaps someone could explain to me exactly how systemd determines whether or not to trust the PID file?
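
In case it helps narrow this down, one way to look from inside the container at the two things the error message mentions, the ownership of the PID file and the cgroup the refused PID actually sits in (placeholders to be filled in from the outputs):

$ podman exec -it test systemctl show -p PIDFile -p ControlGroup rh-mongodb34-mongod
$ podman exec -it test ls -l <PIDFile-path-from-above>                  # ownership of the PID file systemd refuses to trust
$ podman exec -it test cat /proc/<PID-from-the-error-message>/cgroup    # cgroup that PID actually sits in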

Comment 7 Ramesh Sahoo 2019-10-08 14:55:14 UTC
I will put in some quality time tomorrow to investigate the podman issue and update with the results.

Comment 8 Kyle Walker 2020-02-15 20:33:11 UTC
I did go ahead and update podman to a 1.6.x revision on the same RHEL 7 host where I reproduced the problem and found that the newer revision was no longer susceptible to this problem.

# podman --version
podman version 1.6.4

# podman run --systemd=true --name test -d --privileged test
1e5976482e56ff8e53943ff4ed8840747c9dc9751d4ead102f8808fa78ddfac5

# podman exec -it test systemctl status rh-mongodb34-mongod
● rh-mongodb34-mongod.service - High-performance, schema-free document-oriented database
   Loaded: loaded (/usr/lib/systemd/system/rh-mongodb34-mongod.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2020-02-15 20:28:07 UTC; 957ms ago
  Process: 73 ExecStart=/opt/rh/rh-mongodb34/root/usr/libexec/mongodb-scl-helper enable $RH_MONGODB34_SCLS_ENABLED -- /opt/rh/rh-mongodb34/root/usr/bin/mongod $OPTIONS run (code=exited, status=0/SUCCESS)
 Main PID: 85 (mongod)
   CGroup: /machine.slice/libpod-1e5976482e56ff8e53943ff4ed8840747c9dc9751d4ead102f8808fa78ddfac5.scope/system.slice/rh-mongodb34-mongod.service
           └─85 /opt/rh/rh-mongodb34/root/usr/bin/mongod -f /etc/opt/rh/rh-mongodb34/mongod.conf run

Feb 15 20:28:04 1e5976482e56 systemd[1]: Starting High-performance, schema-free document-oriented database...
Feb 15 20:28:06 1e5976482e56 mongodb-scl-helper[73]: about to fork child process, waiting until server is ready for connections.
Feb 15 20:28:06 1e5976482e56 mongodb-scl-helper[73]: forked process: 85
Feb 15 20:28:07 1e5976482e56 systemd[1]: Started High-performance, schema-free document-oriented database.

With that being the case, I'm fairly well convinced that the problem is actually in the podman stack. To that end, I am moving this bug to the applicable component to track the ongoing efforts to update to the 1.6.x podman revision in a future release.

Comment 10 Tom Sweeney 2020-02-17 18:39:56 UTC
Setting to POST and assigning to Jindrich on the off chance there are any packaging concerns. I don't believe there are, as this appears to be fixed as of podman 1.6.4-7.el7_8.

Comment 16 errata-xmlrpc 2020-04-01 00:25:22 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:1227
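
For RHEL 7 hosts, a quick way to pull in and verify the fixed build (advisory ID from the link above; a plain "yum update podman" works as well if the advisory option is not available):

# yum update --advisory=RHSA-2020:1227
# rpm -q podman    # should report podman-1.6.4-7.el7_8 or newer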

