Bug 1856533 - Unsupported podman container configuration via systemd
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Cephadm
Version: 5.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 5.0
Assignee: Juan Miguel Olmo
QA Contact: Sunil Kumar Nagaraju
Docs Contact: Karen Norteman
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-07-13 21:37 UTC by Dimitri Savineau
Modified: 2021-08-30 08:26 UTC
6 users

Fixed In Version: ceph-16.0.0-6817.el8cp
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-08-30 08:26:04 UTC
Embargoed:




Links
System                    ID              Last Updated
Ceph Project Bug Tracker  46654           2020-07-21 12:16:29 UTC
Red Hat Issue Tracker     RHCEPH-1037     2021-08-27 04:50:42 UTC
Red Hat Product Errata    RHBA-2021:3294  2021-08-30 08:26:18 UTC

Description Dimitri Savineau 2020-07-13 21:37:32 UTC
Description of problem:
As per https://bugzilla.redhat.com/show_bug.cgi?id=1834974#c4, running podman containers via systemd without the PIDFile and Type=forking attributes isn't a supported configuration.
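
For reference, a minimal sketch of the pattern the podman team recommends, roughly what podman generate systemd --new emits; the container name, pidfile path, and image below are placeholders, not the actual cephadm values:

[Service]
Type=forking
# systemd tracks conmon (the container monitor) through this pidfile
# instead of the short-lived podman client process
PIDFile=%t/ceph-example.pid
ExecStartPre=-/usr/bin/podman rm -f ceph-example
ExecStart=/usr/bin/podman run --conmon-pidfile %t/ceph-example.pid -d --name ceph-example <image>
ExecStop=/usr/bin/podman stop -t 10 ceph-example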

Version-Release number of selected component (if applicable):

# ceph --version
ceph version 15.2.4-9.el8cp (fd4d62d568194c0dbb787e1845e65ed5c1de1b1f) octopus (stable)
# rpm -qa cephadm
cephadm-15.2.4-9.el8cp.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Deploy ceph with cephadm
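
For example, a minimal single-host deployment, with the monitor IP as a placeholder:

# cephadm bootstrap --mon-ip <mon-ip>
# systemctl list-units 'ceph-*'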

Actual results:

# systemctl show ceph-b2692c62-c535-11ea-a48b-fa163e07380d.service|egrep '(Type|PIDFile)'
Type=simple

# systemctl cat ceph-b2692c62-c535-11ea-a48b-fa163e07380d.service
# /etc/systemd/system/ceph-b2692c62-c535-11ea-a48b-fa163e07380d@.service
# generated by cephadm
[Unit]
Description=Ceph %i for b2692c62-c535-11ea-a48b-fa163e07380d

# According to:
#   http://www.freedesktop.org/wiki/Software/systemd/NetworkTarget
# these can be removed once ceph-mon will dynamically change network
# configuration.
After=network-online.target local-fs.target time-sync.target
Wants=network-online.target local-fs.target time-sync.target

PartOf=ceph-b2692c62-c535-11ea-a48b-fa163e07380d.target
Before=ceph-b2692c62-c535-11ea-a48b-fa163e07380d.target

[Service]
LimitNOFILE=1048576
LimitNPROC=1048576
EnvironmentFile=-/etc/environment
ExecStartPre=-/bin/podman rm ceph-b2692c62-c535-11ea-a48b-fa163e07380d-%i
ExecStart=/bin/bash /var/lib/ceph/b2692c62-c535-11ea-a48b-fa163e07380d/%i/unit.run
ExecStop=-/bin/podman stop ceph-b2692c62-c535-11ea-a48b-fa163e07380d-%i
ExecStopPost=-/bin/bash /var/lib/ceph/b2692c62-c535-11ea-a48b-fa163e07380d/%i/unit.poststop
KillMode=none
Restart=on-failure
RestartSec=10s
TimeoutStartSec=120
TimeoutStopSec=120
StartLimitInterval=30min
StartLimitBurst=5

[Install]
WantedBy=ceph-b2692c62-c535-11ea-a48b-fa163e07380d.target


Expected results:

# systemctl show ceph-b2692c62-c535-11ea-a48b-fa163e07380d.service|egrep '(Type|PIDFile)'
PIDFile=/path/to/container/pidfile.pid
Type=forking
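
With Type=forking and a PIDFile pointing at conmon, the MainPID tracked by systemd should match the container's conmon process. A rough way to cross-check this, with the daemon instance name (mon.host1) used as a placeholder:

# systemctl show -p MainPID ceph-b2692c62-c535-11ea-a48b-fa163e07380d@mon.host1.service
# podman inspect --format '{{.State.ConmonPid}}' ceph-b2692c62-c535-11ea-a48b-fa163e07380d-mon.host1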

Comment 1 RHEL Program Management 2020-07-13 21:37:39 UTC
Please specify the severity of this bug. Severity is defined here:
https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.

Comment 2 Adam King 2020-07-28 13:59:37 UTC
Does this cause any issues beyond the overlays taking up extra disk space, as they did in the ansible bug (https://bugzilla.redhat.com/show_bug.cgi?id=1834974)? I've tried testing whether this happens in cephadm using a combination of repeated bootstrap and rm-cluster commands, and the overlays seem to be properly deleted when rm-cluster is run. Is this not the case for your setup? If you're also not seeing the overlay buildup, is there some other issue this causes that we should be aware of?
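
(For anyone repeating this check, one rough way to look for overlay buildup across bootstrap/rm-cluster cycles; the fsid comes from the bootstrap output:)

# podman system df
# cephadm rm-cluster --fsid <fsid> --force
# podman system df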

Comment 3 Dimitri Savineau 2020-07-28 15:01:58 UTC
The overlay growth issue may not be a problem for cephadm, but the systemd configuration is still not what the podman team recommends/supports.

Comment 10 errata-xmlrpc 2021-08-30 08:26:04 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3294

