Bug 2165644 - [Workload-DFG] RHEL 9 (cgroups v2) - the pid limits ARE enforced as compared to RHEL8 (cgroup v1) / [6.0]: rgws crashed when 'rgw_thread_pool_size' is set to 2048
Summary: [Workload-DFG] RHEL 9 (cgroups v2) - the pid limits ARE enforced as compared ...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Cephadm
Version: 6.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: 6.0
Assignee: Adam King
QA Contact: Vidushi Mishra
Docs Contact: Akash Raj
URL:
Whiteboard:
Duplicates: 2100790 2118411 (view as bug list)
Depends On:
Blocks: 2172314
Reported: 2023-01-30 16:25 UTC by Vidushi Mishra
Modified: 2023-05-29 06:59 UTC (History)
15 users

Fixed In Version: ceph-17.2.5-68.el9cp
Doc Type: Bug Fix
Doc Text:
.The PID limit is removed and workloads in the container no longer crash
Previously, in {os-product} 9 deployments, PID limits were enforced, which restricted the number of processes that could run inside a container. As a result, certain operations, such as Ceph Object Gateway sync, would crash. With this fix, the PID limit is set to `unlimited` on all Ceph containers, preventing workloads in the container from crashing.
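The fix can be sanity-checked by reading `pids.max` for the container, as the verification comments below do. A minimal sketch (the `podman` commands in the comments are illustrative; `check_pids_max` is a hypothetical helper, not part of cephadm): cgroup v1 exposes the limit at `/sys/fs/cgroup/pids/pids.max` inside the container, cgroup v2 at `/sys/fs/cgroup/pids.max`, and both report the literal string `max` when the limit is unlimited.

```shell
# Interpret a pids.max value read from a container's cgroup.
# In practice the value would come from, e.g.:
#   podman exec <container> cat /sys/fs/cgroup/pids/pids.max   # cgroup v1
#   podman exec <container> cat /sys/fs/cgroup/pids.max        # cgroup v2
check_pids_max() {
    if [ "$1" = "max" ]; then
        echo "unlimited"
    else
        echo "limited to $1 processes"
    fi
}

check_pids_max max    # expected after the fix (--pids-limit=-1)
check_pids_max 2048   # a bounded limit, as could occur before the fix
```

Note that `podman inspect` reporting `"PidsLimit": 0` alongside `--pids-limit=-1` in the create arguments is consistent with "no limit": the authoritative value is what the cgroup file reports.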
Clone Of:
Clones: 2172314 (view as bug list)
Environment:
Last Closed: 2023-03-20 18:59:51 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Ceph Project Bug Tracker 58685 0 None None None 2023-02-09 21:29:46 UTC
Github ceph ceph pull 50083 0 None open cephadm: set pids-limit unlimited for all ceph daemons 2023-02-16 04:45:37 UTC
Red Hat Issue Tracker RHCEPH-6047 0 None None None 2023-01-30 16:26:44 UTC
Red Hat Product Errata RHBA-2023:1360 0 None None None 2023-03-20 19:00:19 UTC

Comment 17 Adam King 2023-02-16 12:56:21 UTC
*** Bug 2100790 has been marked as a duplicate of this bug. ***

Comment 24 Steve Baldwin 2023-02-23 23:21:20 UTC
*** Bug 2118411 has been marked as a duplicate of this bug. ***

Comment 33 Preethi 2023-03-07 07:32:53 UTC
Below are the pids-limit checks for the TCMU and iSCSI containers.
TCMU-
[root@magna021 ~]# podman inspect a7778d4fb3f5 | grep -i limit
                    "--pids-limit=-1",
               "PidsLimit": 0,
               "Ulimits": [
                         "Name": "RLIMIT_NOFILE",
                         "Name": "RLIMIT_NPROC",
[root@magna021 ~]# podman exec -it a7778d4fb3f5 cat /sys/fs/cgroup/pids/pids.max
max

ISCSI-
[root@magna021 ~]# podman inspect 8b07ad250b66 | grep -i limit
                    "--pids-limit=-1",
               "PidsLimit": 0,
               "Ulimits": [
                         "Name": "RLIMIT_NOFILE",
                         "Name": "RLIMIT_NPROC",
[root@magna021 ~]# podman exec -it 8b07ad250b66 cat /sys/fs/cgroup/pids/pids.max
max

Comment 34 Preethi 2023-03-07 07:34:01 UTC
(In reply to Preethi from comment #33)


[root@magna021 ~]# podman ps -a
CONTAINER ID  IMAGE                                                                                                                         COMMAND               CREATED      STATUS          PORTS       NAMES
037c587c590f  registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.10                                                              --no-collector.ti...  2 weeks ago  Up 2 weeks ago              ceph-1a371c1a-abab-11ed-bd89-ac1f6b0a1874-node-exporter-magna021
a7778d4fb3f5  registry-proxy.engineering.redhat.com/rh-osbs/rhceph@sha256:7af968b66e38670936f23547b51b651b42c438114d49698f63a6e5fd74c9c5bd                        6 days ago   Up 6 days ago               ceph-1a371c1a-abab-11ed-bd89-ac1f6b0a1874-iscsi-iscsi5-magna021-emebgp-tcmu
8b07ad250b66  registry-proxy.engineering.redhat.com/rh-osbs/rhceph@sha256:7af968b66e38670936f23547b51b651b42c438114d49698f63a6e5fd74c9c5bd                        6 days ago   Up 6 days ago               ceph-1a371c1a-abab-11ed-bd89-ac1f6b0a1874-iscsi-iscsi5-magna021-emebgp

Comment 38 errata-xmlrpc 2023-03-20 18:59:51 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 6.0 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:1360

