Bug 2165644
| Summary: | [Workload-DFG] RHEL 9 (cgroups v2) - the pid limits ARE enforced as compared to RHEL8 (cgroup v1) / [6.0]: rgws crashed when 'rgw_thread_pool_size' is set to 2048 |
|---|---|
| Product: | [Red Hat Storage] Red Hat Ceph Storage |
| Reporter: | Vidushi Mishra <vimishra> |
| Component: | Cephadm |
| Assignee: | Adam King <adking> |
| Status: | CLOSED ERRATA |
| QA Contact: | Vidushi Mishra <vimishra> |
| Severity: | urgent |
| Docs Contact: | Akash Raj <akraj> |
| Priority: | unspecified |
| Version: | 6.0 |
| CC: | adking, akraj, ceph-eng-bugs, cephqe-warriors, hyelloji, msaini, pdhange, pnataraj, racpatel, sbaldwin, skoduri, smanjara, tserlin, vdas, vumrao |
| Target Milestone: | --- |
| Keywords: | Regression, Scale, TestBlocker |
| Target Release: | 6.0 |
| Hardware: | Unspecified |
| OS: | Unspecified |
| Whiteboard: | |
| Fixed In Version: | ceph-17.2.5-68.el9cp |
| Doc Type: | Bug Fix |
| Story Points: | --- |
| Clone Of: | |
| : | 2172314 (view as bug list) |
| Environment: | |
| Last Closed: | 2023-03-20 18:59:51 UTC |
| Type: | Bug |
| Regression: | --- |
| Mount Type: | --- |
| Documentation: | --- |
| CRM: | |
| Verified Versions: | |
| Category: | --- |
| oVirt Team: | --- |
| RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- |
| Target Upstream Version: | |
| Embargoed: | |
| Bug Depends On: | |
| Bug Blocks: | 2172314 |

Doc Text:

.The PID limit is removed and workloads in the container no longer crash
Previously, in {os-product} 9 deployments, PID limits were enforced, which limited the number of processes that could run inside a container. Due to this, certain operations, such as Ceph Object Gateway sync, would crash.
With this fix, the PID limit is set to `unlimited` on all Ceph containers, preventing the workloads in the container from crashing.
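A minimal way to verify the fix on a running cluster, assuming a podman-deployed Ceph daemon container; the container ID below is a placeholder, and the commands mirror the verification output quoted in the comments:

```
# List Ceph daemon containers and pick the one to check (e.g. an RGW daemon).
podman ps --format '{{.ID}} {{.Names}}'

# Inspect the limits cephadm passed to podman; with the fix, the create
# command should include --pids-limit=-1 (no PID limit).
podman inspect <container-id> | grep -i limit

# Confirm the effective limit inside the container; "max" means unlimited.
# (On a pure cgroup v2 host the file may be /sys/fs/cgroup/pids.max instead.)
podman exec -it <container-id> cat /sys/fs/cgroup/pids/pids.max
```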
Comment 17
Adam King
2023-02-16 12:56:21 UTC
*** Bug 2118411 has been marked as a duplicate of this bug. ***

Comment 33
Preethi

Below for tcmu and iscsi containers:
TCMU-

```
[root@magna021 ~]# podman inspect a7778d4fb3f5 | grep -i limit
"--pids-limit=-1",
"PidsLimit": 0,
"Ulimits": [
"Name": "RLIMIT_NOFILE",
"Name": "RLIMIT_NPROC",
[root@magna021 ~]# podman exec -it a7778d4fb3f5 cat /sys/fs/cgroup/pids/pids.max
max
```

ISCSI-

```
[root@magna021 ~]# podman inspect 8b07ad250b66 | grep -i limit
"--pids-limit=-1",
"PidsLimit": 0,
"Ulimits": [
"Name": "RLIMIT_NOFILE",
"Name": "RLIMIT_NPROC",
[root@magna021 ~]# podman exec -it 8b07ad250b66 cat /sys/fs/cgroup/pids/pids.max
max
```
(In reply to Preethi from comment #33)
> Below for tcmu and iscsi containers
> TCMU-
> [root@magna021 ~]# podman inspect a7778d4fb3f5 | grep -i limit
> "--pids-limit=-1",
> "PidsLimit": 0,
> "Ulimits": [
> "Name": "RLIMIT_NOFILE",
> "Name": "RLIMIT_NPROC",
> [root@magna021 ~]# podman exec -it a7778d4fb3f5 cat /sys/fs/cgroup/pids/pids.max
> max
>
> ISCSI-
> [root@magna021 ~]# podman inspect 8b07ad250b66 | grep -i limit
> "--pids-limit=-1",
> "PidsLimit": 0,
> "Ulimits": [
> "Name": "RLIMIT_NOFILE",
> "Name": "RLIMIT_NPROC",
> [root@magna021 ~]# podman exec -it 8b07ad250b66 cat /sys/fs/cgroup/pids/pids.max
> max

```
[root@magna021 ~]# podman ps -a
CONTAINER ID  IMAGE                                                              COMMAND               CREATED      STATUS          PORTS  NAMES
037c587c590f  registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.10  --no-collector.ti...  2 weeks ago  Up 2 weeks ago         ceph-1a371c1a-abab-11ed-bd89-ac1f6b0a1874-node-exporter-magna021
a7778d4fb3f5  registry-proxy.engineering.redhat.com/rh-osbs/rhceph@sha256:7af968b66e38670936f23547b51b651b42c438114d49698f63a6e5fd74c9c5bd  6 days ago  Up 6 days ago  ceph-1a371c1a-abab-11ed-bd89-ac1f6b0a1874-iscsi-iscsi5-magna021-emebgp-tcmu
8b07ad250b66  registry-proxy.engineering.redhat.com/rh-osbs/rhceph@sha256:7af968b66e38670936f23547b51b651b42c438114d49698f63a6e5fd74c9c5bd  6 days ago  Up 6 days ago  ceph-1a371c1a-abab-11ed-bd89-ac1f6b0a1874-iscsi-iscsi5-magna021-emebgp
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 6.0 Bug Fix update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:1360