You're right - that was sloppy on my part. It does not remove the CGroup limit, but it sets the limit so high that it is as if we had removed it - there is no way for the container to get that big. I have recommended that the memory CGroup limit in ceph-ansible be removed entirely; see the discussion in https://github.com/ceph/ceph-ansible/issues/3617
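To illustrate the "set so high it is effectively removed" point above, here is a minimal sketch (the function name and thresholds are my own, not ceph-ansible code): under cgroup v1 an unlimited memory cgroup reports LLONG_MAX rounded down to a page boundary, and any limit the host could never satisfy behaves the same as no limit at all.

```python
# Sketch: deciding whether a cgroup v1 memory limit is effectively "no limit".
# The helper and its threshold are illustrative, not from ceph-ansible itself.

PAGE_SIZE = 4096
# cgroup v1 reports "unlimited" as LLONG_MAX rounded down to a page multiple:
CGROUP_V1_UNLIMITED = (2**63 - 1) // PAGE_SIZE * PAGE_SIZE  # 9223372036854771712

def is_effectively_unlimited(limit_bytes: int, host_ram_bytes: int) -> bool:
    """A limit the container can never reach behaves the same as no limit."""
    return limit_bytes >= host_ram_bytes or limit_bytes >= CGROUP_V1_UNLIMITED

# A 64 TiB limit on a 128 GiB host is indistinguishable from a removed limit:
print(is_effectively_unlimited(64 * 2**40, 128 * 2**30))  # True
# A 4 GiB limit on the same host is a real cap:
print(is_effectively_unlimited(4 * 2**30, 128 * 2**30))   # False
```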
I think Ben's response in C#14 addresses the needinfo request.
Observed the changes from ceph-ansible's perspective; everything looks intact per the requirements. Moving to VERIFIED state.
I am having trouble reading the doc text in the preceding post here, but I got it in the e-mail. It said: "The default CPU quota for containerized Ceph Object Gateway was significantly lower than for bare-metal Ceph Object Gateway. With this update, the default value for the CPU quota (`--cpu-quota`) for Ceph Object Gateways deployed in containers has been increased." This is incorrect: there is no CPU quota at all for bare-metal Ceph RADOS (not Object) Gateway, so there is nothing for the containerized default to be "lower than". You could instead say that "the default CPU CGroup limit for containerized RGW was very low and has been increased in this update to be more reasonable for typical HDD production environments - however, the sysadmin may want to evaluate what limit should be set for the site's configuration and workload." Make sense?
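For readers less familiar with how `--cpu-quota` behaves, here is a small sketch of the CFS bandwidth arithmetic: the quota divided by the scheduling period gives the number of CPU cores the cgroup may consume, and a quota of -1 means unlimited (the bare-metal case described above). The specific numbers below are illustrative, not ceph-ansible's actual defaults.

```python
# Sketch: how a CFS CPU quota maps to usable CPU cores.
# cpu.cfs_quota_us / cpu.cfs_period_us = number of cores the cgroup may use;
# Docker's --cpu-quota sets the quota against a default 100000 us period.

def effective_cpus(cfs_quota_us: int, cfs_period_us: int = 100_000) -> float:
    """Return the CPU-core budget implied by a CFS quota; -1 means unlimited."""
    if cfs_quota_us < 0:
        return float("inf")
    return cfs_quota_us / cfs_period_us

print(effective_cpus(100_000))  # 1.0 -> a low quota caps RGW at one core
print(effective_cpus(800_000))  # 8.0 -> a higher quota allows eight cores
print(effective_cpus(-1))       # inf -> no quota at all (the bare-metal case)
```

This is why a very low containerized default was painful: RGW threads that would spread across all cores on bare metal get throttled to a fraction of the machine inside the container.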
Object Gateway is fine; I don't care which name you use as long as people are used to it. My main concern was that there is no default CPU quota for the bare-metal configuration, and that problem has been corrected. I talked with John Brier about it on IRC. Thx -ben
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2019:0911