Bug 1680171 - containerized radosgw requires higher --cpu-quota as default
Summary: containerized radosgw requires higher --cpu-quota as default
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Ansible
Version: 3.2
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: z2
Target Release: 3.2
Assignee: Dimitri Savineau
QA Contact: Vasishta
Docs Contact: Bara Ancincova
URL:
Whiteboard:
Depends On:
Blocks: 1629656
 
Reported: 2019-02-22 21:56 UTC by John Harrigan
Modified: 2019-12-03 05:21 UTC
CC List: 22 users

Fixed In Version: RHEL: ceph-ansible-3.2.12-1.el7cp Ubuntu: ceph-ansible_3.2.12-2redhat1
Doc Type: Bug Fix
Doc Text:
.Increased CPU CGroup limit for containerized Ceph Object Gateway
The default CPU CGroup limit for containerized Ceph Object Gateway (RGW) was very low and has been increased with this update to be more reasonable for typical Hard Disk Drive (HDD) production environments. However, consider evaluating what limit to set for the site's configuration and workload. To customize the limit, adjust the `ceph_rgw_docker_cpu_limit` parameter in the Ansible `group_vars/rgws.yml` file (see the sketch below the fields).
Clone Of:
Environment:
Last Closed: 2019-04-30 15:57:06 UTC
Embargoed:
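
A minimal sketch of the `group_vars/rgws.yml` override named in the doc text above; the value 8 mirrors the raised default from the linked pull request and is a starting point to evaluate, not a universal recommendation:

    # group_vars/rgws.yml (Ansible group variables for the RGW hosts)
    # CPU CGroup limit applied to the containerized radosgw; size it to the
    # node's hardware and the site's workload.
    ceph_rgw_docker_cpu_limit: 8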




Links
Github ceph ceph-ansible pull 3776 (closed): radosgw: Raise cpu limit to 8 (last updated 2020-09-15 13:47:35 UTC)
Github ceph ceph-ansible pull 3794 (closed): Automatic backport of pull request #3776 (last updated 2020-09-15 13:47:35 UTC)
Red Hat Knowledge Base (Solution) 4034561: Ceph - Red Hat Ceph Storage containerized RGW requires higher CPU quota as default (last updated 2019-04-03 16:01:23 UTC)
Red Hat Product Errata RHSA-2019:0911 (last updated 2019-04-30 15:57:21 UTC)

Comment 8 Ben England 2019-02-26 19:55:19 UTC
you're right, sloppy on my part.  It does not remove the CGroup limit but it sets it so high that it is as if we had removed the limit - there is no way for the container to get that big.  I have recommended that the memory CGroup limit in ceph-ansible be removed entirely, see discussion in https://github.com/ceph/ceph-ansible/issues/3617

Comment 15 John Harrigan 2019-04-01 14:32:29 UTC
I think Ben's response in C#14 addresses the needinfo request.

Comment 21 Vasishta 2019-04-22 03:24:41 UTC
Observed the changes from ceph-ansible's perspective; everything looks intact per the requirements.
Moving to VERIFIED state.

Comment 24 Ben England 2019-04-28 21:46:05 UTC
Am having trouble reading the doc text in the preceding post here, but got it in the e-mail.   It said "The default CPU quota for containerized Ceph Object Gateway was significantly lower than for bare-metal Ceph Object Gateway. With this update, the default value for the CPU quota (`--cpu-quota`) for Ceph Object Gateways deployed in containers has been increased."

This is incorrect. There is no CPU quota for bare-metal Ceph Rados (not Object) Gateway. You could say that "the default CPU CGroup limit for containerized RGW was very low and has been increased in this update to be more reasonable for typical HDD production environments - however, the sysadmin may want to evaluate what limit should be set for the site's configuration and workload." Make sense?
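
One way to do that evaluation is to read back the limit actually applied to a running RGW container; a sketch assuming a Docker-based deployment (the container name is illustrative and varies by host and deployment):

    # CpuQuota and CpuPeriod are in microseconds; quota divided by period is
    # roughly the number of CPU cores the container may use (0 or -1 = no quota).
    docker inspect --format '{{ .HostConfig.CpuQuota }} {{ .HostConfig.CpuPeriod }}' ceph-rgw-<hostname>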

Comment 26 Ben England 2019-04-29 19:01:55 UTC
Object Gateway is fine; I don't care which name you use as long as people are used to it. My main concern was that there is no default CPU quota for the bare-metal configuration, and that problem has been corrected. I talked with John Brier about that on IRC. Thx -ben

Comment 28 errata-xmlrpc 2019-04-30 15:57:06 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2019:0911

