Bug 1391500 - ceph-ansible should enable default bucket object quota for new ceph installs
Summary: ceph-ansible should enable default bucket object quota for new ceph installs
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Storage Console
Classification: Red Hat Storage
Component: ceph-ansible
Version: 2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 2
Assignee: Ali Maredia
QA Contact: Tejas
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-11-03 12:51 UTC by Uday Boppana
Modified: 2017-06-19 13:15 UTC
CC: 19 users

Fixed In Version: ceph-ansible-2.2.5-1.el7scon
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-06-19 13:15:47 UTC
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Github ceph ceph-ansible pull 1508 0 None None None 2017-05-10 17:13:22 UTC
Red Hat Bugzilla 1389845 0 unspecified CLOSED [RFE] Set a default quota on the bucket index 2022-02-21 18:05:47 UTC
Red Hat Product Errata RHBA-2017:1496 0 normal SHIPPED_LIVE ceph-installer, ceph-ansible, and ceph-iscsi-ansible update 2017-06-19 17:14:02 UTC

Internal Links: 1389845

Description Uday Boppana 2016-11-03 12:51:28 UTC
When a customer deploys a new Ceph cluster using ceph-ansible, we should enable a default quota of 100K objects per RGW bucket.

Comment 2 seb 2016-11-03 15:09:27 UTC
If you need to change a specific Ceph configuration option, you can use the "ceph_conf_overrides" variable and edit the appropriate section.
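
A minimal sketch of that mechanism, assuming a group_vars/all.yml file; the option and value below are purely illustrative, not something decided in this bug:

ceph_conf_overrides:
  global:
    # any valid ceph.conf key/value pair can be injected this way
    osd_pool_default_size: 3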

Comment 3 Alfredo Deza 2016-11-03 15:39:06 UTC
In what situations would we not want this as a default?

Is this just a ceph.conf change? (What is the actual key? Does it need to be in global, or is the rgw section fine?)

Is there any Ceph version where this setting wouldn't make sense? Or are we assuming it will work in all cases?

Comment 5 Uday Boppana 2016-11-03 15:53:09 UTC
- We want this quota to be the default for all new Ceph installs in 2.1.
- Matt can help with what needs to be changed to make this happen.
- We want this as the default for all new installs, with no exceptions.

Comment 6 seb 2016-11-03 15:59:46 UTC
We need clarification on this. So, to reinforce Alfredo's comment:

* is it a Ceph configuration option (to declare in ceph.conf)?
* is it something to configure at a pool level, or a command to send out using the radosgw-admin CLI?

Without this we cannot help you further.

Comment 7 Matt Benjamin (redhat) 2016-11-03 16:15:04 UTC
(In reply to seb from comment #6)
> We need clarification on this. So, to reinforce Alfredo's comment:
> 
> * is it a Ceph configuration option (to declare in ceph.conf)?
> * is it something to configure at a pool level, or a command to send out
> using the radosgw-admin CLI?
> 
> Without this we cannot help you further.

Yes, there is a ceph.conf option, bucket_default_quota_max_objects, that should be changed from its default value of "infinite/no limit" to "100K * <the chosen number of bucket shards>".

(the presumption above is, of course, that the RGW metadata pool contains sufficient PGs and that those PGs have sufficient OSD residency to be effective)

yehuda, can you provide your formal ack here (adjusting the above as appropriate)?

Matt

Comment 8 Matt Benjamin (redhat) 2016-11-03 16:22:05 UTC
(In reply to Matt Benjamin (redhat) from comment #7)

> Yes, there is a ceph.conf option, bucket_default_quota_max_objects, that
> should be changed from its default value of "infinite/no limit" to "100K *
> <the chosen number of bucket shards>".
> 

Sorry, from its present value of "<no value>", which has an internal value of "-1", the effect of which is "infinite/no limit".
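
A quick worked example of the formula from comment 7, taking "100K" as 100,000 objects per shard and the 16 index shards eventually chosen in comment 32; the rgw_ prefix on the key is my assumption of the full ceph.conf option name:

# 100,000 objects per shard * 16 bucket index shards = 1,600,000 objects per bucket
rgw_bucket_default_quota_max_objects: 1600000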

Comment 9 seb 2016-11-03 16:37:12 UTC
Thanks Matt for the clarification.

I'm not a big fan of hardcoding any value into the installer.
To me it's up to the person installing the product to configure it properly, even more so when it comes to configuring Ceph options based on the deployment topology.

So what I'm proposing is to use the "ceph_conf_overrides" variable.
Unfortunately we have a bug in the module generating the template when we use a variable while declaring sections; normally you would do something like:

ceph_conf_overrides:
  "client.rgw.{{ hostvars[inventory_hostname]['ansible_hostname'] }}":
    "bucket default quota max objects": "YOUR_DESIRED_VALUE"

I have already reported the issue to the maintainer of the module; hopefully we can have this fixed soon: https://github.com/ceph/ceph-ansible/pull/1018

Thanks!

Comment 10 Uday Boppana 2016-11-03 16:43:28 UTC
@seb - This is a one-time exception that we want to hard code. The issue with growing bucket indexes is causing cluster-down scenarios in customer deployments, so we want to hardcode the value for now as a safeguard if the issue with sections above is not fixed in time for 2.1.

Comment 11 seb 2016-11-03 16:51:33 UTC
Alright, give me the line and I'll add it to the code and do the cherry-pick downstream, but since Matt said we need to apply the formula "100K * <the chosen number of bucket shards>", can we really get a hardcoded value?

Comment 12 Ken Dreyer (Red Hat) 2016-11-03 17:01:35 UTC
Uday, why is the change at https://github.com/ceph/ceph/pull/11711 not sufficient?

Comment 13 Uday Boppana 2016-11-03 17:13:03 UTC
@ken That PR (11711) would also apply the limit to new buckets created in existing installations. For 2.1 we want to impose the limit only on new Ceph installs and not on existing ones; hence this BZ.

Comment 32 Ali Maredia 2017-04-25 21:02:25 UTC
After a meeting with RGW engineering we decided to change rgw_override_bucket_index_max_shards to 16 instead of 32.

Below is a PR adding those two conf vars to ceph-ansible master:

https://github.com/ceph/ceph-ansible/pull/1458

Below is a PR adding those two conf vars to ceph-ansible stable-2.2 branch:

https://github.com/ceph/ceph-ansible/pull/1474
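
For illustration, the two settings expressed via ceph_conf_overrides using the per-RGW section from comment 9; the linked PRs add dedicated group_vars variables whose exact names and defaults are not reproduced here:

ceph_conf_overrides:
  "client.rgw.{{ hostvars[inventory_hostname]['ansible_hostname'] }}":
    rgw_override_bucket_index_max_shards: 16
    # 100K objects per shard * 16 shards (see comment 7)
    rgw_bucket_default_quota_max_objects: 1600000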

Comment 42 Ian Colle 2017-05-15 14:54:35 UTC
Looks like a doc fix is needed too. Please confirm.

Comment 43 Daniel Gryniewicz 2017-05-15 15:20:27 UTC
The issue is that default quota has 2 effects:

1. It applies a quota to every relevant op, if no user/bucket quota is set.

2. radosgw-admin uses it to set the relevant user/bucket quota by default.

These are orthogonal (that is, we don't rely on radosgw-admin doing the right thing), but since radosgw-admin doesn't read the client.rgw.* config by default, it won't apply the setting if it's in that section. Instead, it needs to be in the global section for both RGW and radosgw-admin to see it. The docs should probably be updated.
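
Following this comment, a hedged sketch of the same overrides moved to the global section so that both the RGW daemon and radosgw-admin see them (option names as assumed earlier in this bug, not copied from the merged change):

ceph_conf_overrides:
  global:
    rgw_override_bucket_index_max_shards: 16
    rgw_bucket_default_quota_max_objects: 1600000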

Comment 44 Ian Colle 2017-05-16 16:48:11 UTC
Backport PR merged, please move to POST.

Comment 46 John Poelstra 2017-05-17 15:16:14 UTC
Discussed at the program meeting: backporting work should be merged today and the bug should be in ON_QA by tomorrow.

Comment 51 errata-xmlrpc 2017-06-19 13:15:47 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:1496

