When a customer deploys a new Ceph cluster using ceph-ansible, we should enable a default quota of 100K objects per RGW bucket.
If you need to change a specific Ceph configuration option, you can use the "ceph_conf_overrides" variable and edit the appropriate section.
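For reference, a minimal sketch of how such an override is declared in group_vars (the section and option below are placeholders, not the setting being requested in this BZ):

    ceph_conf_overrides:
      global:
        # any valid ceph.conf option can be placed here; "foo" is a placeholder
        foo: 1234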
In what situations would we not want this as a default? Is this just a ceph.conf change? (What is the actual key? Does it need to be in the global section, or is the rgw section fine?) Is there any Ceph version where this setting wouldn't make sense, or are we assuming it will work in all cases?
- We want this quota to be the default for all new Ceph installs in 2.1.
- Matt can help with what needs to be changed to make this happen.
- We want this as the default for all new installs, with no exceptions.
We need clarification on this. To reinforce Alfredo's comment:

* Is it a Ceph configuration option (i.e. something to declare in ceph.conf)?
* Or is it something to configure at the pool level, or a command to run with the radosgw-admin CLI?

Without this we cannot help you further.
(In reply to seb from comment #6)
> We need clarification on this. To reinforce Alfredo's comment:
>
> * Is it a Ceph configuration option (i.e. something to declare in ceph.conf)?
> * Or is it something to configure at the pool level, or a command to run
>   with the radosgw-admin CLI?
>
> Without this we cannot help you further.

Yes, there is a ceph.conf option, bucket_default_quota_max_objects, that should be changed from its default value of "infinite/no limit" to "100K * <the chosen number of bucket shards>" (the presumption above is that the RGW metadata pool contains sufficient PGs, and the PGs sufficient OSD residency, to be effective, of course).

yehuda, can you provide your formal ack here (adjusting the above as appropriate)?

Matt
(In reply to Matt Benjamin (redhat) from comment #7)
> Yes, there is a ceph.conf option, bucket_default_quota_max_objects, that
> should be changed from its default value of "infinite/no limit" to
> "100K * <the chosen number of bucket shards>"

Sorry, from its present value of "<no value>", which has an internal value of "-1", the effect of which is "infinite/no limit".
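To make the formula concrete (illustrative numbers only, not a final decision): if 100K is read as 102400 objects per shard and 16 bucket index shards are chosen, the resulting value would be 102400 * 16 = 1638400 objects per bucket.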
Thanks Matt for the clarification.

I'm not a big fan of hardcoding any value into the installer. To me it's up to the person installing the product to configure it properly, even more so when it comes to Ceph configuration options that depend on the deployment topology. So what I'm proposing is to use the "ceph_conf_overrides" variable. Unfortunately we have a bug in the module generating the template when we use a variable while declaring sections; normally you would do something like:

    ceph_conf_overrides:
      "client.rgw.{{ hostvars[inventory_hostname]['ansible_hostname'] }}":
        "bucket default quota max objects": "YOUR_DESIRED_VALUE"

I already reported the issue to the maintainer of the module, hopefully we can have this fixed soon: https://github.com/ceph/ceph-ansible/pull/1018

Thanks!
@seb - This hard-coding is a one-time exception that we want to make. The issue with growing bucket indexes is causing cluster-down scenarios in customer deployments, so we want to hardcode the value for now as a safeguard, in case the issue with sections above is not fixed in time for 2.1.
Alright, give me the line and I'll add it to the code and do the cherry-pick downstream. But since Matt said we need to apply the formula "100K * <the chosen number of bucket shards>", can we really settle on a single hardcoded value?
Uday, why is the change at https://github.com/ceph/ceph/pull/11711 not sufficient?
@ken That PR (11711) would also apply the limit to new buckets created in existing installations. For 2.1 we want to impose the limit only on new Ceph installs, not on existing ones. Hence this BZ.
After a meeting with RGW eng we decided to set rgw_override_bucket_index_max_shards to 16 instead of 32.

PR adding those two conf vars to ceph-ansible master:
https://github.com/ceph/ceph-ansible/pull/1458

PR adding those two conf vars to the ceph-ansible stable-2.2 branch:
https://github.com/ceph/ceph-ansible/pull/1474
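As a rough sketch of what the resulting group_vars settings would look like (the quota variable name and value below are my assumption of how the PRs expose it, following the 102400 * 16 calculation above; see the PRs for the authoritative names and defaults):

    # number of bucket index shards for new installs (decided above)
    rgw_override_bucket_index_max_shards: 16
    # assumed variable name for the default per-bucket object quota
    rgw_bucket_default_quota_max_objects: 1638400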
Looks like a doc fix is needed too. Please confirm.
The issue is that the default quota has two effects:

1. It applies a quota to every relevant op, if no user/bucket quota is set.
2. radosgw-admin uses it to set the relevant user/bucket quota by default.

These are orthogonal (that is, we don't rely on radosgw-admin doing the right thing), but since radosgw-admin doesn't read the [client.rgw.*] config by default, it won't apply the option if it's in that section. Instead, it needs to be in the global section for both RGW and radosgw-admin to see it. Docs should probably be updated.
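Based on that, a sketch of the override with the option placed in the global section (option name taken from the comments above; the value is illustrative only):

    ceph_conf_overrides:
      global:
        # in [global] so both the RGW daemons and radosgw-admin see it
        bucket_default_quota_max_objects: 1638400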
Backport PR merged, please move to POST.
Discussed at the program meeting: the backporting work should be merged today and this should be in ON_QA by tomorrow.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2017:1496