Bug 1379397 - [DOCS] Request to include information on proper number of shards to configure when using rgw bucket sharding
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat
Component: Documentation
Version: 1.3.2
Hardware: x86_64
OS: Linux
Priority: high
Severity: medium
Target Milestone: rc
Target Release: 2.1
Assignee: Bara Ancincova
QA Contact: Ramakrishnan Periyasamy
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-09-26 15:04 UTC by Mike Hackett
Modified: 2020-01-17 15:58 UTC
CC List: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-11-28 09:39:02 UTC
Target Upstream Version:


Links:
Red Hat Bugzilla 1377875 (high, CLOSED): [support] OSD recovery causes pause in IO which lasts longer than expected (last updated 2021-02-22 00:41:40 UTC)

Internal Links: 1377875

Description Mike Hackett 2016-09-26 15:04:36 UTC
Description of problem:

We have seen cases where an improper number of RGW bucket index shards was configured, leading to performance problems and to issues during recovery. The current documentation only highlights possible performance issues when large numbers of objects are placed into buckets; it does not provide recommendations for deciding on the number of shards to configure, or explain how to determine the proper number of shards.

Current doc detailing bucket sharding:

https://access.redhat.com/documentation/en/red-hat-ceph-storage/1.3/single/object-gateway-guide-for-red-hat-enterprise-linux/#bucket_sharding
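
As a sketch of the kind of sizing guidance being requested (the ~100,000 objects-per-shard figure and the variable names below are illustrative assumptions, not an official recommendation):

    # hypothetical sizing for a bucket expected to hold ~1 million objects,
    # assuming a guideline of roughly 100,000 objects per bucket index shard
    expected_objects=1000000
    objects_per_shard=100000
    # round up so the last partial shard is counted; prints 10
    echo $(( (expected_objects + objects_per_shard - 1) / objects_per_shard ))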

Version-Release number of selected component (if applicable):
1.3.x

Comment 8 Mike Hackett 2016-11-02 20:01:42 UTC
Hello Bara,

In 1.3, RGW used federated gateways to replicate data across multiple clusters, which are organized into regions. In 2.0 this changed to multi-site, where regions were converted to zonegroups.

So for [1], in the 1.3 docs it should be labeled "federated configuration"; in the 2.0 docs it should be labeled "multi-site configuration".

The steps are not the same between the two: in a federated configuration, the bucket_index_max_shards setting is updated in the region.json for that region, whereas in a multi-site configuration it needs to be updated in the zonegroup.json instead.
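
Roughly, the two procedures would look like this (a sketch only; the exact JSON layout and the required follow-up commands can differ between releases):

    # 1.3 federated configuration:
    radosgw-admin region get > region.json
    # edit region.json: set "bucket_index_max_shards"
    radosgw-admin region set < region.json
    radosgw-admin regionmap update

    # 2.0 multi-site configuration:
    radosgw-admin zonegroup get > zonegroup.json
    # edit zonegroup.json: set "bucket_index_max_shards" for each zone
    radosgw-admin zonegroup set < zonegroup.json
    radosgw-admin period update --commit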

For [2], "rgw_override_bucket_index_max_shards" is a global setting that enables bucket index sharding on all new buckets created after the setting is given a value.
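
For example, in ceph.conf (the shard count here is only illustrative):

    [global]
    # the index of every bucket created from now on is split into 10 shards
    rgw_override_bucket_index_max_shards = 10

Note that this does not reshard existing buckets; it only applies to buckets created after the setting takes effect.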

The "bucket_index_max_shards" is set to achieve a consistent shard counts for zones in a region or zonegroup for failover.

@Yehuda 

do you have any additional comments or concerns?

Comment 11 Ramakrishnan Periyasamy 2016-11-04 10:10:37 UTC
Thanks, Bara, for providing the backup links.

The doc is updated with the formula to calculate the number of shards and the max shards value. Moving this bug to the verified state.

