Bug 2275222 - In Replica-1 data always goes to one particular osd and never goes to the additional osds present for a failure domain
Summary: In Replica-1 data always goes to one particular osd and never goes to the additional osds present for a failure domain
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: ocs-operator
Version: 4.16
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ODF 4.16.0
Assignee: Malay Kumar parida
QA Contact: Aviad Polak
URL:
Whiteboard:
Depends On:
Blocks: 2276339
 
Reported: 2024-04-16 05:55 UTC by Malay Kumar parida
Modified: 2024-07-17 13:19 UTC (History)
CC: 2 users

Fixed In Version: 4.16.0-81
Doc Type: No Doc Update
Doc Text:
Clone Of:
Clones: 2276339
Environment:
Last Closed: 2024-07-17 13:19:33 UTC
Embargoed:




Links
Github red-hat-storage ocs-operator pull 2566 (open): Bug 2275222: [release-4.16] Specify pg & pgm num to distribute data across OSDs for replica-1 pools (last updated 2024-04-18 08:24:08 UTC)
Red Hat Product Errata RHSA-2024:4591 (last updated 2024-07-17 13:19:40 UTC)

Description Malay Kumar parida 2024-04-16 05:55:06 UTC
In Replica-1 we support increasing the number of OSDs per failure domain. But even after the number of OSDs per failure domain is increased, the data always goes to one particular OSD. This results in a large imbalance of data among the OSDs in a failure domain.

This happens because the PG and PGP numbers always stay at 1 for the replica-1 pools:

pool 5 'ocs-storagecluster-cephblockpool-us-east-1b' replicated size 1 min_size 1 crush_rule 8 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 126 flags hashpspool,selfmanaged_snaps stripe_width 0 application rbd
pool 6 'ocs-storagecluster-cephblockpool-us-east-1c' replicated size 1 min_size 1 crush_rule 10 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 128 flags hashpspool,selfmanaged_snaps stripe_width 0 application rbd
pool 7 'ocs-storagecluster-cephblockpool-us-east-1a' replicated size 1 min_size 1 crush_rule 13 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 123 flags hashpspool,selfmanaged_snaps stripe_width 0 application rbd
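
For reference, the PG counts can be confirmed from the rook-ceph toolbox pod. This is a minimal sketch; it assumes the toolbox deployment is named rook-ceph-tools in the openshift-storage namespace and uses the pool names from the dump above.

  # Open a shell in the rook-ceph toolbox pod (deployment name/namespace may differ)
  oc -n openshift-storage rsh deploy/rook-ceph-tools

  # List the replica-1 pools together with their pg_num/pgp_num
  ceph osd pool ls detail | grep ocs-storagecluster-cephblockpool

  # Check a single pool's PG counts
  ceph osd pool get ocs-storagecluster-cephblockpool-us-east-1a pg_num
  ceph osd pool get ocs-storagecluster-cephblockpool-us-east-1a pgp_num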

Is there any workaround available to the best of your knowledge?
Yes, disable the reconciliation of the CephBlockPool and add the following (a concrete patch example follows the snippet):
spec:
  parameters:
    pg_num: '16'
    pgp_num: '16'
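
As a concrete illustration of that workaround, the sketch below patches one replica-1 pool directly (repeat per failure-domain pool). It assumes the default openshift-storage namespace and the pool names shown above; how reconciliation of the CephBlockPool is disabled (for example, via a reconcileStrategy setting on the StorageCluster) varies by ODF version and is an assumption here, not part of this bug's fix.

  # Add explicit PG counts to a replica-1 pool. Reconciliation of the
  # CephBlockPool must be disabled first, otherwise ocs-operator may
  # revert these parameters on the next reconcile.
  oc -n openshift-storage patch cephblockpool ocs-storagecluster-cephblockpool-us-east-1a \
    --type merge -p '{"spec":{"parameters":{"pg_num":"16","pgp_num":"16"}}}'

  # Verify from the toolbox that the pool now reports pg_num 16 / pgp_num 16
  ceph osd pool get ocs-storagecluster-cephblockpool-us-east-1a pg_num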

Comment 7 errata-xmlrpc 2024-07-17 13:19:33 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.16.0 security, enhancement & bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2024:4591

