Bug 2253013

Summary: Ceph storage pool created with pg_num and pgp_num 1; osd_pool_default_pg_num is 32, must set deviceClass on all pools
Product: [Red Hat Storage] Red Hat OpenShift Data Foundation
Reporter: Brenda McLaren <bmclaren>
Component: ocs-operator
Assignee: Malay Kumar parida <mparida>
Status: CLOSED ERRATA
QA Contact: Aviad Polak <apolak>
Severity: urgent
Priority: unspecified
Version: 4.13
CC: almartin, apolak, asriram, badhikar, bmclaren, etamir, gjose, hnallurv, kelwhite, lgangava, mmanjuna, nberry, nigoyal, odf-bz-bot, paarora, rlaberin, skatiyar, srai, srozen, tdesala, tnielsen
Flags: almartin: needinfo-
Target Release: ODF 4.17.0
Hardware: All
OS: All
Fixed In Version: 4.17.0-84
Doc Type: No Doc Update
Type: Bug
Clones: 2300016, 2300021, 2300022, 2300023 (view as bug list)
Last Closed: 2024-10-30 14:26:05 UTC
Bug Blocks: 2297295, 2300016, 2300018, 2300021, 2300022, 2300023, 2300332, 2301588, 2306496

Attachments:
  Storage Class screenshot (flags: none)
  Create BlockPool screenshot (flags: none)
  mgr log from 03-19-2024 when replica2-pool was created (flags: none)

Description Brenda McLaren 2023-12-05 15:39:06 UTC
Description of problem (please be as detailed as possible and provide log
snippets):

When creating a new storage class and a new pool, the pool is created with pg_num set to 1 even though osd_pool_default_pg_num is 32. What's the reasoning behind this? Given the performance implications, it seems counterintuitive.


bash-5.1$ ceph osd pool ls detail | grep ocs-storagecluster-cephrbd-replica2-pool
pool 13 'ocs-storagecluster-cephrbd-replica2-pool' replicated size 2 min_size 1 crush_rule 12 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 98 flags hashpspool,selfmanaged_snaps stripe_width 0 compression_mode none application rbd

bash-5.1$ ceph config get mon osd_pool_default_pg_num
32
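
Note that the pool above shows autoscale_mode on, so pg_num 1 is the pg_autoscaler's starting point rather than a fixed setting. To see what the autoscaler intends for each pool, the standard Ceph CLI can be queried from the same toolbox shell (a quick sketch; the health check is a general hint, not specific to this bug):

# Show the pg_autoscaler's view of every pool: current PG count,
# its target, and the inputs it uses (size, ratio, bias).
bash-5.1$ ceph osd pool autoscale-status

# If a pool is missing from that output, the autoscaler may be skipping it,
# for example because of overlapping CRUSH roots (pools with and without an
# explicit device class); the mgr log normally says why.
bash-5.1$ ceph health detail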

Version of all relevant components (if applicable):

OCP v4.13
Ceph v6.1

Does this issue impact your ability to continue to work with the product
(please explain in detail what the user impact is)?

The pool is usable, but performance is less than desired.

Is there any workaround available to the best of your knowledge?

Yes, use the Rook/Ceph toolbox to increase the number of PGs in the pool to the desired number (see the sketch below).
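
For reference, a sketch of that workaround using the toolbox. The pool name is taken from the output above; the openshift-storage namespace and the rook-ceph-tools deployment name are assumptions based on a default ODF install:

# Open a shell in the toolbox pod (assumes the toolbox is enabled).
$ oc -n openshift-storage rsh deploy/rook-ceph-tools

# Raise pg_num and pgp_num on the affected pool to the desired value.
# With autoscale_mode on, the pg_autoscaler may adjust these again later.
bash-5.1$ ceph osd pool set ocs-storagecluster-cephrbd-replica2-pool pg_num 32
bash-5.1$ ceph osd pool set ocs-storagecluster-cephrbd-replica2-pool pgp_num 32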


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?

1 - create a new storage class; when selecting the pool, select Create New Pool.
 
Is this issue reproducible?

Yes, every new pool created while creating a new storage class is affected (see the CLI sketch below).
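
For completeness, a rough CLI equivalent of that UI flow, which creates the CephBlockPool directly. This is a minimal sketch: the namespace and failureDomain are assumptions based on a default ODF install, and the commented-out deviceClass field reflects what the bug summary says the fix now sets on all pools:

# Create a CephBlockPool mirroring the replica-2 pool from the report.
$ cat <<EOF | oc apply -f -
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: ocs-storagecluster-cephrbd-replica2-pool
  namespace: openshift-storage
spec:
  failureDomain: host
  replicated:
    size: 2
  # Per the bug summary, the fix sets a deviceClass on every pool, e.g.:
  # deviceClass: ssd
EOF

# Then inspect the resulting Ceph pool from the toolbox:
bash-5.1$ ceph osd pool ls detail | grep replica2-pool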

Can this issue be reproduced from the UI?

Yes

Actual results:

pg_num and pgp_num are 1

Expected results:

pg_num and pgp_num should be set to the value of osd_pool_default_pg_num.

Comment 4 Brenda McLaren 2024-01-03 18:33:38 UTC
Created attachment 2007075 [details]
Storage Class screenshot

Comment 5 Brenda McLaren 2024-01-03 18:34:15 UTC
Created attachment 2007076 [details]
Create BlockPool screenshot

Comment 73 errata-xmlrpc 2024-10-30 14:26:05 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.17.0 Security, Enhancement, & Bug Fix Update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2024:8676

Comment 74 Red Hat Bugzilla 2025-02-28 04:25:05 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days.