Bug 2304142 - [UI] Hide the .mgr built-in ceph pool from the UI
Summary: [UI] Hide the .mgr built-in ceph pool from the UI
Keywords:
Status: CLOSED DUPLICATE of bug 2300332
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: management-console
Version: 4.16
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ODF 4.16.1
Assignee: Nishanth Thomas
QA Contact: Prasad Desala
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2024-08-12 14:06 UTC by Travis Nielsen
Modified: 2024-08-16 14:56 UTC
CC List: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2024-08-12 14:24:55 UTC
Embargoed:



Description Travis Nielsen 2024-08-12 14:06:39 UTC
This bug was initially created as a copy of Bug #2303080

I am copying this bug because: 

With the 4.16.1 fix to initialize the pools so the PG autoscaler can properly scale the PGs, the .mgr CephBlockPool CR is now being created by ODF. The user has no need to configure this pool, so let's hide it from the UI.
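
For reference, the new CR can be listed with oc; a minimal sketch, assuming the default openshift-storage namespace:

# List the CephBlockPool CRs that ODF creates, including the new one for the
# built-in .mgr pool (namespace assumed to be the default openshift-storage)
oc get cephblockpool -n openshift-storage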


Description of problem (please be as detailed as possible and provide log snippets):
=================================================================================
The default cephblockpool, cephfilesystem, and cephobjectstore.rgw pools are created with a single PG on 4.16.0 (4.16.0-137) and 4.16.1 (4.16.1-7).
The regression appears to have started in 4.16.0; on 4.15.5 the PG count for the default Ceph pools was set correctly.

ODF 4.16.0 and 4.16.1:
==================
--- POOLS ---
POOL                                                   ID  PGS   STORED   (DATA)  (OMAP)  OBJECTS     USED   (DATA)  (OMAP)  %USED  MAX AVAIL  QUOTA OBJECTS  QUOTA BYTES  DIRTY  USED COMPR  UNDER COMPR
ocs-storagecluster-cephobjectstore.rgw.buckets.non-ec   1    8      0 B      0 B     0 B        0      0 B      0 B     0 B      0    435 GiB            N/A          N/A    N/A         0 B          0 B
ocs-storagecluster-cephobjectstore.rgw.buckets.index    2    8      0 B      0 B     0 B       11      0 B      0 B     0 B      0    435 GiB            N/A          N/A    N/A         0 B          0 B
ocs-storagecluster-cephobjectstore.rgw.meta             3    8  2.8 KiB  2.8 KiB     0 B       14  144 KiB  144 KiB     0 B      0    435 GiB            N/A          N/A    N/A         0 B          0 B
ocs-storagecluster-cephobjectstore.rgw.control          4    8      0 B      0 B     0 B        8      0 B      0 B     0 B      0    435 GiB            N/A          N/A    N/A         0 B          0 B
.rgw.root                                               5    8  5.7 KiB  5.7 KiB     0 B       17  192 KiB  192 KiB     0 B      0    435 GiB            N/A          N/A    N/A         0 B          0 B
ocs-storagecluster-cephobjectstore.rgw.log              6    8  3.7 KiB  3.7 KiB     0 B      181  420 KiB  420 KiB     0 B      0    435 GiB            N/A          N/A    N/A         0 B          0 B
ocs-storagecluster-cephobjectstore.rgw.otp              7    8      0 B      0 B     0 B        0      0 B      0 B     0 B      0    435 GiB            N/A          N/A    N/A         0 B          0 B
ocs-storagecluster-cephblockpool                        8    1  128 MiB  128 MiB     0 B      239  384 MiB  384 MiB     0 B   0.03    435 GiB            N/A          N/A    N/A         0 B          0 B
.mgr                                                    9    1  577 KiB  577 KiB     0 B        2  1.7 MiB  1.7 MiB     0 B      0    435 GiB            N/A          N/A    N/A         0 B          0 B
ocs-storagecluster-cephobjectstore.rgw.buckets.data    10    1    1 KiB    1 KiB     0 B        1   12 KiB   12 KiB     0 B      0    435 GiB            N/A          N/A    N/A         0 B          0 B
ocs-storagecluster-cephfilesystem-metadata             11    1   61 KiB   61 KiB     0 B       24  264 KiB  264 KiB     0 B      0    435 GiB            N/A          N/A    N/A         0 B          0 B
ocs-storagecluster-cephfilesystem-data0                12    1    158 B    158 B     0 B        1   12 KiB   12 KiB     0 B      0    435 GiB            N/A          N/A    N/A         0 B          0 B


ODF 4.15.5
==========
 
--- POOLS ---
POOL                                                   ID  PGS   STORED  OBJECTS     USED  %USED  MAX AVAIL
ocs-storagecluster-cephblockpool                        1   32   21 GiB    6.38k   63 GiB   1.21    1.7 TiB
ocs-storagecluster-cephobjectstore.rgw.buckets.non-ec   2    8      0 B        0      0 B      0    1.7 TiB
ocs-storagecluster-cephobjectstore.rgw.control          3    8      0 B        8      0 B      0    1.7 TiB
ocs-storagecluster-cephobjectstore.rgw.log              4    8  6.5 MiB      340   21 MiB      0    1.7 TiB
ocs-storagecluster-cephobjectstore.rgw.meta             5    8  3.3 KiB       17  145 KiB      0    1.7 TiB
ocs-storagecluster-cephobjectstore.rgw.otp              6    8      0 B        0      0 B      0    1.7 TiB
ocs-storagecluster-cephobjectstore.rgw.buckets.index    7    8  381 KiB       11  1.1 MiB      0    1.7 TiB
.rgw.root                                               8    8  5.7 KiB       16  180 KiB      0    1.7 TiB
ocs-storagecluster-cephobjectstore.rgw.buckets.data     9   32  189 KiB        5  600 KiB      0    1.7 TiB
ocs-storagecluster-cephfilesystem-metadata             10   16  659 MiB      226  1.9 GiB   0.04    1.7 TiB
ocs-storagecluster-cephfilesystem-data0                11   32      0 B        0      0 B      0    1.7 TiB
.mgr


pg_num and pgp_num are set to 1 in the ceph osd pool ls detail output:

pool 1 'ocs-storagecluster-cephblockpool' replicated size 3 min_size 2 crush_rule 9 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 47 flags hashpspool,selfmanaged_snaps stripe_width 0 target_size_ratio 0.49 application rbd read_balance_score 3.00
pool 2 'ocs-storagecluster-cephobjectstore.rgw.control' replicated size 3 min_size 2 crush_rule 15 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode on last_change 36 flags hashpspool stripe_width 0 pg_num_min 8 application rgw read_balance_score 1.88
pool 3 '.rgw.root' replicated size 3 min_size 2 crush_rule 13 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode on last_change 36 flags hashpspool stripe_width 0 pg_num_min 8 application rgw read_balance_score 1.50
pool 4 'ocs-storagecluster-cephobjectstore.rgw.log' replicated size 3 min_size 2 crush_rule 12 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode on last_change 36 flags hashpspool stripe_width 0 pg_num_min 8 application rgw read_balance_score 1.50
pool 5 'ocs-storagecluster-cephobjectstore.rgw.buckets.index' replicated size 3 min_size 2 crush_rule 16 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode on last_change 36 flags hashpspool stripe_width 0 pg_num_min 8 application rgw read_balance_score 1.50
pool 6 'ocs-storagecluster-cephobjectstore.rgw.otp' replicated size 3 min_size 2 crush_rule 17 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode on last_change 36 flags hashpspool stripe_width 0 pg_num_min 8 application rgw read_balance_score 1.13
pool 7 'ocs-storagecluster-cephobjectstore.rgw.buckets.non-ec' replicated size 3 min_size 2 crush_rule 14 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode on last_change 35 flags hashpspool stripe_width 0 pg_num_min 8 application rgw read_balance_score 1.50
pool 8 'ocs-storagecluster-cephobjectstore.rgw.meta' replicated size 3 min_size 2 crush_rule 11 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode on last_change 36 flags hashpspool stripe_width 0 pg_num_min 8 application rgw read_balance_score 1.88
pool 9 'ocs-storagecluster-cephfilesystem-metadata' replicated size 3 min_size 2 crush_rule 19 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 25 flags hashpspool stripe_width 0 pg_autoscale_bias 4 pg_num_min 16 recovery_priority 5 application cephfs read_balance_score 3.00
pool 10 'ocs-storagecluster-cephobjectstore.rgw.buckets.data' replicated size 3 min_size 2 crush_rule 21 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 37 flags hashpspool stripe_width 0 target_size_ratio 0.49 application rgw read_balance_score 3.00
pool 11 'ocs-storagecluster-cephfilesystem-data0' replicated size 3 min_size 2 crush_rule 22 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 51 flags hashpspool,selfmanaged_snaps stripe_width 0 target_size_ratio 0.49 application cephfs read_balance_score 3.00
pool 12 '.mgr' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 28 flags hashpspool stripe_width 0 pg_num_max 32 pg_num_min 1 application mgr read_balance_score 3.00

Version of all relevant components (if applicable):
4.16.0 (4.16.0-137) and 4.16.1 (4.16.1-7).

Does this issue impact your ability to continue to work with the product
(please explain the user impact in detail)?
AFAIK, there won't be any issues accessing the pools, but it may impact performance.

Is there any workaround available to the best of your knowledge?
Yes, we can increase the number of PGs for a pool to the desired value from the toolbox pod, as shown below.
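
A minimal sketch of the workaround, run from the rook-ceph toolbox pod; the pool name and target PG count below are examples only:

# Raise the PG count of an affected pool (example pool name and value;
# repeat for each default pool that was created with pg_num 1)
ceph osd pool set ocs-storagecluster-cephblockpool pg_num 32
ceph osd pool set ocs-storagecluster-cephblockpool pgp_num 32

Note that with autoscale_mode on, the PG autoscaler may adjust these values again.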


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
1


Is this issue reproducible?
Yes


Can this issue be reproduced from the UI?


If this is a regression, please provide more details to justify this:
Yes, it is a regression. The default Ceph pools are created correctly on 4.15.5.

Steps to Reproduce:
===================
1) Deploy an ODF cluster on 4.16.0 or 4.16.1 and check the default Ceph pools by running the ceph df command, ceph osd pool ls detail, or any other command that shows the PG count for a pool (see the example below).
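
For example (a sketch; the rook-ceph-tools label and openshift-storage namespace are assumptions based on a default ODF deployment):

# Run the checks through the rook-ceph toolbox pod
TOOLS_POD=$(oc -n openshift-storage get pod -l app=rook-ceph-tools -o name)
oc -n openshift-storage rsh "$TOOLS_POD" ceph df
oc -n openshift-storage rsh "$TOOLS_POD" ceph osd pool ls detail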

Actual results:
===============
The default cephblockpool, cephfilesystem, and cephobjectstore.rgw pools are created with a single PG (pg_num 1, pgp_num 1).

Expected results:
=================
The default Ceph pools should be created with their default PG counts, as in 4.15.5.

