Bug 1946247 - When PG limit is reached, pool gets created in oc database although not created at Ceph level
Summary: When PG limit is reached, pool gets created in oc database although not created...
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat OpenShift Container Storage
Classification: Red Hat Storage
Component: management-console
Version: 4.8
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Nishanth Thomas
QA Contact: Elad
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-04-05 14:02 UTC by Shay Rozen
Modified: 2021-04-06 10:25 UTC
CC List: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-04-06 08:58:18 UTC
Embargoed:


Attachments
Pool that failed to create appears in the pool list as failed. (310.50 KB, image/png)
2021-04-05 14:02 UTC, Shay Rozen

Description Shay Rozen 2021-04-05 14:02:06 UTC
Created attachment 1769272 [details]
Pool that failed to create appears in the pool list as failed.

Description of problem (please be as detailed as possible and provide log
snippets):
When creating a pool via the UI after the PG limit has been reached, the pool fails to create at the Ceph level, but it is still shown in the UI pool management list and is also returned when listing cephblockpool resources via the oc command.
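A rough way to confirm the mismatch from the CLI (a minimal sketch, assuming the CRs live in the openshift-storage namespace and that the rook-ceph toolbox pod is deployed there with its usual app=rook-ceph-tools label):

# Pools known to Kubernetes (CephBlockPool custom resources)
oc -n openshift-storage get cephblockpool

# Pools that actually exist in Ceph, queried through the toolbox pod
TOOLS_POD=$(oc -n openshift-storage get pod -l app=rook-ceph-tools -o name)
oc -n openshift-storage rsh $TOOLS_POD ceph osd pool ls

Any name that shows up in the first list but not in the second (as16 in this report) exists only as a CR.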


Version of all relevant components (if applicable):
OCP 4.8
OCS 4.8 with the pool management feature included.


Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?



Is there any workaround available to the best of your knowledge?


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?


Is this issue reproducible?
3/3


Can this issue be reproduced from the UI?
Only from the UI.

If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Install OCS.
2. Create pools via pool management in the web console until pool creation fails because of the PG limit.
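The failure comes from Ceph's per-OSD placement-group limit. A minimal sketch of how the limit and the current PG usage might be inspected from the toolbox pod during the test (the assumption that mon_max_pg_per_osd is the limit being hit in this environment is mine, not stated in the report):

# PGs currently placed on each OSD (PGS column)
ceph osd df

# The configured per-OSD PG limit (typically 250 on recent Ceph releases)
ceph config get mon mon_max_pg_per_osd

# pg_num requested by every existing pool
ceph osd pool ls detail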



Actual results:
The pool that failed to create is listed in "oc get cephblockpool" and in the pool management section (as failed), but it is not created at the Ceph level because of the PG limit.
We had the same issue when creating a pool via the StorageClass page in 4.6:
https://bugzilla.redhat.com/show_bug.cgi?id=1890135


Expected results:
If the PG limit is reached and the pool is not created in Ceph, the pool should not be created in the oc database either. It also shouldn't be listed in the console when it fails to create.

Additional info:
See pool as16 listed in "oc get cephblockpool" but not listed in Ceph:
 oc get cephblockpool
NAME                               AGE
as09                               13m
as1                                32m
as10                               20m
as11                               20m
as12                               20m
as13                               20m
as14                               19m
as15                               19m
as16                               19m
as2                                32m
as3                                31m
as4                                31m
as5                                31m
as6                                22m
as7                                22m
as8                                21m
as9                                21m
ocs-storagecluster-cephblockpool   123m
pool-rbd-rep2-comp                 5m18s
sd                                 6m29s


ceph df
RAW STORAGE:
    CLASS     SIZE        AVAIL       USED        RAW USED     %RAW USED 
    hdd       1.5 TiB     1.5 TiB     2.6 GiB      5.6 GiB          0.37 
    TOTAL     1.5 TiB     1.5 TiB     2.6 GiB      5.6 GiB          0.37 
 
POOLS:
    POOL                                                      ID     STORED      OBJECTS     USED        %USED     MAX AVAIL 
    ocs-storagecluster-cephblockpool                           1     850 MiB         454     2.5 GiB      0.19       433 GiB 
    ocs-storagecluster-cephobjectstore.rgw.control             2         0 B           8         0 B         0       433 GiB 
    ocs-storagecluster-cephfilesystem-metadata                 3      51 KiB          24     1.5 MiB         0       433 GiB 
    ocs-storagecluster-cephfilesystem-data0                    4       158 B           1     192 KiB         0       433 GiB 
    ocs-storagecluster-cephobjectstore.rgw.meta                5     2.8 KiB          12     1.9 MiB         0       433 GiB 
    ocs-storagecluster-cephobjectstore.rgw.log                 6     4.7 KiB         344     6.6 MiB         0       433 GiB 
    ocs-storagecluster-cephobjectstore.rgw.buckets.index       7         0 B          22         0 B         0       433 GiB 
    ocs-storagecluster-cephobjectstore.rgw.buckets.non-ec      8         0 B           0         0 B         0       433 GiB 
    .rgw.root                                                  9     4.8 KiB          16     2.8 MiB         0       433 GiB 
    ocs-storagecluster-cephobjectstore.rgw.buckets.data       10       1 KiB           1     192 KiB         0       433 GiB 
    as1                                                       12         0 B           0         0 B         0       650 GiB 
    as2                                                       13         0 B           0         0 B         0       650 GiB 
    as3                                                       14         0 B           0         0 B         0       650 GiB 
    as4                                                       15         0 B           0         0 B         0       650 GiB 
    as5                                                       16         0 B           0         0 B         0       650 GiB 
    as6                                                       17         0 B           0         0 B         0       650 GiB 
    as7                                                       18         0 B           0         0 B         0       650 GiB 
    as8                                                       19         0 B           0         0 B         0       650 GiB 
    as9                                                       20         0 B           0         0 B         0       650 GiB 
    as10                                                      21         0 B           0         0 B         0       650 GiB 
    as11                                                      22         0 B           0         0 B         0       650 GiB 
    as12                                                      23         0 B           0         0 B         0       650 GiB 
    as13                                                      24         0 B           0         0 B         0       650 GiB 
    as14                                                      25         0 B           0         0 B         0       650 GiB 
    as15                                                      26         0 B           0         0 B         0       650 GiB
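A quick check of the stale CR itself (a minimal sketch; the exact fields that the CephBlockPool status reports on failure are an assumption and may vary by Rook version):

# The CR is still present in the Kubernetes API; its status section should reflect the failure
oc get cephblockpool as16 -o yaml

# ...while the pool is absent from Ceph (run inside the toolbox pod)
ceph osd pool ls | grep as16 || echo "as16 not present in Ceph"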

Comment 2 Ankush Behl 2021-04-06 08:58:18 UTC
This is the expected behavior for any k8s resource: if the status of
a resource is failing, we don't delete that resource automatically.
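Under that model the stale CR has to be cleaned up by hand; a minimal sketch of such a cleanup (deleting the CR is an assumption about a reasonable remediation, not a documented workaround for this bug):

# Remove the CephBlockPool CR that never materialized in Ceph;
# Rook should then reconcile the deletion, and since the pool never existed
# in Ceph there is nothing to remove on that side
oc delete cephblockpool as16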

Comment 3 Elad 2021-04-06 10:25:27 UTC
Changing to WONTFIX, as the issue reported here is real.

