Bug 1886817 - Disable OCS deployment if total CPU is < 24 or total memory is < 66 GiB in selected nodes in UI
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Console Storage Plugin
Version: 4.6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 4.7.0
Assignee: Bipul Adhikari
QA Contact: Raz Tamir
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-10-09 13:02 UTC by Neha Berry
Modified: 2020-12-13 13:38 UTC
CC List: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-12-13 13:39:08 UTC
Target Upstream Version:
Embargoed:


Attachments
Screenshot from UI on selecting 3 nodes (130.31 KB, image/png)
2020-10-09 13:02 UTC, Neha Berry


Links
Red Hat Bugzilla 1886541 (CLOSED): UI should fallback to minimal deployment only after total CPU < 30 || totalMemory < 72 GiB for initial deployment (last updated 2021-02-22 00:41:40 UTC)

Internal Links: 1886541

Description Neha Berry 2020-10-09 13:02:18 UTC
Created attachment 1720264 [details]
Screenshot from UI on selecting 3 nodes

Description of problem:
---------------------------------
Currently, on the Create Storage Cluster page, if the aggregated CPU of the selected nodes is < 24 and their total memory is < 66 GiB, storage cluster creation still proceeds. The UI shows the following warning, which may mislead users into thinking that at least a minimal deployment would work and the setup would come up:

```
The selected nodes do not match the OCS storage cluster recommended requirements of an aggregated 42 CPUs and 102 GiB of RAM. If the selection cannot be modified, a minimal cluster will be deployed.
```

Ideally, OCS deployment should be blocked in this case, and no minimal deployment should be attempted either.

Reason: some pods would remain in Pending state and the deployment would never succeed, not even a minimal deployment (with reduced CPU for the RGW, OSD, and MDS pods).
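
A minimal sketch of the requested guard, in the plugin's language (TypeScript). The node shape and helper names here are illustrative assumptions, not the plugin's actual API:

```
// Hedged sketch only: SelectedNode and shouldDisableCreate are
// hypothetical names, not the console plugin's real types.
interface SelectedNode {
  cpus: number;       // allocatable CPUs reported for the node
  memoryGiB: number;  // allocatable memory, converted to GiB
}

// Floor from this bug: below 24 aggregated CPUs or 66 GiB of aggregated
// memory, not even a minimal deployment can schedule all of its pods.
const MIN_CPUS = 24;
const MIN_MEMORY_GIB = 66;

const shouldDisableCreate = (nodes: SelectedNode[]): boolean => {
  const totalCpus = nodes.reduce((sum, node) => sum + node.cpus, 0);
  const totalMemoryGiB = nodes.reduce((sum, node) => sum + node.memoryGiB, 0);
  return totalCpus < MIN_CPUS || totalMemoryGiB < MIN_MEMORY_GIB;
};

// With the reproducer below (three 4-CPU / 16 GB workers, i.e. 12 CPUs
// and 48 GiB in total), shouldDisableCreate returns true.
```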

Version-Release number of selected component (if applicable):
-----------------------------------------------
OCS 4.6 and OCP 4.6 and above

How reproducible:
-----------------
Always

Steps to Reproduce:
1. Create an OCP cluster with worker nodes of, say, 4 CPUs and 16 GB memory each.

2. On the Create Storage Cluster page, select a minimum of 3 worker nodes, making sure that the total CPU is still less than 24.

3. Check that the Create button can still be clicked, even though the OCS installation ultimately won't succeed and pods will remain in Pending state due to "Insufficient CPU".


Actual results:
---------------------
Storage cluster creation proceeds but does not succeed due to "Insufficient CPU" for the pods.


Expected results:
----------------------
The UI should disable storage cluster creation if the aggregated configuration of the selected nodes is below the minimal requirement (a 1 OSD cluster).
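
Taken together with linked bug 1886541 (fall back to a minimal deployment only when total CPU < 30 || totalMemory < 72 GiB) and the recommended 42 CPUs / 102 GiB from the UI warning, the expected behavior amounts to a three-tier decision. A sketch under those assumptions, with all names hypothetical:

```
// Hypothetical tiering combining this bug's hard floor (24 CPUs / 66 GiB)
// with the minimal-deployment cutoff from bug 1886541 (30 CPUs / 72 GiB).
type DeploymentDecision = 'blocked' | 'minimal' | 'full';

const resolveDeployment = (
  totalCpus: number,
  totalMemoryGiB: number,
): DeploymentDecision => {
  if (totalCpus < 24 || totalMemoryGiB < 66) {
    // Disable the Create button: not even a minimal deployment
    // (reduced CPU for RGW, OSD, and MDS) will schedule.
    return 'blocked';
  }
  if (totalCpus < 30 || totalMemoryGiB < 72) {
    // Warn and deploy a minimal cluster.
    return 'minimal';
  }
  // Proceed normally; the recommended aggregate is 42 CPUs / 102 GiB.
  return 'full';
};
```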



Additional info:
---------------------
$ oc get pods -o wide -n openshift-storage|grep -v Running
NAME                                                              READY   STATUS      RESTARTS   AGE   IP             NODE                           NOMINATED NODE   READINESS GATES
rook-ceph-mds-ocs-storagecluster-cephfilesystem-a-898765765brpl   0/1     Pending     0          51m   <none>         <none>                         <none>           <none>
rook-ceph-mds-ocs-storagecluster-cephfilesystem-b-5df979496vppt   0/1     Pending     0          51m   <none>         <none>                         <none>           <none>
rook-ceph-osd-prepare-ocs-deviceset-gp2-0-data-0-xdqbz-mdc2r      0/1     Completed   0          52m   10.128.2.21    ip-10-0-160-43.ec2.internal    <none>           <none>
rook-ceph-osd-prepare-ocs-deviceset-gp2-1-data-0-fp25p-8qm8q      0/1     Completed   0          52m   10.131.0.17    ip-10-0-162-122.ec2.internal   <none>           <none>
rook-ceph-osd-prepare-ocs-deviceset-gp2-2-data-0-cvlp8-qpvpr      0/1     Completed   0          52m   10.129.2.28    ip-10-0-225-47.ec2.internal    <none>           <none>

----------------------------------------------------

$ oc get pods -o wide -n openshift-storage|grep rook-ceph
rook-ceph-crashcollector-ip-10-0-160-43-55d7cf6798-wz6zw          1/1     Running     0          53m   10.128.2.20    ip-10-0-160-43.ec2.internal    <none>           <none>
rook-ceph-crashcollector-ip-10-0-162-122-dff975798-25497          1/1     Running     0          54m   10.131.0.20    ip-10-0-162-122.ec2.internal   <none>           <none>
rook-ceph-crashcollector-ip-10-0-225-47-58c86c4c88-lhnm4          1/1     Running     0          54m   10.129.2.27    ip-10-0-225-47.ec2.internal    <none>           <none>
rook-ceph-drain-canary-ip-10-0-160-43.ec2.internal-7dfdd6cw2m2m   1/1     Running     0          52m   10.128.2.22    ip-10-0-160-43.ec2.internal    <none>           <none>
rook-ceph-drain-canary-ip-10-0-162-122.ec2.internal-6785f4txcdf   1/1     Running     0          52m   10.131.0.18    ip-10-0-162-122.ec2.internal   <none>           <none>
rook-ceph-drain-canary-ip-10-0-225-47.ec2.internal-77f949csnbwk   1/1     Running     0          52m   10.129.2.29    ip-10-0-225-47.ec2.internal    <none>           <none>
rook-ceph-mds-ocs-storagecluster-cephfilesystem-a-898765765brpl   0/1     Pending     0          52m   <none>         <none>                         <none>           <none>
rook-ceph-mds-ocs-storagecluster-cephfilesystem-b-5df979496vppt   0/1     Pending     0          52m   <none>         <none>                         <none>           <none>
rook-ceph-mgr-a-6d6c7b5dcc-2g2lz                                  1/1     Running     0          53m   10.131.0.16    ip-10-0-162-122.ec2.internal   <none>           <none>
rook-ceph-mon-a-95fc68d88-jf75v                                   1/1     Running     0          54m   10.131.0.15    ip-10-0-162-122.ec2.internal   <none>           <none>
rook-ceph-mon-b-86497d6d5f-vpjnf                                  1/1     Running     0          54m   10.129.2.26    ip-10-0-225-47.ec2.internal    <none>           <none>
rook-ceph-mon-c-7f66df684b-gm9lc                                  1/1     Running     0          53m   10.128.2.19    ip-10-0-160-43.ec2.internal    <none>           <none>
rook-ceph-operator-6f4458c7c7-tv7w8                               1/1     Running     0          74m   10.131.0.5     ip-10-0-162-122.ec2.internal   <none>           <none>
rook-ceph-osd-0-544658dc6-mztgh                                   1/1     Running     0          52m   10.128.2.23    ip-10-0-160-43.ec2.internal    <none>           <none>
rook-ceph-osd-1-5944897dd-bdnr7                                   1/1     Running     0          52m   10.131.0.19    ip-10-0-162-122.ec2.internal   <none>           <none>
rook-ceph-osd-2-78cf556fb7-b5cfp                                  1/1     Running     0          52m   10.129.2.30    ip-10-0-225-47.ec2.internal    <none>           <none>
rook-ceph-osd-prepare-ocs-deviceset-gp2-0-data-0-xdqbz-mdc2r      0/1     Completed   0          53m   10.128.2.21    ip-10-0-160-43.ec2.internal    <none>           <none>
rook-ceph-osd-prepare-ocs-deviceset-gp2-1-data-0-fp25p-8qm8q      0/1     Completed   0          53m   10.131.0.17    ip-10-0-162-122.ec2.internal   <none>           <none>
rook-ceph-osd-prepare-ocs-deviceset-gp2-2-data-0-cvlp8-qpvpr      0/1     Completed   0          53m   10.129.2.28    ip-10-0-225-47.ec2.internal    <none>           <none>

Comment 1 Stephen Cuppett 2020-10-09 15:38:41 UTC
Setting the target release to the active development release (4.7.0). For fixes requested and required on previous releases, clones will be created for those release maintenance streams, where applicable, once identified.

