Bug 1886817

Summary: Disable OCS deployment in the UI if total CPU is < 24 or total memory is < 66 GiB across the selected nodes
Product: OpenShift Container Platform
Component: Console Storage Plugin
Version: 4.6
Target Release: 4.7.0
Target Milestone: ---
Status: CLOSED WONTFIX
Severity: high
Priority: unspecified
Hardware: Unspecified
OS: Unspecified
Reporter: Neha Berry <nberry>
Assignee: Bipul Adhikari <badhikar>
QA Contact: Raz Tamir <ratamir>
CC: aos-bugs, kmurarka, nthomas, ocs-bugs, scuppett
Type: Bug
Last Closed: 2020-12-13 13:39:08 UTC
Attachments:
Screenshot from UI on selecting 3 nodes (flags: none)

Description Neha Berry 2020-10-09 13:02:18 UTC
Created attachment 1720264
Screenshot from UI on selecting 3 nodes

Description of problem:
---------------------------------
Currently, on the Create Storage Cluster page, if the aggregate CPU count is < 24 or the aggregate memory is < 66 GiB across the selected nodes, creation of the storage cluster still proceeds. The UI shows the following warning, which may mislead users into thinking that at least a minimal deployment would work and the setup would come up:

```
The selected nodes do not match the OCS storage cluster recommended requirements of an aggregated 42 CPUs and 102 GiB of RAM. If the selection cannot be modified, a minimal cluster will be deployed.
```

Ideally, OCS deployment should be blocked in this case, and no minimal deployment should be attempted either.

Reason: Some pods would remain in Pending state and the deployment would never succeed, not even a minimal deployment (with reduced CPU requests for RGW, OSD, and MDS).
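
A minimal sketch of the check the console could perform, assuming hypothetical type and function names; the thresholds (24 CPUs / 66 GiB minimal, 42 CPUs / 102 GiB recommended) are taken from this report, and the real plugin code may differ:

```
// Hypothetical TypeScript sketch, not the actual console plugin code:
// aggregate the resources of the selected nodes and decide whether to
// block creation (below minimal) or only warn (below recommended).
type SelectedNode = {
  cpus: number;      // allocatable CPU cores
  memoryGiB: number; // allocatable memory in GiB
};

const MINIMAL = { cpus: 24, memoryGiB: 66 };      // block below this
const RECOMMENDED = { cpus: 42, memoryGiB: 102 }; // warn below this

type Validation = 'ok' | 'warn-minimal' | 'block';

function validateSelection(nodes: SelectedNode[]): Validation {
  const totalCpus = nodes.reduce((sum, n) => sum + n.cpus, 0);
  const totalMemGiB = nodes.reduce((sum, n) => sum + n.memoryGiB, 0);

  // Below the minimal requirement, not even a reduced-resource (minimal)
  // deployment can schedule all pods, so creation should be blocked.
  if (totalCpus < MINIMAL.cpus || totalMemGiB < MINIMAL.memoryGiB) {
    return 'block';
  }
  // Between minimal and recommended, a minimal cluster can be deployed;
  // the existing warning still applies.
  if (totalCpus < RECOMMENDED.cpus || totalMemGiB < RECOMMENDED.memoryGiB) {
    return 'warn-minimal';
  }
  return 'ok';
}
```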

Version-Release number of selected component (if applicable):
-----------------------------------------------
OCS 4.6 and OCP 4.6 and above

How reproducible:
-----------------
Always

Steps to Reproduce:
1. Create an OCP cluster with small worker nodes, e.g. 4 CPUs and 16 GiB of memory each.

2. On the Create Storage Cluster page, select a minimum of 3 worker nodes, making sure the total CPU count is still less than 24.

3. Check that the Create button can still be clicked, even though the OCS installation ultimately won't succeed and pods will remain Pending due to "Insufficient cpu".


Actual results:
---------------------
Storage cluster creation proceeds but does not succeed; pods remain Pending due to insufficient CPU.


Expected results:
----------------------
The UI should disable storage cluster creation if the aggregate configuration is below the minimal requirement (a 1 OSD cluster).
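
For illustration, a hedged sketch of how that validation result could gate the Create button; the component, prop, and CSS class names are hypothetical, not the actual console implementation:

```
// Hypothetical React sketch, not the actual console plugin code:
// disable the Create button and show an error when the selection is
// below the minimal requirement.
import * as React from 'react';

type Validation = 'ok' | 'warn-minimal' | 'block';

const CreateButton: React.FC<{ validation: Validation }> = ({ validation }) => (
  <>
    {validation === 'block' && (
      <div className="alert alert-danger">
        The selected nodes do not meet the minimal requirement of an
        aggregated 24 CPUs and 66 GiB of RAM. Add more or larger nodes
        to continue.
      </div>
    )}
    <button type="submit" disabled={validation === 'block'}>
      Create
    </button>
  </>
);
```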



Additional info:
---------------------
$ oc get pods -o wide -n openshift-storage|grep -v Running
NAME                                                              READY   STATUS      RESTARTS   AGE   IP             NODE                           NOMINATED NODE   READINESS GATES
rook-ceph-mds-ocs-storagecluster-cephfilesystem-a-898765765brpl   0/1     Pending     0          51m   <none>         <none>                         <none>           <none>
rook-ceph-mds-ocs-storagecluster-cephfilesystem-b-5df979496vppt   0/1     Pending     0          51m   <none>         <none>                         <none>           <none>
rook-ceph-osd-prepare-ocs-deviceset-gp2-0-data-0-xdqbz-mdc2r      0/1     Completed   0          52m   10.128.2.21    ip-10-0-160-43.ec2.internal    <none>           <none>
rook-ceph-osd-prepare-ocs-deviceset-gp2-1-data-0-fp25p-8qm8q      0/1     Completed   0          52m   10.131.0.17    ip-10-0-162-122.ec2.internal   <none>           <none>
rook-ceph-osd-prepare-ocs-deviceset-gp2-2-data-0-cvlp8-qpvpr      0/1     Completed   0          52m   10.129.2.28    ip-10-0-225-47.ec2.internal    <none>           <none>

----------------------------------------------------

$ oc get pods -o wide -n openshift-storage|grep rook-ceph
rook-ceph-crashcollector-ip-10-0-160-43-55d7cf6798-wz6zw          1/1     Running     0          53m   10.128.2.20    ip-10-0-160-43.ec2.internal    <none>           <none>
rook-ceph-crashcollector-ip-10-0-162-122-dff975798-25497          1/1     Running     0          54m   10.131.0.20    ip-10-0-162-122.ec2.internal   <none>           <none>
rook-ceph-crashcollector-ip-10-0-225-47-58c86c4c88-lhnm4          1/1     Running     0          54m   10.129.2.27    ip-10-0-225-47.ec2.internal    <none>           <none>
rook-ceph-drain-canary-ip-10-0-160-43.ec2.internal-7dfdd6cw2m2m   1/1     Running     0          52m   10.128.2.22    ip-10-0-160-43.ec2.internal    <none>           <none>
rook-ceph-drain-canary-ip-10-0-162-122.ec2.internal-6785f4txcdf   1/1     Running     0          52m   10.131.0.18    ip-10-0-162-122.ec2.internal   <none>           <none>
rook-ceph-drain-canary-ip-10-0-225-47.ec2.internal-77f949csnbwk   1/1     Running     0          52m   10.129.2.29    ip-10-0-225-47.ec2.internal    <none>           <none>
rook-ceph-mds-ocs-storagecluster-cephfilesystem-a-898765765brpl   0/1     Pending     0          52m   <none>         <none>                         <none>           <none>
rook-ceph-mds-ocs-storagecluster-cephfilesystem-b-5df979496vppt   0/1     Pending     0          52m   <none>         <none>                         <none>           <none>
rook-ceph-mgr-a-6d6c7b5dcc-2g2lz                                  1/1     Running     0          53m   10.131.0.16    ip-10-0-162-122.ec2.internal   <none>           <none>
rook-ceph-mon-a-95fc68d88-jf75v                                   1/1     Running     0          54m   10.131.0.15    ip-10-0-162-122.ec2.internal   <none>           <none>
rook-ceph-mon-b-86497d6d5f-vpjnf                                  1/1     Running     0          54m   10.129.2.26    ip-10-0-225-47.ec2.internal    <none>           <none>
rook-ceph-mon-c-7f66df684b-gm9lc                                  1/1     Running     0          53m   10.128.2.19    ip-10-0-160-43.ec2.internal    <none>           <none>
rook-ceph-operator-6f4458c7c7-tv7w8                               1/1     Running     0          74m   10.131.0.5     ip-10-0-162-122.ec2.internal   <none>           <none>
rook-ceph-osd-0-544658dc6-mztgh                                   1/1     Running     0          52m   10.128.2.23    ip-10-0-160-43.ec2.internal    <none>           <none>
rook-ceph-osd-1-5944897dd-bdnr7                                   1/1     Running     0          52m   10.131.0.19    ip-10-0-162-122.ec2.internal   <none>           <none>
rook-ceph-osd-2-78cf556fb7-b5cfp                                  1/1     Running     0          52m   10.129.2.30    ip-10-0-225-47.ec2.internal    <none>           <none>
rook-ceph-osd-prepare-ocs-deviceset-gp2-0-data-0-xdqbz-mdc2r      0/1     Completed   0          53m   10.128.2.21    ip-10-0-160-43.ec2.internal    <none>           <none>
rook-ceph-osd-prepare-ocs-deviceset-gp2-1-data-0-fp25p-8qm8q      0/1     Completed   0          53m   10.131.0.17    ip-10-0-162-122.ec2.internal   <none>           <none>
rook-ceph-osd-prepare-ocs-deviceset-gp2-2-data-0-cvlp8-qpvpr      0/1     Completed   0          53m   10.129.2.28    ip-10-0-225-47.ec2.internal    <none>           <none>

Comment 1 Stephen Cuppett 2020-10-09 15:38:41 UTC
Setting the target release to the active development release (4.7.0). For fixes requested and required on previous releases, clones will be created for the applicable release maintenance streams once identified.