Bug 1805157

Summary: [RFE] Warn the user when worker node resources are running low
Product: [Red Hat Storage] Red Hat OpenShift Container Storage
Reporter: Ramakrishnan Periyasamy <rperiyas>
Component: management-console
Assignee: Nishanth Thomas <nthomas>
Status: CLOSED NOTABUG
QA Contact: Elad <ebenahar>
Severity: medium
Docs Contact:
Priority: unspecified
Version: unspecified
CC: asachan, bniver, gmeno, jefbrown, madam, nthomas, ocs-bugs, sostapov
Target Milestone: ---
Keywords: FutureFeature, RFE
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2020-05-05 15:59:39 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Ramakrishnan Periyasamy 2020-02-20 11:42:09 UTC
Description of problem (please be as detailed as possible and provide log
snippets):

RFE: When cluster node resources are already consumed and a user/customer wants to create more pods, there should be a warning about adding new pods, or a suggestion to scale the cluster by adding worker nodes.

This problem was observed during manual testing. When node resources are low, new app pod creation is slow, and app pods sometimes move to an error state due to insufficient resources.

Additional info:
NA

Comment 2 Michael Adam 2020-02-25 15:50:25 UTC
not 4.3 material ==> to 4.4

Comment 4 Michael Adam 2020-05-04 07:48:04 UTC
@Nishanth, is this something we could address in the console?

Comment 5 Anmol Sachan 2020-05-05 15:10:21 UTC
This is default OCP behaviour. The platform tries to accommodate as many pods as it can run. A pod's CPU/memory requirements cannot be anticipated until the deployment is created; only then does the system check whether it can schedule the pod.
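As an illustration of what that scheduling check looks at (a minimal sketch not taken from this bug; the pod name and image are hypothetical), the scheduler compares the resource requests declared in the pod spec against the free allocatable capacity of each node:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-app            # hypothetical name
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest   # placeholder image
    resources:
      requests:                # scheduler only places the pod on a node with this much free capacity
        cpu: "500m"
        memory: 256Mi
      limits:                  # enforced by the kubelet at runtime
        cpu: "1"
        memory: 512Mi
```

If no node has enough unreserved capacity for the requests, the pod stays Pending with a FailedScheduling event rather than being admitted and failing later.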

There are ways for admins to ease their jobs, such as defining the maximum number of pods that can be scheduled on a node. But it is purely up to the admins how they want to use their OCP cluster.
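For example, in OpenShift 4.x the per-node pod cap can be set through a KubeletConfig custom resource applied to a machine config pool (a sketch; the resource name, label, and value here are assumptions, and the label must already be set on the worker MachineConfigPool):

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-max-pods                 # hypothetical name
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: set-max-pods   # assumed label on the worker MCP
  kubeletConfig:
    maxPods: 250                     # cap on pods the kubelet will admit per node
```

Once applied, nodes in the selected pool refuse to schedule pods beyond the cap, which surfaces the capacity problem early instead of letting pods fail at runtime.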

Moreover, there are node events such as InsufficientFreeCPU and InsufficientFreeMemory to notify admins, among many others (https://docs.openshift.com/container-platform/4.4/nodes/clusters/nodes-containers-events.html).

In terms of alerting, there are alerts for CPU and memory overcommit: KubeCPUOvercommit and KubeMemOvercommit.

Thus it is up to the OCP admin how they want to manage their resources.

Also, IMHO this is not a bug and certainly not an OCS bug.

Comment 6 Nishanth Thomas 2020-05-05 15:59:39 UTC
Also, the Node listing page on the console indicates when memory/disk/PID pressure is detected and displays per-node utilization.
So our point here is that we believe OCP has already implemented the necessary functionality to warn the administrator well in advance. Hence I am closing the BZ.
If you are expecting more, please re-open, explain what you are looking for, and move the BZ to OCP.