Bug 1822770

Summary: Default openshift install requests too many CPU resources to install all components, requests of components on cluster are wrong
Product: OpenShift Container Platform
Component: kube-apiserver
Version: 4.3.z
Target Release: 4.3.z
Hardware: s390x
OS: Unspecified
Priority: urgent
Severity: urgent
Status: CLOSED ERRATA
Whiteboard: multi-arch
Keywords: ServiceDeliveryImpact
Reporter: Jeremy Poulin <jpoulin>
Assignee: Michal Fojtik <mfojtik>
QA Contact: Ke Wang <kewang>
CC: adahiya, aos-bugs, bparees, ccoleman, esimard, Holger.Wolf, hwolf, jchaloup, jeder, lakshmi.ravichandran1, maszulik, mfojtik, miwilson, rphillips, sdodson, wjiang, wking, wvoesch, xxia, zyu
Clone Of: 1812583
Last Closed: 2020-05-20 13:47:53 UTC
Bug Depends On: 1812709

Comment 1 Scott Dodson 2020-04-13 18:00:50 UTC
*** Bug 1820432 has been marked as a duplicate of this bug. ***

Comment 2 Scott Dodson 2020-04-13 18:03:14 UTC
Assigning over to the Group B lead to coordinate backporting of all the PRs on this bug's blocker bug. Ultimately the fix spans multiple components, so assigning here since this component had the majority of the work.

Comment 6 Ke Wang 2020-05-14 10:05:06 UTC
Verified with OCP 4.3.0-0.nightly-2020-05-13-220846; see the steps below.

$ ns="openshift-kube-apiserver"
$ podname=$(oc get pods -n $ns | grep kube-apiserver | head -1 | cut -d " " -f1)
$ oc get pod -n $ns $podname -o json | jq .spec.containers[0].resources
{
  "requests": {
    "cpu": "300m",
    "memory": "1Gi"
  }
}

The expected CPU request is "300m", which matches. Per https://bugzilla.redhat.com/show_bug.cgi?id=1812583#c2 , the remaining components should have requests that consume no more than 270m.
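The request value above can be re-checked offline as well. The following is a sketch only, not part of the original verification steps: the sample JSON is copied from the output above, and POSIX sed stands in for jq so the check is self-contained.

```shell
# Sample copied from the `oc get pod ... -o json | jq` output above.
json='{"requests":{"cpu":"300m","memory":"1Gi"}}'
# Extract the cpu request with POSIX sed and compare it to the expected 300m.
cpu=$(printf '%s' "$json" | sed -n 's/.*"cpu"[: ]*"\([^"]*\)".*/\1/p')
[ "$cpu" = "300m" ] && echo "cpu request matches expected: $cpu"
```

On a live cluster the same comparison could be made against the real `oc get pod ... -o json` output instead of the embedded sample.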

- Check the CPU usage of all kube-apiserver pods:
$ oc adm top pods -n openshift-kube-apiserver
NAME                                                      CPU(cores)   MEMORY(bytes)   
kube-apiserver-6fmwc-m-0.c.openshift-qe.internal   275m         1066Mi          
kube-apiserver-6fmwc-m-1.c.openshift-qe.internal   180m         928Mi           
kube-apiserver-6fmwc-m-2.c.openshift-qe.internal   188m         966Mi    

- Check that total requests on each master are less than 3 cores (3000m), as expected:

$ oc describe nodes -l node-role.kubernetes.io/master= | grep -i Allocated -A 5
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource                   Requests      Limits
  --------                   --------      ------
  cpu                        1592m (45%)   0 (0%)
  memory                     4279Mi (30%)  512Mi (3%)
--
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource                   Requests      Limits
  --------                   --------      ------
  cpu                        1712m (48%)   0 (0%)
  memory                     5049Mi (36%)  512Mi (3%)
--
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource                   Requests      Limits
  --------                   --------      ------
  cpu                        1672m (47%)   0 (0%)
  memory                     4659Mi (33%)  512Mi (3%)
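The per-master totals above can also be checked mechanically. This is a sketch only, with the sample values copied from the `oc describe nodes` output above; it asserts that each master's CPU requests stay under the 3000m budget.

```shell
# Fail (exit 1) if any "cpu" line reports 3000m or more in requests.
check_cpu_budget() {
  awk '$1 == "cpu" { v = $2; sub(/m$/, "", v); if (v + 0 >= 3000) exit 1 }'
}

# Sample values from the three masters above.
printf 'cpu 1592m (45%%) 0 (0%%)\ncpu 1712m (48%%) 0 (0%%)\ncpu 1672m (47%%) 0 (0%%)\n' \
  | check_cpu_budget && echo "all masters under 3000m"
```

On a live cluster, the `cpu` lines from `oc describe nodes -l node-role.kubernetes.io/master=` could be piped into the same filter instead of the embedded sample.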

All is well, so moving the bug to VERIFIED.

Comment 8 errata-xmlrpc 2020-05-20 13:47:53 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2129