Bug 1822770 - Default openshift install requests too many CPU resources to install all components, requests of components on cluster are wrong
Status: CLOSED ERRATA
Product: OpenShift Container Platform
Classification: Red Hat
Component: kube-apiserver
Version: 4.3.z
Hardware: s390x
OS: Unspecified
Priority: urgent
Severity: urgent
Target Release: 4.3.z
Assignee: Michal Fojtik
QA Contact: Ke Wang
Whiteboard: multi-arch
Duplicates: 1820432
Depends On: 1812709
 
Reported: 2020-04-09 19:40 UTC by Jeremy Poulin
Modified: 2020-05-20 13:48 UTC
CC List: 20 users

Clone Of: 1812583
Last Closed: 2020-05-20 13:47:53 UTC




Links
- GitHub: openshift/cluster-kube-apiserver-operator pull 825 (closed) - Bug 1822770: Normalize CPU requests on masters (2021-02-15 18:42:16 UTC)
- GitHub: openshift/cluster-kube-controller-manager-operator pull 395 (closed) - [release-4.3] Bug 1822770: Normalize CPU requests on masters (2021-02-15 18:42:16 UTC)
- GitHub: openshift/cluster-kube-scheduler-operator pull 239 (closed) - [release-4.3] Bug 1822770: Normalize CPU requests on masters (2021-02-15 18:42:16 UTC)
- GitHub: openshift/cluster-openshift-apiserver-operator pull 351 (closed) - Bug 1822770: Normalize CPU requests on masters (2021-02-15 18:42:16 UTC)
- Red Hat Product Errata: RHBA-2020:2129 (2020-05-20 13:48:08 UTC)

Comment 1 Scott Dodson 2020-04-13 18:00:50 UTC
*** Bug 1820432 has been marked as a duplicate of this bug. ***

Comment 2 Scott Dodson 2020-04-13 18:03:14 UTC
Assigning over to the Group B lead to coordinate backporting of all PRs on this bug's blocker bug. Ultimately this spans multiple components, so it is assigned here because this component had the majority of the work.

Comment 6 Ke Wang 2020-05-14 10:05:06 UTC
Verified with OCP 4.3.0-0.nightly-2020-05-13-220846; steps below:

$ ns="openshift-kube-apiserver"
$ podname=$(oc get pods -n $ns | grep kube-apiserver | head -1 | cut -d " " -f1)
$ oc get pod -n $ns $podname -o json | jq .spec.containers[0].resources
{
  "requests": {
    "cpu": "300m",
    "memory": "1Gi"
  }
}

The expected CPU request is "300m". Per https://bugzilla.redhat.com/show_bug.cgi?id=1812583#c2, the remaining components should have requests that consume no more than 270m.
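The comparison above can be scripted. The helper below is only a sketch — `check_request` and its PASS/FAIL output are assumptions, not part of the original verification steps:

```shell
# Sketch only: compare the CPU request reported by the cluster against the
# expected value. "check_request" is a hypothetical helper; on a live cluster
# its first argument would come from something like:
#   oc get pod -n "$ns" "$podname" \
#     -o jsonpath='{.spec.containers[0].resources.requests.cpu}'
check_request() {
  local actual="$1" expected="${2:-300m}"
  if [ "$actual" = "$expected" ]; then
    echo "PASS: cpu request is $actual"
  else
    echo "FAIL: cpu request is $actual, expected $expected"
  fi
}

check_request "300m"
```

Running this against the value observed above ("300m") prints the PASS line.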

- Check actual CPU usage of all kube-apiserver pods:
$ oc adm top pods -n openshift-kube-apiserver
NAME                                                      CPU(cores)   MEMORY(bytes)   
kube-apiserver-6fmwc-m-0.c.openshift-qe.internal   275m         1066Mi          
kube-apiserver-6fmwc-m-1.c.openshift-qe.internal   180m         928Mi           
kube-apiserver-6fmwc-m-2.c.openshift-qe.internal   188m         966Mi    

- Check that the total requests on each master are less than 3 cores (3000m), as expected:

$ oc describe nodes -l node-role.kubernetes.io/master= | grep -i Allocated -A 5
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource                   Requests      Limits
  --------                   --------      ------
  cpu                        1592m (45%)   0 (0%)
  memory                     4279Mi (30%)  512Mi (3%)
--
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource                   Requests      Limits
  --------                   --------      ------
  cpu                        1712m (48%)   0 (0%)
  memory                     5049Mi (36%)  512Mi (3%)
--
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource                   Requests      Limits
  --------                   --------      ------
  cpu                        1672m (47%)   0 (0%)
  memory                     4659Mi (33%)  512Mi (3%)

All is well, so moving this bug to VERIFIED.
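The per-master budget check can also be scripted. The helpers below are a sketch under stated assumptions — `cpu_to_millicores` and `check_master_budget` are hypothetical names, and the conversion covers only the millicore and whole-core quantity forms seen in this output:

```shell
# Sketch only (hypothetical helpers, not from the bug report): convert a
# Kubernetes CPU quantity ("1592m", or whole cores like "2") to millicores,
# then check a master's total CPU request against the 3-core (3000m) budget.
cpu_to_millicores() {
  case "$1" in
    *m) echo "${1%m}" ;;            # already in millicores
    *)  echo $(( $1 * 1000 )) ;;    # whole cores -> millicores
  esac
}

check_master_budget() {
  # Sum all CPU request quantities passed in and compare to 3000m.
  local budget=3000 total=0 q
  for q in "$@"; do
    total=$(( total + $(cpu_to_millicores "$q") ))
  done
  if [ "$total" -lt "$budget" ]; then
    echo "OK: ${total}m < ${budget}m"
  else
    echo "OVER: ${total}m >= ${budget}m"
  fi
}

# The three per-master request totals observed above:
check_master_budget "1592m"   # OK: 1592m < 3000m
check_master_budget "1712m"   # OK: 1712m < 3000m
check_master_budget "1672m"   # OK: 1672m < 3000m
```

On a live cluster the quantities would be extracted from the `oc describe nodes` output rather than hard-coded.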

Comment 8 errata-xmlrpc 2020-05-20 13:47:53 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2129

