Bug 1777379 - [IPI][Baremetal] No Memory and CPU requests defined for Haproxy infra pod
Summary: [IPI][Baremetal] No Memory and CPU requests defined for Haproxy infra pod
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Machine Config Operator
Version: 4.2.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: 4.4.0
Assignee: Yossi Boaron
QA Contact: Victor Voronkov
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-11-27 14:27 UTC by Yossi Boaron
Modified: 2020-05-04 11:18 UTC (History)
CC List: 6 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-05-04 11:17:52 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github openshift machine-config-operator pull 1292 0 None closed Bug 1777379: [Baremetal] add CPU and memory resources for Haproxy pod 2020-06-21 12:36:15 UTC
Red Hat Product Errata RHBA-2020:0581 0 None None None 2020-05-04 11:18:16 UTC

Description Yossi Boaron 2019-11-27 14:27:45 UTC
Description of problem:
In [IPI][Baremetal] deployments, the self-hosted load balancer is implemented as a static pod, but no CPU and memory requests are defined for this pod.
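
For reference, a minimal sketch of the kind of requests stanza that addresses this, using the values later confirmed in the verification comment below (the exact placement within the MCO static pod template is an assumption here):

  containers:
  - name: haproxy
    resources:
      requests:
        cpu: 100m
        memory: 200Mi

With requests set and no limits, the pod would be assigned the Burstable QoS class instead of BestEffort.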




Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:
The haproxy and haproxy-monitor containers are created with resources: {}, so the pod runs as BestEffort (see qosClass in the pod YAML below).

Expected results:
CPU and memory requests are defined for both containers of the HAProxy static pod.

Additional info:

[kni@worker-0 dev-scripts]$ oc get pod  -n   openshift-kni-infra haproxy-master-0 -o yaml 

apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubernetes.io/config.hash: 03c647957f4bef9ad79b54ab51fa7a82
    kubernetes.io/config.mirror: 03c647957f4bef9ad79b54ab51fa7a82
    kubernetes.io/config.seen: "2019-11-22T13:04:42.458836766Z"
    kubernetes.io/config.source: file
  creationTimestamp: "2019-11-22T13:09:28Z"
  labels:
    app: kni-infra-api-lb
  name: haproxy-master-0
  namespace: openshift-kni-infra
  resourceVersion: "996536"
  selfLink: /api/v1/namespaces/openshift-kni-infra/pods/haproxy-master-0
  uid: 1d52a300-52f6-4f2f-9a50-6e7d10d497e9
spec:
  containers:
  - command:
    - /bin/bash
    - -c
    - |
      #/bin/bash
      reload_haproxy()
      {
        old_pids=$(pidof haproxy)
        if [ -n "$old_pids" ]; then
            /usr/sbin/haproxy -W -db -f /etc/haproxy/haproxy.cfg  -p /var/lib/haproxy/run/haproxy.pid -x /var/lib/haproxy/run/haproxy.sock -sf $old_pids &
        else
            /usr/sbin/haproxy -W -db -f /etc/haproxy/haproxy.cfg  -p /var/lib/haproxy/run/haproxy.pid &
        fi
      }

      msg_handler()
      {
        while read -r line; do
          echo "The client send: $line"  >&2
          # currently only 'reload' msg is supported
          if [ "$line" = reload ]; then
              reload_haproxy
          fi
        done
      }
      set -ex
      declare -r haproxy_sock="/var/run/haproxy/haproxy-master.sock"
      declare -r haproxy_log_sock="/var/run/haproxy/haproxy-log.sock"
      export -f msg_handler
      export -f reload_haproxy
      rm -f "$haproxy_sock" "$haproxy_log_sock"
      socat UNIX-RECV:${haproxy_log_sock} STDOUT &
      if [ -s "/etc/haproxy/haproxy.cfg" ]; then
          /usr/sbin/haproxy -W -db -f /etc/haproxy/haproxy.cfg  -p /var/lib/haproxy/run/haproxy.pid &
      fi
      socat UNIX-LISTEN:${haproxy_sock},fork system:'bash -c msg_handler'
    image: registry.svc.ci.openshift.org/ocp/4.3-2019-11-21-195648@sha256:5afbc9a1f8cc715a837ae1e3a8dcebbb3a7ad016410c7bc8214cbb5d6951c14c
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 3
      httpGet:
        path: /healthz
        port: 50936
        scheme: HTTP
      initialDelaySeconds: 10
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1
    name: haproxy
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError
    volumeMounts:
    - mountPath: /etc/haproxy
      name: conf-dir
    - mountPath: /var/run/haproxy
      name: run-dir
  - command:
    - monitor
    - /etc/kubernetes/kubeconfig
    - /config/haproxy.cfg.tmpl
    - /etc/haproxy/haproxy.cfg
    - --api-vip
    - 192.168.111.5
    image: registry.svc.ci.openshift.org/ocp/4.3-2019-11-21-195648@sha256:332e45a76d9a278168004b50da729a0c5468a898357f79300900407074205c75
    imagePullPolicy: IfNotPresent
    name: haproxy-monitor
    resources: {}
    securityContext:
      privileged: true
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError
    volumeMounts:
    - mountPath: /etc/haproxy
      name: conf-dir
    - mountPath: /var/run/haproxy
      name: run-dir
    - mountPath: /config
      name: resource-dir
    - mountPath: /host
      name: chroot-host
    - mountPath: /etc/kubernetes/kubeconfig
      name: kubeconfig
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  hostNetwork: true
  nodeName: master-0
  priority: 2000001000
  priorityClassName: system-node-critical
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  terminationGracePeriodSeconds: 30
  tolerations:
  - operator: Exists
  - effect: NoExecute
    operator: Exists
  volumes:
  - hostPath:
      path: /etc/kubernetes/static-pod-resources/haproxy
      type: ""
    name: resource-dir
  - hostPath:
      path: /etc/kubernetes/kubeconfig
      type: ""
    name: kubeconfig
  - emptyDir: {}
    name: run-dir
  - hostPath:
      path: /etc/haproxy
      type: ""
    name: conf-dir
  - hostPath:
      path: /
      type: ""
    name: chroot-host
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2019-11-22T13:05:04Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2019-11-24T21:04:29Z"
    status: "False"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2019-11-22T13:05:24Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2019-11-22T13:05:04Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: cri-o://64eeb91cc4a17e6bb8684474ec422d0f72890267fde1afce21105317a5aae3e7
    image: registry.svc.ci.openshift.org/ocp/4.3-2019-11-21-195648@sha256:5afbc9a1f8cc715a837ae1e3a8dcebbb3a7ad016410c7bc8214cbb5d6951c14c
    imageID: registry.svc.ci.openshift.org/ocp/4.3-2019-11-21-195648@sha256:5afbc9a1f8cc715a837ae1e3a8dcebbb3a7ad016410c7bc8214cbb5d6951c14c
    lastState:
      terminated:
        containerID: cri-o://5a7fa3adcb1b3b78f2b865f264ad5a3969317148ce437aec537ae59b46f10002
        exitCode: 143
        finishedAt: "2019-11-24T19:07:07Z"
        message: |
          roxy[10]: Connect from 192.168.111.21:46244 to 192.168.111.21:50936 (health_check_http_url/HTTP)
          <134>Nov 24 19:05:31 haproxy[10]: Connect from 127.0.0.1:39090 to 127.0.0.1:7443 (main/TCP)
          <134>Nov 24 19:05:36 haproxy[10]: Connect from 192.168.111.21:46550 to 192.168.111.21:50936 (health_check_http_url/HTTP)
          <134>Nov 24 19:05:37 haproxy[10]: Connect from 127.0.0.1:39284 to 127.0.0.1:7443 (main/TCP)
          <134>Nov 24 19:05:43 haproxy[10]: Connect from 127.0.0.1:39466 to 127.0.0.1:7443 (main/TCP)
          <134>Nov 24 19:05:46 haproxy[10]: Connect from 192.168.111.21:46872 to 192.168.111.21:50936 (health_check_http_url/HTTP)
          <134>Nov 24 19:05:49 haproxy[10]: Connect from 127.0.0.1:39676 to 127.0.0.1:7443 (main/TCP)
          <134>Nov 24 19:05:55 haproxy[10]: Connect from 127.0.0.1:39858 to 127.0.0.1:7443 (main/TCP)
          <134>Nov 24 19:05:56 haproxy[10]: Connect from 192.168.111.21:47178 to 192.168.111.21:50936 (health_check_http_url/HTTP)
          <134>Nov 24 19:06:01 haproxy[10]: Connect from 127.0.0.1:40026 to 127.0.0.1:7443 (main/TCP)
          <134>Nov 24 19:06:06 haproxy[10]: Connect from 192.168.111.21:47482 to 192.168.111.21:50936 (health_check_http_url/HTTP)
          <134>Nov 24 19:06:07 haproxy[10]: Connect from 127.0.0.1:40234 to 127.0.0.1:7443 (main/TCP)
          <134>Nov 24 19:06:13 haproxy[10]: Connect from 127.0.0.1:40422 to 127.0.0.1:7443 (main/TCP)
          <134>Nov 24 19:06:16 haproxy[10]: Connect from 192.168.111.21:47792 to 192.168.111.21:50936 (health_check_http_url/HTTP)
          <134>Nov 24 19:06:19 haproxy[10]: Connect from 127.0.0.1:40608 to 127.0.0.1:7443 (main/TCP)
          <134>Nov 24 19:06:25 haproxy[10]: Connect from 127.0.0.1:40798 to 127.0.0.1:7443 (main/TCP)
          <134>Nov 24 19:06:26 haproxy[10]: Connect from 192.168.111.21:48108 to 192.168.111.21:50936 (health_check_http_url/HTTP)
          <134>Nov 24 19:06:31 haproxy[10]: Connect from 127.0.0.1:40974 to 127.0.0.1:7443 (main/TCP)
          <134>Nov 24 19:06:36 haproxy[10]: Connect from 192.168.111.21:48414 to 192.168.111.21:50936 (health_check_http_url/HTTP)
          <134>Nov 24 19:06:38 haproxy[10]: Connect from 127.0.0.1:41172 to 127.0.0.1:7443 (main/TCP)
        reason: Error
        startedAt: "2019-11-24T15:44:37Z"
    name: haproxy
    ready: true
    restartCount: 15
    started: true
    state:
      running:
        startedAt: "2019-11-24T19:07:07Z"
  - containerID: cri-o://cf24e61117135bf4bfb5bd4b3638b2c41c19a76f69ebfe0fbd9c7e727c90cb0c
    image: registry.svc.ci.openshift.org/ocp/4.3-2019-11-21-195648@sha256:332e45a76d9a278168004b50da729a0c5468a898357f79300900407074205c75
    imageID: registry.svc.ci.openshift.org/ocp/4.3-2019-11-21-195648@sha256:332e45a76d9a278168004b50da729a0c5468a898357f79300900407074205c75
    lastState: {}
    name: haproxy-monitor
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2019-11-22T13:05:24Z"
  hostIP: 192.168.111.21
  phase: Running
  podIP: 192.168.111.21
  podIPs:
  - ip: 192.168.111.21
  qosClass: BestEffort
  startTime: "2019-11-22T13:05:04Z"

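As a side note on the wrapper script above: the msg_handler loop only understands a single 'reload' message on the master socket, so a configuration reload could hypothetically be triggered from inside the container with something like:

  echo reload | socat - UNIX-CONNECT:/var/run/haproxy/haproxy-master.sock

which makes reload_haproxy re-exec haproxy with -sf against the old PIDs, or start it fresh if none are running.
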
Comment 2 Victor Voronkov 2020-03-12 09:10:36 UTC
Verified on 4.4.0-0.ci-2020-03-11-095511

oc get pod -n openshift-kni-infra haproxy-master-0.ocp-edge-cluster.qe.lab.redhat.com -o yaml
...
    name: haproxy
    resources:
      requests:
        cpu: 100m
        memory: 200Mi
...
    name: haproxy-monitor
    resources:
      requests:
        cpu: 100m
        memory: 200Mi
    securityContext:
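
As an additional check (not part of the original verification output), once requests are set the pod's QoS class should report Burstable rather than BestEffort; assuming the same pod name as above, something like:

  oc get pod -n openshift-kni-infra haproxy-master-0.ocp-edge-cluster.qe.lab.redhat.com -o jsonpath='{.status.qosClass}'

should now print Burstable.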

Comment 4 errata-xmlrpc 2020-05-04 11:17:52 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0581

