Bug 1942161 - Azure: machine-controller OOM
Summary: Azure: machine-controller OOM
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Cloud Compute
Version: 4.8
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assignee: Michael Gugino
QA Contact: sunzhaohua
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-03-23 19:01 UTC by Michael Gugino
Modified: 2021-04-22 13:59 UTC
CC List: 0 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-04-22 13:59:55 UTC
Target Upstream Version:
Embargoed:



Description Michael Gugino 2021-03-23 19:01:08 UTC
https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/release-openshift-ocp-installer-e2e-azure-serial-4.8/1372002364097040384

The machine-controller logs are quite brief.  Comparing the current and previous logs of the machine-controller makes it clear that it restarted at least twice.

This is not a disruptive test, so there shouldn't be any restarts of our component.
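
For reference, the restarts can be confirmed against a live cluster by comparing the current and previous container logs; the pod name hash below is only a placeholder:

  # Find the machine-api-controllers pod (name hash is illustrative).
  oc -n openshift-machine-api get pods

  # Compare the current and previous logs of the machine-controller container.
  oc -n openshift-machine-api logs machine-api-controllers-<hash> -c machine-controller
  oc -n openshift-machine-api logs machine-api-controllers-<hash> -c machine-controller --previous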

Looking at the pod details, the container has indeed restarted 5 times, and at least one of those restarts was due to an OOM kill:

                "containerID": "cri-o://ab294ff8fff75e9114af6c07079dee1de688ae4c8e7bf2536183b08ebf405f46",
                "image": "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:089124a4c3e71c8e32516195ac5e50a0906affc483d547f7aa3a81571bb5b784",
                "imageID": "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:089124a4c3e71c8e32516195ac5e50a0906affc483d547f7aa3a81571bb5b784",
                "lastState": {
                    "terminated": {
                        "containerID": "cri-o://49e7e128fbd2aa610d425dd9cb0f9de19979c58cd8edd5e63ce472da39b9bd13",
                        "exitCode": 137,
                        "finishedAt": "2021-03-17T03:13:29Z",
                        "reason": "OOMKilled",
                        "startedAt": "2021-03-17T02:59:12Z"
                    }
                },
                "name": "machine-controller",
                "ready": true,
                "restartCount": 5,
                "started": true,
                "state": {
                    "running": {
                        "startedAt": "2021-03-17T03:13:30Z"
                    }
                }
            },
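
(The status block above can be pulled with a jsonpath query roughly like the following; the pod name is a placeholder:)

  # Restart count and last termination state of the machine-controller container.
  oc -n openshift-machine-api get pod machine-api-controllers-<hash> \
    -o jsonpath='{.status.containerStatuses[?(@.name=="machine-controller")].restartCount}{"\n"}{.status.containerStatuses[?(@.name=="machine-controller")].lastState.terminated}{"\n"}'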




Other containers (MHC, MachineSet, NodeRef) errored out at ~02:18.  Everything lost leader election around the same time:

I0317 02:17:52.625155       1 leaderelection.go:278] failed to renew lease openshift-machine-api/cluster-api-provider-machineset-leader: timed out waiting for the condition
2021/03/17 02:17:52 leader election lost
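
Events in the namespace give another quick view of which containers were killed or went into back-off around that window (the grep pattern is only illustrative):

  # Sort namespace events by time and filter for kill / back-off messages.
  oc -n openshift-machine-api get events --sort-by=.lastTimestamp | grep -iE 'oom|killed|back-off'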

Comment 1 Joel Speed 2021-03-24 11:31:29 UTC
This is why we shouldn't be adding limits to pods! This has already been fixed.

*** This bug has been marked as a duplicate of bug 1938493 ***
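
(Noting for later readers: whether the machine-controller container still carries a memory limit can be checked along these lines; the deployment and container names are assumed to match 4.8:)

  # Show the resources (requests/limits) configured for the machine-controller container.
  oc -n openshift-machine-api get deployment machine-api-controllers \
    -o jsonpath='{.spec.template.spec.containers[?(@.name=="machine-controller")].resources}'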

Comment 2 Michael Gugino 2021-03-24 12:31:02 UTC
I'm reopening this bug.  I want to see what the memory of this controller is doing before we close it.  It took over 30 minutes to go OOM, and I want to ensure we're not leaking memory.
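
A rough way to watch this between restarts is to sample per-container memory over time (assuming oc adm top pods supports the --containers flag here, as kubectl top pod does); a steady climb rather than a one-off spike would point to a leak:

  # Sample the machine-controller's memory usage once a minute.
  while true; do
    date
    oc -n openshift-machine-api adm top pods --containers | grep machine-controller
    sleep 60
  done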

Comment 3 Michael Gugino 2021-04-22 13:59:55 UTC
I'm not going to get to this any time soon.

