Bug 1662090 - Redundant 'installer' pods in project openshift-kube-apiserver and openshift-kube-controller-manager
Summary: Redundant 'installer' pods in project openshift-kube-apiserver and openshift-kube-controller-manager
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Master
Version: 4.1.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 4.1.0
Assignee: Mike Dame
QA Contact: Xingxing Xia
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-12-26 06:04 UTC by zhou ying
Modified: 2019-06-04 10:41 UTC (History)
3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-06-04 10:41:27 UTC
Target Upstream Version:
Embargoed:


Attachments


Links:
Red Hat Product Errata RHBA-2019:0758 (last updated 2019-06-04 10:41:33 UTC)

Description zhou ying 2018-12-26 06:04:50 UTC
Description of problem:
In the openshift-kube-apiserver and openshift-kube-controller-manager projects, many 'installer' pods with Completed status are left behind and never cleaned up.

Version-Release number of selected component (if applicable):
Cluster version is 4.0.0-8

How reproducible:


Steps to Reproduce:
1. Create a cluster with the next-gen installer on AWS.
2. Check the openshift-kube-apiserver and openshift-kube-controller-manager projects:

[root@dhcp-140-138 vendor]# oc get po -n openshift-kube-controller-manager
NAME                                                            READY     STATUS      RESTARTS   AGE
installer-1-ip-10-0-0-29.ec2.internal                           0/1       Completed   0          1d
installer-1-ip-10-0-25-163.ec2.internal                         0/1       OOMKilled   0          1d
installer-1-ip-10-0-42-244.ec2.internal                         0/1       Completed   0          1d
openshift-kube-controller-manager-ip-10-0-0-29.ec2.internal     1/1       Running     0          1d
openshift-kube-controller-manager-ip-10-0-25-163.ec2.internal   1/1       Running     0          1d
openshift-kube-controller-manager-ip-10-0-42-244.ec2.internal   1/1       Running     0          1d

[root@dhcp-140-138 vendor]#  oc get po -n openshift-kube-apiserver
NAME                                                   READY     STATUS      RESTARTS   AGE
installer-1-ip-10-0-0-29.ec2.internal                  0/1       Completed   0          1d
installer-1-ip-10-0-25-163.ec2.internal                0/1       Completed   0          1d
installer-1-ip-10-0-42-244.ec2.internal                0/1       Completed   0          1d
installer-2-ip-10-0-0-29.ec2.internal                  0/1       Completed   0          1d
installer-2-ip-10-0-25-163.ec2.internal                0/1       Completed   0          1d
installer-2-ip-10-0-42-244.ec2.internal                0/1       Completed   0          1d
openshift-kube-apiserver-ip-10-0-0-29.ec2.internal     1/1       Running     0          1d
openshift-kube-apiserver-ip-10-0-25-163.ec2.internal   1/1       Running     0          1d
openshift-kube-apiserver-ip-10-0-42-244.ec2.internal   1/1       Running     0          1d
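
For reference, a minimal sketch (using only standard oc and grep; not part of the original report) to count the leftover installer pods in both projects:

for ns in openshift-kube-apiserver openshift-kube-controller-manager; do
  # print "<namespace>: <number of installer pods>"
  echo "$ns: $(oc get pods -n "$ns" --no-headers | grep -c '^installer-')"
done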


Actual results:
2. Many 'installer' pods with Completed status accumulate in both projects and are never removed.


Expected results:
2. Old installer pods should be deleted (pruned) automatically rather than accumulating.



Additional info:

Comment 2 zhou ying 2019-02-21 09:57:54 UTC
Confirmed with the latest OCP: the issue has been fixed, and the most recent 5 installer pods are kept by default:

[root@preserved-yinzhou-rhel-1 auth]# oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.0.0-0.nightly-2019-02-20-194410   True        False         6h19m   Cluster version is 4.0.0-0.nightly-2019-02-20-194410
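
As a follow-up check (a sketch, not from the original report; the revision-limit field names below are an assumption based on the static-pod operator API and are not confirmed by this bug), the retained installer pods and the operator configuration can be inspected with:

# Count how many installer pods are still kept per project (expected to stay around 5)
oc get pods -n openshift-kube-apiserver --no-headers | grep -c '^installer-'
oc get pods -n openshift-kube-controller-manager --no-headers | grep -c '^installer-'

# Look for the assumed succeededRevisionLimit/failedRevisionLimit settings in the operator config
oc get kubeapiserver cluster -o yaml | grep -i revisionlimit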

Comment 5 errata-xmlrpc 2019-06-04 10:41:27 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0758

