Description of problem:
In the openshift-kube-apiserver and openshift-kube-controller-manager projects there are many 'installer' pods left in Completed status.

Version-Release number of selected component (if applicable):
Cluster version is 4.0.0-8

How reproducible:

Steps to Reproduce:
1. Create a next-gen installer environment in AWS.
2. Check the openshift-kube-apiserver and openshift-kube-controller-manager projects:

[root@dhcp-140-138 vendor]# oc get po -n openshift-kube-controller-manager
NAME                                                            READY   STATUS      RESTARTS   AGE
installer-1-ip-10-0-0-29.ec2.internal                           0/1     Completed   0          1d
installer-1-ip-10-0-25-163.ec2.internal                         0/1     OOMKilled   0          1d
installer-1-ip-10-0-42-244.ec2.internal                         0/1     Completed   0          1d
openshift-kube-controller-manager-ip-10-0-0-29.ec2.internal     1/1     Running     0          1d
openshift-kube-controller-manager-ip-10-0-25-163.ec2.internal   1/1     Running     0          1d
openshift-kube-controller-manager-ip-10-0-42-244.ec2.internal   1/1     Running     0          1d

[root@dhcp-140-138 vendor]# oc get po -n openshift-kube-apiserver
NAME                                                   READY   STATUS      RESTARTS   AGE
installer-1-ip-10-0-0-29.ec2.internal                  0/1     Completed   0          1d
installer-1-ip-10-0-25-163.ec2.internal                0/1     Completed   0          1d
installer-1-ip-10-0-42-244.ec2.internal                0/1     Completed   0          1d
installer-2-ip-10-0-0-29.ec2.internal                  0/1     Completed   0          1d
installer-2-ip-10-0-25-163.ec2.internal                0/1     Completed   0          1d
installer-2-ip-10-0-42-244.ec2.internal                0/1     Completed   0          1d
openshift-kube-apiserver-ip-10-0-0-29.ec2.internal     1/1     Running     0          1d
openshift-kube-apiserver-ip-10-0-25-163.ec2.internal   1/1     Running     0          1d
openshift-kube-apiserver-ip-10-0-42-244.ec2.internal   1/1     Running     0          1d

Actual results:
2. Many 'installer' pods are left in Completed status.

Expected results:
2. The old completed installer pods should be cleaned up correctly.

Additional info:
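For triage, the leftover installer pods can be counted by filtering the `oc get po` output on name prefix and status. A minimal sketch, shown here against inlined sample lines so it runs standalone; in practice you would pipe `oc get po -n <namespace> --no-headers` into the same awk filter:

```shell
# Count installer pods in Completed status.
# Sample lines stand in for `oc get po --no-headers` output.
printf '%s\n' \
  'installer-1-ip-10-0-0-29.ec2.internal                0/1  Completed  0  1d' \
  'installer-2-ip-10-0-0-29.ec2.internal                0/1  Completed  0  1d' \
  'openshift-kube-apiserver-ip-10-0-0-29.ec2.internal   1/1  Running    0  1d' |
awk '$1 ~ /^installer-/ && $3 == "Completed"' | wc -l
# prints 2
```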
Confirmed with the latest OCP: the issue has been fixed, and the five most recent installer pods are now kept by default:

[root@preserved-yinzhou-rhel-1 auth]# oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.0.0-0.nightly-2019-02-20-194410   True        False         6h19m   Cluster version is 4.0.0-0.nightly-2019-02-20-194410
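The "keep the most recent 5" behavior amounts to sorting installer pods by revision number and treating everything past the newest five as prune candidates. A simplified illustration of that selection (not the operator's actual code), run here against a synthetic list of revisions 1 through 7:

```shell
# List installer pods whose revision falls outside the newest 5.
# Pod names follow the installer-<revision>-<node> pattern from the bug.
KEEP=5
printf 'installer-%d-node\n' 1 2 3 4 5 6 7 |
sort -t- -k2,2nr |          # newest revision first
tail -n +"$((KEEP + 1))"    # everything past the first 5 is prunable
# prints:
# installer-2-node
# installer-1-node
```

Pods printed by this filter are the ones the fixed operator deletes, which is why only a handful of Completed installer pods remain per namespace after the fix.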
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:0758