Bug 1664942
| Summary: | [cloud-CA] autoscaler couldn't scale up | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | sunzhaohua <zhsun> |
| Component: | Cloud Compute | Assignee: | Jan Chaloupka <jchaloup> |
| Status: | CLOSED ERRATA | QA Contact: | sunzhaohua <zhsun> |
| Severity: | high | Docs Contact: | |
| Priority: | high | | |
| Version: | unspecified | CC: | jhou, zhsun |
| Target Milestone: | --- | | |
| Target Release: | 4.1.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2019-06-04 10:41:42 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
sunzhaohua 2019-01-10 06:19:05 UTC

Hi sunzhaohua, can you share the machineset CRD definition? `kubectl get crd machinesets.cluster.k8s.io -o yaml` will do, to confirm whether the providerSpec field is defined or missing. Thanks.

Verified. In the new version I didn't reproduce this issue; the cluster can scale up and down normally. If it reproduces, I will reopen the bug and check the CRD definition.

```
$ oc get clusterversion
NAME      VERSION                           AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.0.0-0.alpha-2019-01-15-001217   True        False         2h      Cluster version is 4.0.0-0.alpha-2019-01-15-001217

$ oc get machine
NAME                            INSTANCE              STATE     TYPE       REGION      ZONE         AGE
zhsun-master-0                  i-080a7bb622af5dbf7   running   m4.large   us-east-2   us-east-2a   29m
zhsun-master-1                  i-086bb1037011b5e66   running   m4.large   us-east-2   us-east-2b   29m
zhsun-master-2                  i-044ba37ef3c01df49   running   m4.large   us-east-2   us-east-2c   29m
zhsun-worker-us-east-2a-5s7wd   i-0e37fa6f833672972   running   m4.large   us-east-2   us-east-2a   28m
zhsun-worker-us-east-2a-8lszv   i-019e3f765a1149f66   running   m4.large   us-east-2   us-east-2a   5m
zhsun-worker-us-east-2a-dsqsj   i-062f9d90e4e545117   running   m4.large   us-east-2   us-east-2a   5m
zhsun-worker-us-east-2a-gmgx2   i-027057005cf3c4263   running   m4.large   us-east-2   us-east-2a   5m
zhsun-worker-us-east-2a-z5drr   i-0af94036444067f47   running   m4.large   us-east-2   us-east-2a   5m
zhsun-worker-us-east-2b-z8wkp   i-096d82f8a0ad0050a   running   m4.large   us-east-2   us-east-2b   28m
zhsun-worker-us-east-2c-kfns2   i-0c1dc48eb2b1d2346   running   m4.large   us-east-2   us-east-2c   28m

$ oc get node
NAME                                         STATUS   ROLES    AGE   VERSION
ip-10-0-129-168.us-east-2.compute.internal   Ready    worker   27m   v1.11.0+c69f926354
ip-10-0-134-248.us-east-2.compute.internal   Ready    worker   4m    v1.11.0+c69f926354
ip-10-0-134-252.us-east-2.compute.internal   Ready    worker   4m    v1.11.0+c69f926354
ip-10-0-134-67.us-east-2.compute.internal    Ready    worker   5m    v1.11.0+c69f926354
ip-10-0-139-238.us-east-2.compute.internal   Ready    worker   4m    v1.11.0+c69f926354
ip-10-0-15-49.us-east-2.compute.internal     Ready    master   37m   v1.11.0+c69f926354
ip-10-0-151-196.us-east-2.compute.internal   Ready    worker   27m   v1.11.0+c69f926354
ip-10-0-171-213.us-east-2.compute.internal   Ready    worker   27m   v1.11.0+c69f926354
ip-10-0-20-128.us-east-2.compute.internal    Ready    master   37m   v1.11.0+c69f926354
ip-10-0-36-74.us-east-2.compute.internal     Ready    master   37m   v1.11.0+c69f926354

[szh@localhost installer]$ oc logs -f cluster-autoscaler-default-56c9cd4b6d-vt7d8
I0115 03:50:15.993741       1 leaderelection.go:187] attempting to acquire leader lease openshift-cluster-api/cluster-autoscaler...
I0115 03:50:16.056782       1 leaderelection.go:196] successfully acquired lease openshift-cluster-api/cluster-autoscaler
I0115 03:51:50.058660       1 scale_up.go:584] Scale-up: setting group openshift-cluster-api/zhsun-worker-us-east-2a size to 5
I0115 04:08:07.807776       1 scale_down.go:791] Scale-down: removing empty node ip-10-0-134-67.us-east-2.compute.internal
I0115 04:08:07.807922       1 scale_down.go:791] Scale-down: removing empty node ip-10-0-139-238.us-east-2.compute.internal
I0115 04:08:07.807998       1 scale_down.go:791] Scale-down: removing empty node ip-10-0-134-252.us-east-2.compute.internal
I0115 04:08:07.809036       1 scale_down.go:791] Scale-down: removing empty node ip-10-0-134-248.us-east-2.compute.internal
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0758
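For anyone repeating the CRD check requested in the comments, it can be narrowed to the one field in question. A minimal sketch (the grep filter is a convenience added here; the surrounding schema layout varies by cluster-api build):

```
# Dump the MachineSet CRD and look for providerSpec in the served schema;
# no matches would suggest the field is missing from the definition.
$ kubectl get crd machinesets.cluster.k8s.io -o yaml | grep -n providerSpec
```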
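The "size to 5" line in the autoscaler log is consistent with a MachineAutoscaler capping the zhsun-worker-us-east-2a group at five replicas. A minimal sketch of such a resource, assuming the autoscaling.openshift.io/v1beta1 API and the openshift-cluster-api namespace that this build's logs show (both names moved in later releases, so treat them as assumptions, not confirmed for this exact build):

```
$ cat <<EOF | oc apply -f -
apiVersion: autoscaling.openshift.io/v1beta1  # assumed API version for this era
kind: MachineAutoscaler
metadata:
  name: zhsun-worker-us-east-2a
  namespace: openshift-cluster-api            # matches the namespace in the logs above
spec:
  minReplicas: 1
  maxReplicas: 5                               # matches "size to 5" in the scale-up log
  scaleTargetRef:
    apiVersion: cluster.k8s.io/v1beta1         # assumed; the CRD above is machinesets.cluster.k8s.io
    kind: MachineSet
    name: zhsun-worker-us-east-2a
EOF
```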
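A scale-up test like the one verified above is typically driven by a deliberately unschedulable workload: enough replicas with explicit resource requests that some pods stay Pending until new nodes join. A sketch, with the name, image, and sizes chosen for illustration rather than taken from this report:

```
$ cat <<EOF | oc apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: scale-up-test          # hypothetical name
spec:
  replicas: 20                 # more than the existing workers can hold
  selector:
    matchLabels:
      app: scale-up-test
  template:
    metadata:
      labels:
        app: scale-up-test
    spec:
      containers:
      - name: sleep
        image: busybox
        command: ["sleep", "3600"]
        resources:
          requests:
            cpu: 500m          # explicit requests make the pods count against node capacity
            memory: 512Mi
EOF
```

Once pods go Pending, the autoscaler emits scale_up.go lines like the one above; deleting the Deployment leaves the new nodes empty, which produces the scale_down.go removals seen in the log.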