Bug 1533790
| Summary: | HPA v2 still gets metrics from https:heapster even though REST clients are in use | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | DeShuai Ma <dma> |
| Component: | Node | Assignee: | Seth Jennings <sjenning> |
| Status: | CLOSED ERRATA | QA Contact: | Weinan Liu <weinliu> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 3.9.0 | CC: | aos-bugs, jokerman, mmccomas, pkanthal, sjenning, wjiang |
| Target Milestone: | --- | Keywords: | Reopened |
| Target Release: | 3.11.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | No Doc Update |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2018-10-11 07:19:06 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description

DeShuai Ma, 2018-01-12 08:54:46 UTC

Because we use custom HPA setup logic, that switch won't work yet, unfortunately. I had a PR in to fix it temporarily (https://github.com/openshift/origin/pull/18035), but it looks like we're going to wait and just remove our custom logic entirely once we're installing metrics-server by default.
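For context, the "switch" mentioned above appears to be the upstream kube-controller-manager flag `--horizontal-pod-autoscaler-use-rest-clients`, which makes the HPA controller read metrics through the `metrics.k8s.io` REST API instead of the legacy `https:heapster` service proxy. A minimal sketch of where that flag would normally be set on an OpenShift 3.x master, assuming the stock `master-config.yaml` `controllerArguments` mechanism (the file path and surrounding structure are assumptions, not taken from this bug):

```yaml
# /etc/origin/master/master-config.yaml (sketch, assuming the standard
# OpenShift 3.x layout). Because of the custom HPA setup logic described
# above, flipping this switch alone did not take effect before the fix.
kubernetesMasterConfig:
  controllerArguments:
    horizontal-pod-autoscaler-use-rest-clients:
    - "true"
```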
I need this fix before I can test the HPA v2beta1 feature. As there is a PR to fix the issue, marking the bug status MODIFIED rather than NOTABUG.

Since the PR is not merged, moving back to MODIFIED.

The move to metrics-server is deferred to 3.11.

Issue verified to be fixed:

```
[root@ip-172-18-11-67 ~]# oc version
oc v3.11.0-0.24.0
kubernetes v1.11.0+d4cacc0
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://ip-172-18-11-67.ec2.internal:8443
openshift v3.11.0-0.24.0
kubernetes v1.11.0+d4cacc0

[root@ip-172-18-11-67 ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.5 (Maipo)

[root@ip-172-18-11-67 ~]# oc get --raw /apis/metrics.k8s.io/v1beta1
{"kind":"APIResourceList","apiVersion":"v1","groupVersion":"metrics.k8s.io/v1beta1","resources":[{"name":"nodes","singularName":"","namespaced":false,"kind":"NodeMetrics","verbs":["get","list"]},{"name":"pods","singularName":"","namespaced":true,"kind":"PodMetrics","verbs":["get","list"]}]}
```

Create the test workload and HPA:

```
oc create -f https://raw.githubusercontent.com/mdshuai/testfile-openshift/master/k8s/autoscaling/hpa-v2beta1/rc.yaml -n dma1
oc create -f https://raw.githubusercontent.com/mdshuai/testfile-openshift/master/k8s/autoscaling/hpa-v2beta1/resource-metrics-cpu.yaml -n dma1
```

```
[root@ip-172-18-11-67 ~]# oc get hpa.v2beta1.autoscaling -n dma1
NAME           REFERENCE                               TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
resource-cpu   ReplicationController/hello-openshift   0%/80%    2         10        2          39m

[root@ip-172-18-11-67 ~]# oc describe hpa.v2beta1.autoscaling resource-hpa -n dma1
Error from server (NotFound): horizontalpodautoscalers.autoscaling "resource-hpa" not found

[root@ip-172-18-11-67 ~]# oc describe hpa.v2beta1.autoscaling resource-cpu -n dma1
Name:                        resource-cpu
Namespace:                   dma1
Labels:                      <none>
Annotations:                 <none>
CreationTimestamp:           Tue, 28 Aug 2018 06:28:34 -0400
Reference:                   ReplicationController/hello-openshift
Metrics:                     ( current / target )
  resource cpu on pods (as a percentage of request):  0% (0) / 80%
Min replicas:                2
Max replicas:                10
ReplicationController pods:  2 current / 2 desired
Conditions:
  Type            Status  Reason            Message
  ----            ------  ------            -------
  AbleToScale     True    ReadyForNewScale  the last scale time was sufficiently old as to warrant a new scale
  ScalingActive   True    ValidMetricFound  the HPA was able to successfully calculate a replica count from cpu resource utilization (percentage of request)
  ScalingLimited  True    TooFewReplicas    the desired replica count is increasing faster than the maximum scale rate
Events:
  Type    Reason             Age  From                       Message
  ----    ------             ---  ----                       -------
  Normal  SuccessfulRescale  39m  horizontal-pod-autoscaler  New size: 2; reason: Current number of replicas below Spec.MinReplicas
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:2652
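For anyone reproducing the verification, here is a sketch of what the `resource-metrics-cpu.yaml` HPA manifest plausibly contains, reconstructed from the `oc get`/`oc describe` output above (the scale target, replica bounds, and 80% CPU target match that output; every other field is an assumption):

```yaml
# Hypothetical reconstruction of resource-metrics-cpu.yaml; only the values
# visible in the oc describe output above are taken from this bug.
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: resource-cpu
  namespace: dma1
spec:
  scaleTargetRef:
    apiVersion: v1
    kind: ReplicationController
    name: hello-openshift
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 80   # shown as 0%/80% in oc get hpa
```

With `metrics.k8s.io/v1beta1` served by metrics-server, an HPA like this is driven by the resource-metrics REST client rather than `https:heapster`, which is the behavior this bug tracked.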