Bug 1701469 - Autoscaling for Memory Utilization is not working
Summary: Autoscaling for Memory Utilization is not working
Keywords:
Status: CLOSED DUPLICATE of bug 1707785
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Node
Version: 3.11.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 3.11.z
Assignee: Joel Smith
QA Contact: Jianwei Hou
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-04-19 07:02 UTC by Sudarshan Chaudhari
Modified: 2019-05-16 13:43 UTC (History)
4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-05-16 13:43:43 UTC


Attachments

Description Sudarshan Chaudhari 2019-04-19 07:02:55 UTC
Description of problem:

Autoscaling for Memory Utilization is not working as expected. Creating an HPA for memory-based autoscaling fails when the controller looks up the target resource.

Following the Documentation:
https://docs.openshift.com/container-platform/3.11/dev_guide/pod_autoscaling.html#pod-autoscaling-memory


Version-Release number of selected component (if applicable):
# oc version
oc v3.11.88
kubernetes v1.11.0+d4cacc0
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://openshift.internal.traviocp311g.lab.pnq2.cee.redhat.com:443
openshift v3.11.88
kubernetes v1.11.0+d4cacc0


How reproducible:
Always

Steps to Reproduce:

edited master-config.yaml to have:
~~~
apiServerArguments:
  runtime-config:
  - apis/autoscaling/v2beta1=true
~~~
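After restarting the master services, it may help to confirm that the v2beta1 autoscaling API was actually enabled by the runtime-config change. A quick check (a sketch, assuming a logged-in `oc` client against the affected cluster):

```shell
# List the API versions the server advertises; if the runtime-config took
# effect, both autoscaling/v1 and autoscaling/v2beta1 should appear.
oc api-versions | grep autoscaling
```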

1. Deploy app
~~~
# oc get dc,pods
NAME                                         REVISION   DESIRED   CURRENT   TRIGGERED BY
deploymentconfig.apps.openshift.io/ruby-ex   1          1         1         config,image(ruby-ex:latest)

NAME                  READY     STATUS      RESTARTS   AGE
pod/ruby-ex-1-6kt4v   1/1       Running     0          2d
pod/ruby-ex-1-build   0/1       Completed   0          2d
~~~

2. Create the HPA config file:
~~~
# cat hpa.yaml 
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-resource-metrics-memory 
spec:
  scaleTargetRef:
    apiVersion: apps/v1 
    kind: DepoymentConfig 
    name: ruby-ex
  minReplicas: 1 
  maxReplicas: 10 
  metrics:
  - type: Resource
    resource:
      name: memory
      targetAverageUtilization: 50 
~~~
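Side note: the FailedGetScale error in step 3 matches a typo in the manifest above ("DepoymentConfig" instead of "DeploymentConfig"), and for a DeploymentConfig the scaleTargetRef group is apps.openshift.io rather than apps. A corrected manifest would look like the following (a sketch for comparison; the apiVersion-conversion behavior shown in step 4 is independent of this typo):

~~~
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-resource-metrics-memory
spec:
  scaleTargetRef:
    apiVersion: apps.openshift.io/v1   # DeploymentConfig lives in the apps.openshift.io group
    kind: DeploymentConfig             # corrected spelling
    name: ruby-ex
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: memory
      targetAverageUtilization: 50
~~~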

3. Create the autoscaler:
~~~
# oc create -f hpa.yaml 
horizontalpodautoscaler.autoscaling/hpa-resource-metrics-memory created
# oc describe hpa 
Name:                                                     hpa-resource-metrics-memory
Namespace:                                                test
Labels:                                                   <none>
Annotations:                                              <none>
CreationTimestamp:                                        Fri, 19 Apr 2019 02:52:44 -0400
Reference:                                                DepoymentConfig/ruby-ex
Metrics:                                                  ( current / target )
  resource memory on pods  (as a percentage of request):  <unknown> / 50%
Min replicas:                                             1
Max replicas:                                             10
DepoymentConfig pods:                                     0 current / 0 desired
Conditions:
  Type         Status  Reason          Message
  ----         ------  ------          -------
  AbleToScale  False   FailedGetScale  the HPA controller was unable to get the target's current scale: no matches for kind "DepoymentConfig" in group "apps"
Events:
  Type     Reason          Age   From                       Message
  ----     ------          ----  ----                       -------
  Warning  FailedGetScale  9s    horizontal-pod-autoscaler  no matches for kind "DepoymentConfig" in group "apps"
~~~

4. Check which apiVersion the HPA was created with:
~~~
# oc get hpa -o yaml hpa-resource-metrics-memory
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  annotations:
    autoscaling.alpha.kubernetes.io/conditions: '[{"type":"AbleToScale","status":"False","lastTransitionTime":"2019-04-19T06:52:49Z","reason":"FailedGetScale","message":"the
      HPA controller was unable to get the target''s current scale: no matches for
      kind \"DepoymentConfig\" in group \"apps\""}]'
    autoscaling.alpha.kubernetes.io/metrics: '[{"type":"Resource","resource":{"name":"memory","targetAverageUtilization":50}}]'
  creationTimestamp: 2019-04-19T06:52:44Z
  name: hpa-resource-metrics-memory
  namespace: test
  resourceVersion: "2308836"
  selfLink: /apis/autoscaling/v1/namespaces/test/horizontalpodautoscalers/hpa-resource-metrics-memory
  uid: bbf89ab0-626f-11e9-bdf6-fa163ebb5f25
spec:
  maxReplicas: 10
  minReplicas: 1
  scaleTargetRef:
    apiVersion: apps/v1
    kind: DepoymentConfig
    name: ruby-ex
status:
  currentReplicas: 0
  desiredReplicas: 0
~~~
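The plain `oc get hpa` call above returns the object through the server's default HPA version (autoscaling/v1); the v2beta1 fields are preserved in the `autoscaling.alpha.kubernetes.io/*` annotations visible in the output. To confirm the stored object rather than the converted view, the HPA can be requested through the v2beta1 API explicitly (a sketch using the fully-qualified resource.version.group syntax, assuming a logged-in `oc` client):

```shell
# Fetch the same HPA via the autoscaling/v2beta1 API instead of the default v1
oc get horizontalpodautoscalers.v2beta1.autoscaling hpa-resource-metrics-memory -o yaml
```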


From steps 2 and 4 we can see that the apiVersion of the HPA resource is being changed: it was created as autoscaling/v2beta1 but is returned as autoscaling/v1.


Actual results:
The HPA is failing.

Expected results:
The pods should be able to autoscale based on the Memory utilization.


Additional info:

This is a new bug for OCP 3.11; the same bug was reported for OCP 3.9 and appears to be fixed there:
https://bugzilla.redhat.com/show_bug.cgi?id=1540526

Comment 2 Seth Jennings 2019-05-16 13:43:43 UTC

*** This bug has been marked as a duplicate of bug 1707785 ***

