Bug 1540526 - HPA controller was unable to get the target's current scale: no matches for /, Kind=DeploymentConfig
Summary: HPA controller was unable to get the target's current scale: no matches for /...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Node
Version: 3.9.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
: 3.9.0
Assignee: Solly Ross
QA Contact: DeShuai Ma
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-01-31 10:02 UTC by DeShuai Ma
Modified: 2018-03-28 14:25 UTC (History)
7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-03-28 14:24:57 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2018:0489 0 None None None 2018-03-28 14:25:20 UTC

Description DeShuai Ma 2018-01-31 10:02:44 UTC
Description of problem:
The HPA generated by the `oc autoscale` command can't get metrics; it always fails with "FailedGetScale  the HPA controller was unable to get the target's current scale: no matches for /, Kind=DeploymentConfig"

Version-Release number of selected component (if applicable):
openshift v3.9.0-0.34.0
kubernetes v1.9.1+a0ce1bc657
etcd 3.2.8

How reproducible:
Always


Steps to Reproduce:
1.Create a dc
# oc create -f https://raw.githubusercontent.com/openshift-qe/v3-testfiles/master/hpa/dc-hello-openshift.yaml -n dma

2.Create the hpa by `oc autoscale` command
oc autoscale dc/hello-openshift --min=2 --max=10 -n dma

3.Check the hpa status
# oc get hpa -n dma
NAME              REFERENCE                          TARGETS           MINPODS   MAXPODS   REPLICAS   AGE
hello-openshift   DeploymentConfig/hello-openshift   <unknown> / 80%   2         10        0          34s
# oc describe hpa hello-openshift -n dma
Name:                                                  hello-openshift
Namespace:                                             dma
Labels:                                                <none>
Annotations:                                           <none>
CreationTimestamp:                                     Wed, 31 Jan 2018 09:51:13 +0000
Reference:                                             DeploymentConfig/hello-openshift
Metrics:                                               ( current / target )
  resource cpu on pods  (as a percentage of request):  <unknown> / 80%
Min replicas:                                          2
Max replicas:                                          10
Conditions:
  Type         Status  Reason          Message
  ----         ------  ------          -------
  AbleToScale  False   FailedGetScale  the HPA controller was unable to get the target's current scale: no matches for /, Kind=DeploymentConfig
Events:
  Type     Reason          Age   From                       Message
  ----     ------          ----  ----                       -------
  Warning  FailedGetScale  14s   horizontal-pod-autoscaler  no matches for /, Kind=DeploymentConfig

# oc get hpa hello-openshift -n dma -o yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  annotations:
    autoscaling.alpha.kubernetes.io/conditions: '[{"type":"AbleToScale","status":"False","lastTransitionTime":"2018-01-31T09:51:43Z","reason":"FailedGetScale","message":"the
      HPA controller was unable to get the target''s current scale: no matches for
      /, Kind=DeploymentConfig"}]'
  creationTimestamp: 2018-01-31T09:51:13Z
  name: hello-openshift
  namespace: dma
  resourceVersion: "36303"
  selfLink: /apis/autoscaling/v1/namespaces/dma/horizontalpodautoscalers/hello-openshift
  uid: 45f4727e-066c-11e8-991b-fa163e00917e
spec:
  maxReplicas: 10
  minReplicas: 2
  scaleTargetRef:
    apiVersion: v1
    kind: DeploymentConfig
    name: hello-openshift
  targetCPUUtilizationPercentage: 80
status:
  currentReplicas: 0
  desiredReplicas: 0


Actual results:
3. Failed to get metrics.

Expected results:
3. Metrics should be retrieved successfully.

Additional info:
After updating spec.scaleTargetRef.apiVersion to "apps.openshift.io/v1", the HPA can get metrics successfully.
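
For reference, the workaround amounts to replacing the unqualified `apiVersion: v1` in the HPA's scale target with the group-qualified form. A minimal sketch of the corrected `scaleTargetRef`, using the names from the reproduction above:

```yaml
# Corrected scaleTargetRef for the HPA above. The DeploymentConfig is
# addressed via its group-qualified apiVersion, so discovery-based
# clients can resolve the mapping.
spec:
  scaleTargetRef:
    apiVersion: apps.openshift.io/v1  # was "v1", which has no group and fails discovery
    kind: DeploymentConfig
    name: hello-openshift
```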

Comment 1 Solly Ross 2018-01-31 16:46:41 UTC
Yeah, looks like `oc autoscale` is generating an HPA with the old deploymentconfig group version, which isn't understandable by most discovery-based clients -- we need to fill in an actual group.  Wonder why `oc autoscale` isn't filling in the new group name.  Is this an old client?  Will try to repro locally and see what's going on.

Comment 2 Solly Ross 2018-01-31 19:24:33 UTC
Yep, looks like we're preferring /oapi ATM, which is causing issues.  Will talk to the master and/or cli teams to figure out the best way forward.

Comment 3 Solly Ross 2018-01-31 20:16:17 UTC
(FWIW, `oc autoscale dc.apps.openshift.io/foo` works fine; it's just the form that's unqualified by a group name that doesn't work).

Comment 4 Seth Jennings 2018-01-31 23:40:29 UTC
Origin PR:
https://github.com/openshift/origin/pull/18380

Comment 6 weiwei jiang 2018-02-22 06:05:46 UTC
Checked with 
# openshift version 
openshift v3.9.0-0.47.0
kubernetes v1.9.1+a0ce1bc657
etcd 3.2.8

and cannot reproduce this issue now.

Comment 9 errata-xmlrpc 2018-03-28 14:24:57 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:0489

