Description of problem:
The HPA generated by the `oc autoscale` command can't get metrics; it always fails with "FailedGetScale  the HPA controller was unable to get the target's current scale: no matches for /, Kind=DeploymentConfig".

Version-Release number of selected component (if applicable):
openshift v3.9.0-0.34.0
kubernetes v1.9.1+a0ce1bc657
etcd 3.2.8

How reproducible:
Always

Steps to Reproduce:
1. Create a dc:
# oc create -f https://raw.githubusercontent.com/openshift-qe/v3-testfiles/master/hpa/dc-hello-openshift.yaml -n dma
2. Create the HPA with the `oc autoscale` command:
# oc autoscale dc/hello-openshift --min=2 --max=10 -n dma
3. Check the HPA status:
# oc get hpa -n dma
NAME              REFERENCE                          TARGETS          MINPODS   MAXPODS   REPLICAS   AGE
hello-openshift   DeploymentConfig/hello-openshift   <unknown> / 80%  2         10        0          34s

# oc describe hpa hello-openshift -n dma
Name:                                                  hello-openshift
Namespace:                                             dma
Labels:                                                <none>
Annotations:                                           <none>
CreationTimestamp:                                     Wed, 31 Jan 2018 09:51:13 +0000
Reference:                                             DeploymentConfig/hello-openshift
Metrics:                                               ( current / target )
  resource cpu on pods (as a percentage of request):   <unknown> / 80%
Min replicas:                                          2
Max replicas:                                          10
Conditions:
  Type         Status  Reason          Message
  ----         ------  ------          -------
  AbleToScale  False   FailedGetScale  the HPA controller was unable to get the target's current scale: no matches for /, Kind=DeploymentConfig
Events:
  Type     Reason          Age  From                       Message
  ----     ------          ---- ----                       -------
  Warning  FailedGetScale  14s  horizontal-pod-autoscaler  no matches for /, Kind=DeploymentConfig

# oc get hpa hello-openshift -n dma -o yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  annotations:
    autoscaling.alpha.kubernetes.io/conditions: '[{"type":"AbleToScale","status":"False","lastTransitionTime":"2018-01-31T09:51:43Z","reason":"FailedGetScale","message":"the HPA controller was unable to get the target''s current scale: no matches for /, Kind=DeploymentConfig"}]'
  creationTimestamp: 2018-01-31T09:51:13Z
  name: hello-openshift
  namespace: dma
  resourceVersion: "36303"
  selfLink: /apis/autoscaling/v1/namespaces/dma/horizontalpodautoscalers/hello-openshift
  uid: 45f4727e-066c-11e8-991b-fa163e00917e
spec:
  maxReplicas: 10
  minReplicas: 2
  scaleTargetRef:
    apiVersion: v1
    kind: DeploymentConfig
    name: hello-openshift
  targetCPUUtilizationPercentage: 80
status:
  currentReplicas: 0
  desiredReplicas: 0

Actual results:
3. Failed to get metrics.

Expected results:
3. Should get metrics successfully.

Additional info:
After updating spec.scaleTargetRef.apiVersion to "apps.openshift.io/v1", the HPA can get metrics successfully.
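For reference, a minimal sketch of the workaround described above, assuming the HPA and namespace from the reproduction steps; a strategic-merge patch via `oc patch` should update the target reference in place:

# oc patch hpa hello-openshift -n dma -p '{"spec":{"scaleTargetRef":{"apiVersion":"apps.openshift.io/v1"}}}'

(Equivalently, run `oc edit hpa hello-openshift -n dma` and change spec.scaleTargetRef.apiVersion by hand.)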
Yeah, looks like `oc autoscale` is generating an HPA with the legacy, group-less deploymentconfig API version, which isn't understandable by most discovery-based clients -- we need to fill in an actual group. Wonder why `oc autoscale` isn't filling in the new group name. Is this an old client? Will try to repro locally and see what's going on.
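For context, the scaleTargetRef the client writes today carries the bare "v1" apiVersion (see the YAML in the report); based on the working apiVersion noted there, the generated ref presumably ought to look like:

  scaleTargetRef:
    apiVersion: apps.openshift.io/v1
    kind: DeploymentConfig
    name: hello-openshift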
Yep, looks like we're preferring the legacy /oapi endpoint at the moment, which is causing issues. Will talk to the master and/or cli teams to figure out the best way forward.
(FWIW, `oc autoscale dc.apps.openshift.io/foo` works fine; it's just the form that's unqualified by a group name that doesn't work.)
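For example, the group-qualified form below (reusing the dc from the reproduction steps) produces a working HPA, while the plain dc/hello-openshift form hits the error above:

# oc autoscale dc.apps.openshift.io/hello-openshift --min=2 --max=10 -n dma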
Origin PR: https://github.com/openshift/origin/pull/18380
Checked with:
# openshift version
openshift v3.9.0-0.47.0
kubernetes v1.9.1+a0ce1bc657
etcd 3.2.8
and cannot reproduce this issue now.
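For anyone re-verifying, a rough sketch of the check, i.e. rerunning the reproduction steps and confirming the scale lookup now succeeds (the grep is just illustrative):

# oc autoscale dc/hello-openshift --min=2 --max=10 -n dma
# oc get hpa hello-openshift -n dma -o yaml | grep -A 3 scaleTargetRef
# oc describe hpa hello-openshift -n dma

The describe output should no longer show the FailedGetScale / "no matches for /, Kind=DeploymentConfig" condition.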
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2018:0489