The web console was incorrectly assigning extensions/v1beta1 as the apiVersion when creating HPA resources, regardless of the actual group of the scale target. So we were potentially generating any of the following as scale targets:

  extensions/v1beta1.DeploymentConfig
  extensions/v1beta1.Deployment
  extensions/v1beta1.ReplicationController
  extensions/v1beta1.ReplicaSet

These HPAs need to be migrated on upgrade to use the correct apiVersion for the target resource.

+++ This bug was initially created as a clone of Bug #1540916 +++

Description of problem:
`oc status` should not report that a dc doesn't exist when its HPA was created by the web console.

Version-Release number of selected component (if applicable):
OpenShift Master: v3.9.0-0.24.0
Kubernetes Master: v1.9.1+a0ce1bc657
oc v3.9.0-0.31.0

How reproducible:
Always

Steps to Reproduce:
1. For project1:
   $ oc new-app openshift/hello-openshift --name myapp
   $ oc autoscale dc myapp --max=4
   $ oc status
2. For project2:
   $ oc new-app openshift/hello-openshift --name myapp
   Log in to the web console and add an HPA for dc myapp.
   $ oc status
3. For project1 & project2:
   $ oc get hpa myapp -o yaml

Actual results:
1. $ oc status
   No error info.
2. $ oc status
   Errors:
    * hpa/myapp is attempting to scale DeploymentConfig/myapp, which doesn't exist
   ...
3. HPA yaml in project1:
     scaleTargetRef:
       apiVersion: v1
       kind: DeploymentConfig
       name: hello-openshift
   HPA yaml in project2:
     scaleTargetRef:
       apiVersion: extensions/v1beta1
       kind: DeploymentConfig
       name: hello-openshift

Expected results:
Should work without the "doesn't exist" error, like the CLI.

Additional info:
CLI fix: https://bugzilla.redhat.com/show_bug.cgi?id=1534956#c2

--- Additional comment from Samuel Padgett on 2018-02-01 08:17:04 EST ---

https://github.com/openshift/origin-web-console/pull/2748

--- Additional comment from Samuel Padgett on 2018-02-07 10:41:23 EST ---

The PR from comment #1 is replaced by https://github.com/openshift/origin-web-console/pull/2776
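For illustration, here is a broken scaleTargetRef as the console would have written it, next to the corrected form for a DeploymentConfig target (a hypothetical example using the `myapp` name from the reproduction steps; the corrected group-version matches what the migration produces later in this bug):

```yaml
# Broken: the console hardcoded extensions/v1beta1, but DeploymentConfig
# is not served from the extensions group, so the HPA cannot resolve it.
scaleTargetRef:
  apiVersion: extensions/v1beta1
  kind: DeploymentConfig
  name: myapp

# Fixed: DeploymentConfig is served from the apps.openshift.io group.
scaleTargetRef:
  apiVersion: apps.openshift.io/v1
  kind: DeploymentConfig
  name: myapp
```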
PR up at https://github.com/openshift/origin/pull/18517
3.9.3 doesn't have the fix, will check in v3.9.4 when it's ready to test
Issue 1: Should support the --initial flag (don't report "unknown flag: --initial") in step 4.
Issue 2: Migrating legacy HPAs should not introduce more errors.
Above was checked on:

# oc version
oc v3.9.4
kubernetes v1.9.1+a0ce1bc657
features: Basic-Auth GSSAPI Kerberos SPNEGO
Server <server>
openshift v3.9.4
kubernetes v1.9.1+a0ce1bc657
Issue 1: No, it shouldn't; the help text is incorrect. Will put a PR up for that.

Issue 2: Nothing's broken, except the object graph library: https://github.com/openshift/origin/blob/master/pkg/oc/graph/kubegraph/edges.go#L254-L266. Hardcoding things is rarely the right thing to do.

At any rate, https://github.com/openshift/origin/pull/18926 fixes issue 1, and issue 2 needs a fix to the graph library, but it's not a bug in the migrate command AFAICT. Validation steps should actually check that the HPA is working and has the correct scale target ref, not *just* that `oc status` doesn't complain.
filed https://github.com/openshift/origin/issues/18927 for issue 2
Thanks Solly. I checked that the autoscaling function works well despite the errors in `oc status`. I'll wait for PR 18926 to merge before verifying this bug. For the `oc status` issue, I think I should open a separate bug to track it. WDYT?
Short-term fix for `oc status` up at https://github.com/openshift/origin/pull/18950
Issue to track `oc status`: https://github.com/openshift/origin/issues/18927
Bug 1554624 was opened to track the `oc status` fix in https://github.com/openshift/origin/issues/18927, because GitHub issues are hard to track for a release and easily missed.
Per Derek, Solly's fixes will be reviewed and applied in 3.9.z.
Note to QE: Read https://bugzilla.redhat.com/show_bug.cgi?id=1543043#c10 for guidance on verifying this. https://github.com/openshift/origin/pull/18926 fixes the cosmetic issue in the `migrate legacy-hpa` subcommand help, but it is not part of the verification of this bug.
Tested on OCP v3.10.0-0.58.0.

Prepared a dc, deployment, RS, and rc, and created an HPA for each resource from the web console separately. Checked that `oc status` now has no error about the HPA scale ref.

Here are the resource apiVersion and the HPA ref apiVersion, before and after migrating:

1. For dc:

$ oc get dc myrundc -o yaml |grep apiVersion -A 2
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:

Before migrating:
$ oc get hpa myrundc -o yaml |grep scaleTargetRef -A 3
  scaleTargetRef:
    apiVersion: v1
    kind: DeploymentConfig
    name: myrundc

After migrating:
$ oc get hpa myrundc -o yaml |grep scaleTargetRef -A 3
  scaleTargetRef:
    apiVersion: apps.openshift.io/v1
    kind: DeploymentConfig
    name: myrundc

2. For deployment:

$ oc get deployment hello-openshift -o yaml |grep apiVersion -A 2
apiVersion: extensions/v1beta1
kind: Deployment
metadata:

Before migrating:
$ oc get hpa hello-openshift -o yaml |grep scaleTargetRef -A 3
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hello-openshift

After migrating:
$ oc get hpa hello-openshift -o yaml |grep scaleTargetRef -A 3
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hello-openshift

3. For RS:

$ oc get rs frontend -o yaml |grep apiVersion -A 2
apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:

Before migrating:
$ oc get hpa frontend -o yaml |grep scaleTargetRef -A 3
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: ReplicaSet
    name: frontend

After migrating:
$ oc get hpa frontend -o yaml |grep scaleTargetRef -A 3
  scaleTargetRef:
    apiVersion: apps/v1
    kind: ReplicaSet
    name: frontend

4. For rc:

$ oc get rc myrunrc -o yaml |grep apiVersion -A 2
apiVersion: v1
kind: ReplicationController
metadata:

Before migrating:
$ oc get hpa myrunrc -o yaml |grep scaleTargetRef -A 3
  scaleTargetRef:
    apiVersion: v1
    kind: ReplicationController
    name: myrunrc

After migrating:
$ oc get hpa myrunrc -o yaml |grep scaleTargetRef -A 3
  scaleTargetRef:
    apiVersion: v1
    kind: ReplicationController
    name: myrunrc

Are these the expected results?
Yep, those are expected. In general (at the moment):

- anything from OpenShift itself (/oapi) should be migrated to the aggregated API version (e.g. `v1.DeploymentConfig` --> `apps.openshift.io/v1.DeploymentConfig`)
- anything in `extensions` (e.g. `extensions/v1beta1.ReplicaSet`) should be migrated to the proper equivalent in a non-extensions API group (e.g. `apps/v1.ReplicaSet`)
- anything with a completely incorrect group-version should be fixed (e.g. `extensions/v1beta1.DeploymentConfig`)
- anything else stays the same
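A minimal sketch of those rules as a lookup table (illustrative only, not the actual `oc adm migrate` implementation; the `migrate_scale_target_ref` helper and the `MIGRATIONS` table are hypothetical names, with kinds and group-versions taken from the cases above):

```python
# Sketch of the migration rules above as a simple lookup table.
# (kind, old apiVersion) -> corrected apiVersion for the HPA scaleTargetRef.
MIGRATIONS = {
    # legacy /oapi version -> aggregated API group
    ("DeploymentConfig", "v1"): "apps.openshift.io/v1",
    # completely incorrect group-version -> fixed
    ("DeploymentConfig", "extensions/v1beta1"): "apps.openshift.io/v1",
    # extensions resources -> proper non-extensions group
    ("Deployment", "extensions/v1beta1"): "apps/v1",
    ("ReplicaSet", "extensions/v1beta1"): "apps/v1",
}

def migrate_scale_target_ref(ref):
    """Return a scaleTargetRef dict with a corrected apiVersion.

    Anything not covered by the table (e.g. v1.ReplicationController)
    stays the same, per the last rule above.
    """
    new_version = MIGRATIONS.get((ref["kind"], ref["apiVersion"]),
                                 ref["apiVersion"])
    return {**ref, "apiVersion": new_version}
```

Applying this to the four verification cases in the previous comment reproduces the observed before/after pairs: the dc moves from `v1` to `apps.openshift.io/v1`, the extensions RS moves to `apps/v1`, and the rc is left untouched.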
When creating an app from an image and setting an HPA at the same time (on the "/create/fromimage?" page), the HPA has an incorrect ref apiVersion after creation and cannot work normally:

$ oc get hpa pytest -o yaml |grep scaleTargetRef -A 3
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: DeploymentConfig
    name: pytest

$ oc get hpa pytest
pytest   DeploymentConfig/pytest   <unknown>/18%   1   4   0   1m

Checking the events page, there is a warning:

pytest   Horizontal Pod Autoscaler   Warning   Failed Get Scale   no matches for kind "DeploymentConfig" in group "extensions"
That's not a bug with this code. That's a bug with the dashboard. Please track that against the dashboard, and verify this bug if migrate works.
According to Comment 20 and Comment 21, the migrate works well, so I will verify this bug and move the issue in Comment 22 to a separate bug.