Bug 1543043

Summary: Need to migrate incorrect group/version for HPAs created by web console
Product: OpenShift Container Platform
Component: Node
Version: 3.9.0
Target Release: 3.10.0
Reporter: Samuel Padgett <spadgett>
Assignee: Seth Jennings <sjenning>
QA Contact: Yadan Pei <yapei>
Status: CLOSED CURRENTRELEASE
Severity: medium
Priority: medium
Hardware: Unspecified
OS: Unspecified
CC: aos-bugs, avagarwa, hasha, jokerman, mmccomas, sjenning, spadgett, vlaad, yanpzhan, yapei
Clone Of: 1540916
Bug Depends On: 1540916
Last Closed: 2018-10-08 13:10:19 UTC
Type: Bug

Description Samuel Padgett 2018-02-07 15:45:58 UTC
The web console was incorrectly assigning extensions/v1beta1 as the apiVersion when creating HPA resources, regardless of the actual group of the scale target. So we were potentially generating any of the following as scale targets:

extensions/v1beta1.DeploymentConfig
extensions/v1beta1.Deployment
extensions/v1beta1.ReplicationController
extensions/v1beta1.ReplicaSet

These HPAs need to be migrated on upgrade to use the correct apiVersion for the target resource.
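
A minimal sketch of the migration invocation (assuming the 'migrate legacy-hpa' subcommand referenced in comment 17, and that it follows the usual dry-run-by-default convention of other 'oc adm migrate' subcommands; the flags are assumptions, not confirmed output from this bug):

# Dry run: report which HPAs have a legacy/incorrect scaleTargetRef group/version
$ oc adm migrate legacy-hpa

# Apply the migration after reviewing the dry-run output
$ oc adm migrate legacy-hpa --confirm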


+++ This bug was initially created as a clone of Bug #1540916 +++

Description of problem:
 "oc status" should not report that the dc doesn't exist when its HPA was created by the web console

Version-Release number of selected component (if applicable):
 OpenShift Master: v3.9.0-0.24.0
 Kubernetes Master: v1.9.1+a0ce1bc657

 oc v3.9.0-0.31.0

How reproducible:
Always

Steps to Reproduce:
1.
for project1:
$ oc new-app openshift/hello-openshift --name myapp
$ oc autoscale dc myapp --max=4
$ oc status
2.
for project2:
$ oc new-app openshift/hello-openshift --name myapp
login web console, add hpa for dc myapp
$ oc status

3. 
for project1 & project2:
 $ oc get hpa myapp -o yaml

Actual results:
1. $ oc status
no error info

2. $ oc status

Errors:
  * hpa/myapp is attempting to scale DeploymentConfig/myapp, which doesn't exist
...

3.
hpa yaml file in project1:

  scaleTargetRef:
    apiVersion: v1
    kind: DeploymentConfig
    name: hello-openshift

hpa yaml file in project2:

  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: DeploymentConfig
    name: hello-openshift  

Expected results:
 The HPA created from the web console should work without the "doesn't exist" error, the same as one created via the CLI.

Additional info:
 cli fixed in: https://bugzilla.redhat.com/show_bug.cgi?id=1534956#c2

--- Additional comment from Samuel Padgett on 2018-02-01 08:17:04 EST ---

https://github.com/openshift/origin-web-console/pull/2748

--- Additional comment from Samuel Padgett on 2018-02-07 10:41:23 EST ---

The PR from comment #1 is replaced by

https://github.com/openshift/origin-web-console/pull/2776

Comment 1 Solly Ross 2018-02-07 22:32:31 UTC
PR up at https://github.com/openshift/origin/pull/18517

Comment 6 Yadan Pei 2018-03-09 01:00:48 UTC
3.9.3 doesn't have the fix; will check in v3.9.4 when it's ready to test.

Comment 8 Yadan Pei 2018-03-09 08:47:29 UTC
Issue 1: The --initial flag should be supported (it should not report "unknown flag: --initial") in step 4.

Issue 2: Migrating legacy HPAs should not introduce more errors.

Comment 9 Yadan Pei 2018-03-09 08:51:46 UTC
Above was checked on:
# oc version
oc v3.9.4
kubernetes v1.9.1+a0ce1bc657
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server <server>
openshift v3.9.4
kubernetes v1.9.1+a0ce1bc657

Comment 10 Solly Ross 2018-03-09 21:49:10 UTC
Issue 1: no, it shouldn't; the help text is incorrect. Will put a PR up for that.
Issue 2: Nothing's broken, except the object graph library: https://github.com/openshift/origin/blob/master/pkg/oc/graph/kubegraph/edges.go#L254-L266.  Hardcoding things is rarely the right thing to do.

At any rate, https://github.com/openshift/origin/pull/18926 fixes issue 1, and issue 2 needs a fix to the graph library, but it's not a bug in the migrate command AFAICT.

Validation steps should actually check that the HPA is working and has the correct scale target ref, not *just* that `oc status` doesn't complain.
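
For example, a hypothetical spot-check (the jsonpath expression and the resource name are illustrative, not taken from this bug):

# Confirm the migrated target ref; for a dc this should be apps.openshift.io/v1/DeploymentConfig
$ oc get hpa myapp -o jsonpath='{.spec.scaleTargetRef.apiVersion}/{.spec.scaleTargetRef.kind}'

# Confirm the HPA is actually functioning: events should show no FailedGetScale warnings
$ oc describe hpa myapp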

Comment 11 Solly Ross 2018-03-09 22:00:00 UTC
filed https://github.com/openshift/origin/issues/18927 for issue 2

Comment 12 Yadan Pei 2018-03-12 03:03:43 UTC
Thanks Solly, I checked that the autoscaling function works well despite the errors in "oc status".

I will wait for PR 18926 to merge before verifying this bug.

For the "oc status" issue, I think I should open a separate bug to track it. WDYT?

Comment 13 Solly Ross 2018-03-12 19:28:31 UTC
short-term fix for `oc status` up at https://github.com/openshift/origin/pull/18950
bug to track `oc status`: https://github.com/openshift/origin/issues/18927

Comment 14 Yadan Pei 2018-03-13 03:05:10 UTC
Bug 1554624 was opened to track the "oc status" fix in https://github.com/openshift/origin/issues/18927, because GitHub issues are hard to track for a release and easily missed.

Comment 15 N. Harrison Ripps 2018-03-15 13:48:07 UTC
Per Derek, Solly's fixes will be reviewed and applied in 3.9.z.

Comment 17 Seth Jennings 2018-04-27 20:17:13 UTC
Note to QE:

Read https://bugzilla.redhat.com/show_bug.cgi?id=1543043#c10 for guidance on verifying this.

I'm moving https://github.com/openshift/origin/pull/18926, which fixes the cosmetic issue in the 'migrate legacy-hpa' subcommand help but is not part of the verification of this bug.

Comment 20 Yanping Zhang 2018-06-05 10:14:54 UTC
Tested on OCP v3.10.0-0.58.0.
Prepared a dc, deployment, RS, and rc, and created an HPA for each resource from the web console separately.
Checked that "oc status" now shows no errors about the HPA scale ref.
Below are the resource apiVersion and the HPA scaleTargetRef apiVersion, both before and after migrating.
1.For dc
$ oc get dc myrundc -o yaml |grep apiVersion -A 2
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
Before migrating:
$ oc get hpa myrundc -o yaml |grep scaleTargetRef -A 3
  scaleTargetRef:
    apiVersion: v1
    kind: DeploymentConfig
    name: myrundc
After migrating:
$ oc get hpa myrundc -o yaml |grep scaleTargetRef -A 3
  scaleTargetRef:
    apiVersion: apps.openshift.io/v1
    kind: DeploymentConfig
    name: myrundc

2.For deployment
$ oc get deployment hello-openshift -o yaml |grep apiVersion -A 2
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
Before migrating:
$ oc get hpa hello-openshift -o yaml |grep scaleTargetRef -A 3
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hello-openshift
After migrating:
$ oc get hpa hello-openshift -o yaml |grep scaleTargetRef -A 3
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hello-openshift

3.For RS
$ oc get rs frontend -o yaml |grep apiVersion -A 2
apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
Before migrating:
$ oc get hpa frontend -o yaml |grep scaleTargetRef -A 3
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: ReplicaSet
    name: frontend
After migrating:
$ oc get hpa frontend -o yaml |grep scaleTargetRef -A 3
  scaleTargetRef:
    apiVersion: apps/v1
    kind: ReplicaSet
    name: frontend

4.For rc
$ oc get rc myrunrc -o yaml |grep apiVersion -A 2
apiVersion: v1
kind: ReplicationController
metadata:
Before migrating:
$ oc get hpa myrunrc -o yaml |grep scaleTargetRef -A 3
  scaleTargetRef:
    apiVersion: v1
    kind: ReplicationController
    name: myrunrc
After migrating:
$ oc get hpa myrunrc -o yaml |grep scaleTargetRef -A 3
  scaleTargetRef:
    apiVersion: v1
    kind: ReplicationController
    name: myrunrc

Are these the expected results?

Comment 21 Solly Ross 2018-06-05 16:50:36 UTC
Yep, those are expected.  In general (at the moment):

- anything from OpenShift itself (/oapi) should be migrated to the aggregated API version (e.g. `v1.DeploymentConfig` --> `apps.openshift.io/v1.DeploymentConfig`)

- anything in `extensions` (e.g. `extensions/v1beta1.ReplicaSet`) should be migrated to the proper equivalent in a non-extensions API group (e.g. `apps/v1.ReplicaSet`)

- anything with a completely incorrect group-version should be fixed (e.g. `extensions/v1beta1.DeploymentConfig`)

- anything else stays the same
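
One way to audit every HPA's scaleTargetRef against these rules in a single pass (a sketch; the jsonpath expression is my own, not taken from this bug):

$ oc get hpa --all-namespaces -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{.spec.scaleTargetRef.apiVersion}.{.spec.scaleTargetRef.kind}{"\n"}{end}'

# Lines still showing extensions/v1beta1, or v1 with an OpenShift kind such as
# DeploymentConfig, would need migration; v1.ReplicationController stays as-is.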

Comment 22 Yanping Zhang 2018-06-06 03:33:04 UTC
When creating an app from an image and setting an HPA at the same time, on the "/create/fromimage?" page, the created HPA has an incorrect scaleTargetRef apiVersion and cannot work normally:
$ oc get hpa pytest -o yaml |grep scaleTargetRef -A 3
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: DeploymentConfig
    name: pytest
$ oc get hpa
pytest                 DeploymentConfig/pytest           <unknown>/18%   1         4         0          1m

Check on the events page, there is warning info:
pytest 	Horizontal Pod Autoscaler 	Warning 	Failed Get Scale  	no matches for kind "DeploymentConfig" in group "extensions"

Comment 23 Solly Ross 2018-06-11 17:22:23 UTC
That's not a bug with this code.  That's a bug with the dashboard.  Please track that against the dashboard, and verify this bug if migrate works.

Comment 24 Yanping Zhang 2018-06-12 02:59:50 UTC
According to Comment 20 and Comment 21, the migration works well. I will verify this bug and move the issue in Comment 22 to a separate bug.