Bug 1543043
Summary: | Need to migrate incorrect group/version for HPAs created by web console | |
---|---|---|---
Product: | OpenShift Container Platform | Reporter: | Samuel Padgett <spadgett>
Component: | Node | Assignee: | Seth Jennings <sjenning>
Status: | CLOSED CURRENTRELEASE | QA Contact: | Yadan Pei <yapei>
Severity: | medium | Docs Contact: |
Priority: | medium | |
Version: | 3.9.0 | CC: | aos-bugs, avagarwa, hasha, jokerman, mmccomas, sjenning, spadgett, vlaad, yanpzhan, yapei
Target Milestone: | --- | |
Target Release: | 3.10.0 | |
Hardware: | Unspecified | |
OS: | Unspecified | |
Whiteboard: | | |
Fixed In Version: | | Doc Type: | If docs needed, set a value
Doc Text: | | Story Points: | ---
Clone Of: | 1540916 | Environment: |
Last Closed: | 2018-10-08 13:10:19 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Bug Depends On: | 1540916 | |
Bug Blocks: | | |
Description
Samuel Padgett
2018-02-07 15:45:58 UTC
3.9.3 doesn't have the fix; will check in v3.9.4 when it's ready to test.

Issue 1: Should support the --initial flag (should not report "unknown flag: --initial") in step 4.
Issue 2: Migrating legacy HPAs should not introduce more errors.

The above was checked on:

```
# oc version
oc v3.9.4
kubernetes v1.9.1+a0ce1bc657
features: Basic-Auth GSSAPI Kerberos SPNEGO
Server <server>
openshift v3.9.4
kubernetes v1.9.1+a0ce1bc657
```

Issue 1: No, it shouldn't; the help text is incorrect. Will put a PR up for that.
Issue 2: Nothing is broken except the object graph library: https://github.com/openshift/origin/blob/master/pkg/oc/graph/kubegraph/edges.go#L254-L266. Hardcoding things is rarely the right thing to do.

At any rate, https://github.com/openshift/origin/pull/18926 fixes issue 1, and issue 2 needs a fix to the graph library, but it's not a bug in the migrate command AFAICT. Validation steps should actually check that the HPA is working and has the correct scale target ref, not *just* that `oc status` doesn't complain.

Filed https://github.com/openshift/origin/issues/18927 for issue 2.

Thanks Solly. I checked that the autoscaling function works well despite the errors in `oc status`. I will wait for PR 18926 to merge before verifying this bug. For the `oc status` issue, I think I should open a separate bug to track it, WDYT?

Short-term fix for `oc status` is up at https://github.com/openshift/origin/pull/18950; the issue tracking `oc status` is https://github.com/openshift/origin/issues/18927.

Bug 1554624 was opened to track the `oc status` fix in https://github.com/openshift/origin/issues/18927, because GitHub issues are hard to track for a release and easily missed. Per Derek, Solly's fixes will be reviewed and applied in 3.9.z.

Note to QE: read https://bugzilla.redhat.com/show_bug.cgi?id=1543043#c10 for guidance on verifying this. https://github.com/openshift/origin/pull/18926 fixes the cosmetic issue in the 'migrate legacy-hpa' subcommand help, but that fix is not part of the verification of this bug.

Tested on OCP v3.10.0-0.58.0. Prepared a dc, a deployment, an RS, and an rc, and created an HPA for each resource from the web console separately. Checked that `oc status` no longer reports errors about the HPA scale ref. Below are the resource apiVersion and the HPA scaleTargetRef apiVersion for each resource, before and after migrating; a sketch of the migration and check commands comes first, then the per-resource output.
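For context, here is a minimal sketch of how the migration and the checks above might be driven end to end. It assumes the `migrate legacy-hpa` subcommand referenced in this bug is invoked as `oc adm migrate legacy-hpa`; the `--confirm` flag is an assumption based on the convention of other `oc adm migrate` subcommands and is not confirmed by this bug.

```sh
# Hedged sketch, not the verbatim verification steps from this bug.

# Dry run: report which HPAs carry a legacy or incorrect scaleTargetRef group/version.
oc adm migrate legacy-hpa

# Apply the migration (--confirm is assumed from other `oc adm migrate` subcommands).
oc adm migrate legacy-hpa --confirm

# Spot-check one HPA: the scaleTargetRef apiVersion should now match the target's
# real group/version (e.g. apps.openshift.io/v1 for a DeploymentConfig).
oc get hpa myrundc -o yaml | grep scaleTargetRef -A 3

# oc status should no longer complain about the HPA's scale target reference.
oc status
```

The per-resource before/after output gathered during verification follows.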
1. For dc:

```
$ oc get dc myrundc -o yaml | grep apiVersion -A 2
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
```

Before migrating:

```
$ oc get hpa myrundc -o yaml | grep scaleTargetRef -A 3
scaleTargetRef:
  apiVersion: v1
  kind: DeploymentConfig
  name: myrundc
```

After migrating:

```
$ oc get hpa myrundc -o yaml | grep scaleTargetRef -A 3
scaleTargetRef:
  apiVersion: apps.openshift.io/v1
  kind: DeploymentConfig
  name: myrundc
```

2. For deployment:

```
$ oc get deployment hello-openshift -o yaml | grep apiVersion -A 2
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
```

Before migrating:

```
$ oc get hpa hello-openshift -o yaml | grep scaleTargetRef -A 3
scaleTargetRef:
  apiVersion: apps/v1
  kind: Deployment
  name: hello-openshift
```

After migrating:

```
$ oc get hpa hello-openshift -o yaml | grep scaleTargetRef -A 3
scaleTargetRef:
  apiVersion: apps/v1
  kind: Deployment
  name: hello-openshift
```

3. For RS:

```
$ oc get rs frontend -o yaml | grep apiVersion -A 2
apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
```

Before migrating:

```
$ oc get hpa frontend -o yaml | grep scaleTargetRef -A 3
scaleTargetRef:
  apiVersion: extensions/v1beta1
  kind: ReplicaSet
  name: frontend
```

After migrating:

```
$ oc get hpa frontend -o yaml | grep scaleTargetRef -A 3
scaleTargetRef:
  apiVersion: apps/v1
  kind: ReplicaSet
  name: frontend
```

4. For rc:

```
$ oc get rc myrunrc -o yaml | grep apiVersion -A 2
apiVersion: v1
kind: ReplicationController
metadata:
```

Before migrating:

```
$ oc get hpa myrunrc -o yaml | grep scaleTargetRef -A 3
scaleTargetRef:
  apiVersion: v1
  kind: ReplicationController
  name: myrunrc
```

After migrating:

```
$ oc get hpa myrunrc -o yaml | grep scaleTargetRef -A 3
scaleTargetRef:
  apiVersion: v1
  kind: ReplicationController
  name: myrunrc
```

Are these the expected results?

Yep, those are expected. In general (at the moment):

- anything from OpenShift itself (/oapi) should be migrated to the aggregated API version (e.g. `v1.DeploymentConfig` --> `apps.openshift.io/v1.DeploymentConfig`)
- anything in `extensions` (e.g. `extensions/v1beta1.ReplicaSet`) should be migrated to the proper equivalent in a non-extensions API group (e.g. `apps/v1.ReplicaSet`)
- anything with a completely incorrect group-version should be fixed (e.g. `extensions/v1beta1.DeploymentConfig`)
- anything else stays the same

When creating an app from an image and setting an HPA at the same time (on the "/create/fromimage?" page), the HPA has an incorrect scaleTargetRef apiVersion after creation and cannot work normally (a hypothetical manual workaround is sketched at the end of this report):

```
$ oc get hpa pytest -o yaml | grep scaleTargetRef -A 3
scaleTargetRef:
  apiVersion: extensions/v1beta1
  kind: DeploymentConfig
  name: pytest

$ oc get hpa pytest
DeploymentConfig/pytest   <unknown>/18%   1   4   0   1m
```

The events page shows a warning:

```
pytest   Horizontal Pod Autoscaler   Warning   Failed Get Scale   no matches for kind "DeploymentConfig" in group "extensions"
```

That's not a bug with this code; that's a bug with the dashboard. Please track that against the dashboard, and verify this bug if migrate works.

Per Comment 20 and Comment 21, the migrate works well, so this bug will be verified, and the issue in Comment 22 will be moved to a separate bug.
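As a footnote on the console-created HPA above (`pytest`, pointing at `extensions/v1beta1.DeploymentConfig`): per the mapping rules listed earlier, a completely incorrect group-version should be corrected to the aggregated group. The sketch below is a hypothetical manual workaround, not the fix tracked by this bug (the real fixes are the migrate command and the web console change); the HPA name `pytest` is taken from the output above.

```sh
# Hypothetical manual workaround: point the HPA's scaleTargetRef at the
# DeploymentConfig's real group/version so the controller can resolve the
# scale subresource again. Re-running `oc adm migrate legacy-hpa --confirm`
# should achieve the same result for this "completely incorrect" case.
oc patch hpa pytest --type=merge \
  -p '{"spec":{"scaleTargetRef":{"apiVersion":"apps.openshift.io/v1"}}}'

# Expected reference after the patch:
#   scaleTargetRef:
#     apiVersion: apps.openshift.io/v1
#     kind: DeploymentConfig
#     name: pytest
```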