Bug 2072311

Summary: HPAs of DeploymentConfigs are not updated when migrating from OpenShift 3.x to OpenShift 4.x
Product: Migration Toolkit for Containers
Reporter: Adriano Machado <admachad>
Component: Velero
Assignee: Pranav Gaikwad <pgaikwad>
Status: CLOSED ERRATA
QA Contact: Prasad Joshi <prajoshi>
Severity: medium
Priority: medium
Version: 1.6.2
CC: admachad, dwalsh, ernelson, prajoshi, rjohnson
Target Milestone: ---
Flags: pgaikwad: needinfo-
Target Release: 1.6.5
Hardware: x86_64
OS: Linux
Last Closed: 2022-05-31 09:49:05 UTC
Type: Bug
Bug Blocks: 2074675

Description Adriano Machado 2022-04-06 00:28:01 UTC
Description of problem:
When migrating a project from OpenShift 3.x to OpenShift 4.x, the spec.scaleTargetRef of a HorizontalPodAutoscaler that targets a DeploymentConfig is not updated to the apps.openshift.io/v1 API group on the destination cluster.

Version-Release number of selected component (if applicable): MTC 1.5.2


How reproducible:
Always, when migrating HPAs from OpenShift 3.x to 4.x


Steps to Reproduce:
1. Migrate a project from an OpenShift 3.x cluster containing a DeploymentConfig and a HorizontalPodAutoscaler that scales that DeploymentConfig (a minimal setup sketch follows this list).
2. On the destination 4.x cluster, check the apiVersion/kind of spec.scaleTargetRef in the migrated HPA; it does not use the updated GVK of DeploymentConfig on OpenShift 4.x.
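
A minimal setup sketch for step 1, assuming a DeploymentConfig named mysql in the ocp-mysql namespace (the same objects used in the verification in comment 7); run on the 3.x source cluster:

$ oc autoscale dc/mysql -n ocp-mysql --min=1 --max=7 --cpu-percent=75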

Actual results:
- hpa.spec.scaleTargetRef.apiVersion is "v1"

Expected results:
- hpa.spec.scaleTargetRef.apiVersion should be "apps.openshift.io/v1"
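
The value can be read directly on the destination cluster with a jsonpath query; a quick check, assuming the resource names from comment 7:

$ oc get hpa mysql -n ocp-mysql -o jsonpath='{.spec.scaleTargetRef.apiVersion}'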

Additional info:

Comment 1 Erik Nelson 2022-04-12 18:54:15 UTC
Unsure exactly where the breakdown is here. It could be something lost in translation during the Velero backup/restore, or a controller could potentially be reconciling the value.

Does this break workloads?

Comment 2 Pranav Gaikwad 2022-04-14 16:46:49 UTC
Erik, 

It does break workloads, as the HPAs in the target cluster will attempt to find the DeploymentConfig resource under the core API group. Removing NEEDINFO
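
As an illustrative manual workaround on an affected target cluster, the scaleTargetRef can be patched in place; a sketch, assuming the mysql HPA from the verification in comment 7 (adjust the namespace to the migrated one):

$ oc patch hpa mysql -n ocp-mysql --type=merge \
    -p '{"spec":{"scaleTargetRef":{"apiVersion":"apps.openshift.io/v1","kind":"DeploymentConfig"}}}'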

Comment 7 Prasad Joshi 2022-05-11 17:23:31 UTC
Verified with MTC 1.6.5

image: registry.redhat.io/rhmtc/openshift-migration-controller-rhel8@sha256:b8d3e08c0e74bf88348e7d46f32abd24747a3fe9c66e7890c48b1d366fd61693


HPA resource(source cluster)

$ oc get hpa -n ocp-mysql -o yaml

 spec:
    maxReplicas: 7
    minReplicas: 1
    scaleTargetRef:
      apiVersion: v1
      kind: DeploymentConfig
      name: mysql
    targetCPUUtilizationPercentage: 75

HPA resource after performing migration(target cluster) 

$ oc get hpa -n oc-mysql -o yaml
  spec:
    maxReplicas: 7
    metrics:
    - resource:
        name: cpu
        target:
          averageUtilization: 80
          type: Utilization
      type: Resource
    minReplicas: 1
    scaleTargetRef:
      apiVersion: apps.openshift.io/v1
      kind: DeploymentConfig
      name: mysql

hpa.spec.scaleTargetRef.apiVersion is "apps.openshift.io/v1". I see the expected behaviour.
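
As an additional optional check that the controller can now resolve the scale target, the HPA conditions can be inspected; the AbleToScale and ScalingActive conditions should report True once the target is resolvable:

$ oc describe hpa mysql -n oc-mysql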

Moving this to verified status.

Comment 12 errata-xmlrpc 2022-05-31 09:49:05 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Migration Toolkit for Containers (MTC) 1.6.5 security and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:4814