Description of problem:

I created a LimitRange resource on the source project with CPU values only. I was expecting Rsync to use the default memory values instead of 0.

Version-Release number of selected component (if applicable):

Source cluster: 4.6 GCP
Target cluster: 4.9 GCP
MTC 1.7.0
OADP 0.5.5

How reproducible:

Always

Steps to Reproduce:

1. Create a new project in the source cluster.

$ oc new-project test-dvm-limitrange

2. Create a LimitRange object:

apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-min-max-lr
  namespace: test-dvm-limitrange
spec:
  limits:
  - max:
      cpu: 500m
    min:
      cpu: 100m
    type: Container

3. Deploy an application with a PVC in the source cluster:

$ oc new-app django-psql-persistent

4. Create a migplan from the UI.
5. Execute cutover.
6. Check the Rsync pod's resource limits.

Actual results:

The Rsync pod is using memory requests and limits of 0.

Expected results:

The Rsync pod should use the default values when the LimitRange values are nil.

Additional info:

Rsync pod yaml added below.

$ oc get pods rsync-h7qmv -o yaml
    image: registry.redhat.io/rhmtc/openshift-migration-rsync-transfer-rhel8@sha256:de52c65c9022c3e310c88f3bb34f306427b5eb2fcde7c0f5b7be2abf6692d57e
    imagePullPolicy: IfNotPresent
    name: rsync
    resources:
      limits:
        cpu: 500m
        memory: "0"
      requests:
        cpu: 100m
        memory: "0"
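The expected behavior can be sketched as: any resource dimension the LimitRange leaves unset (memory here) should fall back to a built-in default rather than being emitted as 0. A minimal sketch in Python, where `resolve_resources` and the values in `DEFAULTS` are illustrative assumptions, not MTC's actual implementation or defaults:

```python
# Illustrative sketch: keys missing from a partial LimitRange fall back
# to built-in defaults instead of 0. DEFAULTS holds assumed example
# values, not MTC's real defaults.

DEFAULTS = {
    "limits":   {"cpu": "1", "memory": "1Gi"},
    "requests": {"cpu": "400m", "memory": "1Gi"},
}

def resolve_resources(limit_range):
    """Merge a possibly partial LimitRange over the defaults.

    limit_range maps "limits"/"requests" to dicts that may omit
    "cpu" or "memory"; omitted keys keep the default value.
    """
    resolved = {}
    for section, defaults in DEFAULTS.items():
        overrides = limit_range.get(section, {})
        resolved[section] = {
            resource: overrides.get(resource, default)
            for resource, default in defaults.items()
        }
    return resolved

# A LimitRange that sets only CPU, as in the reproducer above:
cpu_only = {"limits": {"cpu": "500m"}, "requests": {"cpu": "100m"}}
print(resolve_resources(cpu_only))
```

With this fallback, the CPU-only LimitRange above yields memory limits/requests equal to the defaults rather than "0", which is the behavior the bug report expects.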
Verified with MTC 1.7.0, metadata_nvr: openshift-migration-operator-metadata-container-v1.7.0-25.

The Rsync pod now uses default values when the LimitRange values are nil.

Created a LimitRange resource with nil memory values:

$ oc get pods rsync-jrkrd -o yaml
    resources:
      limits:
        cpu: 500m
        memory: 1Gi
      requests:
        cpu: 100m
        memory: 1Gi

Created a LimitRange resource with nil CPU values:

$ oc get pods -o yaml rsync-kcvn6
      limits:
        cpu: "1"
        memory: 1Gi
      requests:
        cpu: 100m
        memory: 1Gi

Moving this to verified status.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Migration Toolkit for Containers (MTC) 1.7.0 release advisory), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:1043