Description of problem:
App migration cannot migrate cronjobs using internal images.

Version-Release number of selected component (if applicable):

OCP4:
$ oc get clusterversion
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.1.0     True        False         32m     Cluster version is 4.1.0

OCP3:
$ oc version
oc v3.11.126
kubernetes v1.11.0+d4cacc0
features: Basic-Auth GSSAPI Kerberos SPNEGO
Server https://XXXXXXXXX
openshift v3.11.104
kubernetes v1.11.0+d4cacc0

Controller:
    image: quay.io/ocpmigrate/mig-controller:stable
    imageID: quay.io/ocpmigrate/mig-controller@sha256:7ec48a557240f1d2fa6ee6cd62234b0e75f178eca2a0cc5b95124e01bcd2c114

Velero:
    image: quay.io/ocpmigrate/velero:stable
    imageID: quay.io/ocpmigrate/velero@sha256:957725dec5f0fb6a46dee78bd49de9ec4ab66903eabb4561b62ad8f4ad9e6f05
    image: quay.io/ocpmigrate/migration-plugin:stable
    imageID: quay.io/ocpmigrate/migration-plugin@sha256:b4493d826260eb1e3e02ba935aaedfd5310fefefb461ca7dcd9a5d55d4aa8f35

How reproducible:
Always

Steps to Reproduce:
1. oc new-project cronjob-test
2. oc import-image intalpine:int --from=docker.io/alpine:latest --confirm
3. Create the cronjob resource using this internal image:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: testcron
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: cronpod
            image: docker-registry.default.svc:5000/cronjob-test/intalpine:int
            args:
            - /bin/sh
            - -c
            - echo "Hello!"
          restartPolicy: OnFailure

4. Migrate the cronjob-test project.

Actual results:
Pods scheduled by the migrated cronjob cannot be created because the image cannot be found when pulling.

Expected results:
The pods created by the migrated cronjob should run without any problem.

Additional info:
This has been fixed. The relevant PRs are:
https://github.com/fusor/openshift-migration-plugin/pull/21
https://github.com/fusor/openshift-velero-plugin/pull/4
Verified in:

Controller:
    image: quay.io/ocpmigrate/mig-controller:latest
    imageID: quay.io/ocpmigrate/mig-controller@sha256:259b08d197940932c616dd45f7cfd9799aca6823e83a510f85c83c0c5368496c

Velero:
    image: quay.io/ocpmigrate/velero:latest
    imageID: quay.io/ocpmigrate/velero@sha256:33d0e627aea00d0896a25d0acae6d4aa7deaaf86ddd28c29f8a6020dc16a97fc
    image: quay.io/ocpmigrate/migration-plugin:latest
    imageID: quay.io/ocpmigrate/migration-plugin@sha256:68f0791ce3d51e16e9759465064067d90daba396339ad83aa7aa6eba5a3bd4cf

OCP4:
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.2.0-0.nightly-2019-09-08-232045   True        False         6h41m   Cluster version is 4.2.0-0.nightly-2019-09-08-232045

OCP3:
$ oc version
oc v3.9.97
kubernetes v1.9.1+a0ce1bc657

Cron jobs are now suspended when quiesced:

$ oc get cronjob
NAME        SCHEDULE      SUSPEND   ACTIVE    LAST SCHEDULE   AGE
hellocron   */1 * * * *   True      0         1m              7m
Previous comment is wrong. Please ignore it.

Verified in:

Controller:
    image: quay.io/ocpmigrate/mig-controller:latest
    imageID: quay.io/ocpmigrate/mig-controller@sha256:259b08d197940932c616dd45f7cfd9799aca6823e83a510f85c83c0c5368496c

Velero:
    image: quay.io/ocpmigrate/velero:latest
    imageID: quay.io/ocpmigrate/velero@sha256:33d0e627aea00d0896a25d0acae6d4aa7deaaf86ddd28c29f8a6020dc16a97fc
    image: quay.io/ocpmigrate/migration-plugin:latest
    imageID: quay.io/ocpmigrate/migration-plugin@sha256:68f0791ce3d51e16e9759465064067d90daba396339ad83aa7aa6eba5a3bd4cf

OCP4:
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.2.0-0.nightly-2019-09-08-232045   True        False         6h41m   Cluster version is 4.2.0-0.nightly-2019-09-08-232045

OCP3:
oc v3.11.144
kubernetes v1.11.0+d4cacc0

Cronjobs are able to manage internal images now:

OCP3:
$ oc describe cronjob internal-img | grep Image
  Image: docker-registry.default.svc:5000/cronjob-test/intalpine:int

OCP4 (after migration):
$ oc describe cronjob internal-img | grep Image
  Image: image-registry.openshift-image-registry.svc:5000/cronjob-test/intalpine:int
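The verification above shows the internal-registry host being rewritten from the OCP3 form to the OCP4 form while the namespace/name:tag part is preserved. A minimal sketch of that rewrite follows; this is a hypothetical illustration only, not the actual plugin code, and the function name and registry constants are assumptions based on the hostnames seen in the output above.

```python
# Hypothetical sketch of the internal-registry rewrite observed above.
# Only the registry host changes; the namespace/name:tag suffix is kept.

OCP3_REGISTRY = "docker-registry.default.svc:5000"
OCP4_REGISTRY = "image-registry.openshift-image-registry.svc:5000"

def rewrite_image(image: str) -> str:
    """Swap the OCP3 internal-registry host for the OCP4 one."""
    if image.startswith(OCP3_REGISTRY + "/"):
        return OCP4_REGISTRY + image[len(OCP3_REGISTRY):]
    # External images (e.g. docker.io/alpine:latest) are left untouched.
    return image

print(rewrite_image("docker-registry.default.svc:5000/cronjob-test/intalpine:int"))
# image-registry.openshift-image-registry.svc:5000/cronjob-test/intalpine:int
```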
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:2922