Bug 1822366 - cronjob does not work after migration because pvc is not copied
Summary: cronjob does not work after migration because pvc is not copied
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Migration Tooling
Version: 4.3.z
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 4.4.z
Assignee: Scott Seago
QA Contact: Xin jiang
URL:
Whiteboard:
Depends On: 1831252
Blocks:
 
Reported: 2020-04-08 20:30 UTC by jooho lee
Modified: 2023-12-15 17:39 UTC
CC List: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Cloned As: 1831252
Environment:
Last Closed: 2020-06-17 00:04:13 UTC
Target Upstream Version:
Embargoed:




Links:
Red Hat Product Errata RHEA-2020:2571 (last updated 2020-06-17 00:04:22 UTC)

Description jooho lee 2020-04-08 20:30:59 UTC
Description of problem:

The migration tool does not copy the PVC that the CronJob uses, so the migrated CronJob fails on the target cluster with these messages:
~~~
79s         Normal    SuccessfulCreate                 job/refresh-pq-1586368080                  Created pod: refresh-pq-1586368080-p2w8m
<unknown>   Warning   FailedScheduling                 pod/refresh-pq-1586368140-m45g6            persistentvolumeclaim "portal-downloads" not found
~~~
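
To confirm the root cause, one can check directly for the missing claim on the target cluster; a minimal sketch using oc (the namespace placeholder is an assumption, not taken from this report):
~~~
# Check whether the claim referenced by the CronJob exists on the target cluster
oc get pvc portal-downloads -n <namespace>

# List scheduling failures in the namespace
oc get events -n <namespace> --field-selector reason=FailedScheduling
~~~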

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. Create a sample CronJob on the source cluster (the referenced PVC must already exist; see the sketch after the manifest below)
2. Run a migration
3. Execute the CronJob on the target cluster

Sample CronJob:
~~~
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "30 3 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from a CronJob
            volumeMounts:
            - mountPath: /downloads
              name: downloads
          dnsPolicy: ClusterFirst
          restartPolicy: OnFailure
          schedulerName: default-scheduler
          securityContext: {}
          terminationGracePeriodSeconds: 30
          volumes:
          - name: downloads
            persistentVolumeClaim:
              claimName: portal-downloads
~~~
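
For step 1 above, the claim referenced by the CronJob must already exist on the source cluster. A minimal sketch of such a PVC (the access mode, size, and reliance on the default storage class are assumptions, not taken from this report):
~~~
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: portal-downloads
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
~~~
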
Actual results:
The CronJob always ends up in a suspended state on the target cluster; its pods cannot be scheduled because the PVC is missing.

Expected results:
The PVC should be migrated along with the CronJob, and the CronJob should run on the target cluster.

Additional info:

Comment 1 John Matthews 2020-04-08 21:52:58 UTC
Related to:  https://issues.redhat.com/browse/MIG-179

Comment 2 Scott Seago 2020-05-14 14:09:19 UTC
Fixed by https://github.com/konveyor/mig-controller/pull/485
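
One way to sanity-check the fix is to confirm that the claim shows up among the persistent volumes discovered for the migration plan; a sketch, assuming the default openshift-migration namespace and that discovered volumes appear under spec.persistentVolumes (the plan name and exact field layout may vary by CAM version):
~~~
oc get migplan <plan-name> -n openshift-migration -o yaml | grep -A 2 persistentVolumes
~~~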

Comment 6 Sergio 2020-06-09 11:01:57 UTC
Verified using CAM 1.2.2 stage:

~~~
    - name: MIG_CONTROLLER_REPO
      value: openshift-migration-controller-rhel8@sha256
    - name: MIG_CONTROLLER_TAG
      value: 3923f6000eaff8c5f02d778e1d7b93515a8bc23990d54f917c30a108f7a37b3a
    - name: MIG_UI_REPO
      value: openshift-migration-ui-rhel8@sha256
    - name: MIG_UI_TAG
      value: 6abfaea8ac04e3b5bbf9648a3479b420b4baec35201033471020c9cae1fe1e11
    - name: MIGRATION_REGISTRY_REPO
      value: openshift-migration-registry-rhel8@sha256
    - name: MIGRATION_REGISTRY_TAG
      value: ea6301a15277d448c8756881c7e2e712893ca8041c913476640f52da9e76cad9
    - name: VELERO_REPO
      value: openshift-migration-velero-rhel8@sha256
    - name: VELERO_TAG
      value: 1a33e327dd610f0eebaaeae5b3c9b4170ab5db572b01a170be35b9ce946c0281
    - name: VELERO_PLUGIN_REPO
      value: openshift-migration-plugin-rhel8@sha256
    - name: VELERO_PLUGIN_TAG
      value: 37d5167cbbeedcedaf6750d64ba992a75d3ae21f3d3df6c0c6eef6eb400dd076
~~~

The "portal-downloads" PVC and the sample CronJob were migrated properly, and the CronJob worked fine on the target cluster.

Comment 8 errata-xmlrpc 2020-06-17 00:04:13 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:2571

