+++ This bug was initially created as a clone of Bug #1929853 +++

Description of problem:

Raised during a lab this morning and reproducible: I'm a user with a gluster-backed OCP 3.x source and a ceph-backed OCP 4.x cluster. If I migrate the "file-uploader" demo application from the source to the target and choose ceph as my target storage class, the PVC that is ultimately created on the target is still configured with gluster, so it breaks.

File uploader app: https://github.com/konveyor/mig-demo-apps/tree/master/apps/file-uploader
Example MigPlan: https://gist.github.com/f9524498956a61549f1ff085e718a491
Example resulting PVC: https://gist.github.com/bee208cd8a552e94718835695d445d66

Version-Release number of selected component (if applicable):

1.4.0

How reproducible:

Not able to say conclusively that it happens every time, but 2 students hit this and John was able to reproduce it immediately.

Steps to Reproduce:
1. Start with a gluster-backed OCP 3.x source and a ceph-backed OCP 4.x cluster. Run the file-uploader app linked above on the OCP 3 side.
2. Create a MigPlan and choose ceph as the target storage class.
3. Migrate. The result is a PVC on the OCP 4 side that uses "gluster" as its storage class, which is not available on that cluster.

--- Additional comment from Erik Nelson on 2021-02-18 14:24:11 UTC ---

Confirmed I'm able to reproduce this, and the problem is not isolated to a particular target storage class. I reproduced with glusterfs -> gp2. The target PVC came up with glusterfs as its storage class despite gp2 being correctly chosen as the destination storage class.

--- Additional comment from Erik Nelson on 2021-02-19 17:18:17 UTC ---

Cherry-picked to 1.4.1 for the initial release; the fix is also applied to 1.4.2.
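The failure mode reported above can be sketched in a few lines. This is an illustrative model only, not the actual mig-controller code: the function name `build_target_pvc`, the dict shapes, and the `apply_selection` flag are all hypothetical. It shows how the source PVC's storage class leaks through to the target when the MigPlan's per-PV storage class selection is not applied while building the destination PVC.

```python
# Hypothetical sketch of the reported bug -- NOT the real mig-controller logic.
def build_target_pvc(source_pvc, storage_class_map, apply_selection=True):
    """Build the destination PVC spec from the source PVC.

    storage_class_map maps source PVC name -> the target storage class the
    user chose in the MigPlan. With apply_selection=False we get the buggy
    behavior reported here: the gluster class leaks through unchanged.
    """
    pvc = dict(source_pvc)  # start from a shallow copy of the source spec
    if apply_selection and source_pvc["name"] in storage_class_map:
        pvc["storageClassName"] = storage_class_map[source_pvc["name"]]
    return pvc

# Source PVC backed by gluster; the user picked gp2 in the MigPlan.
source = {"name": "uploads", "storageClassName": "glusterfs-storage"}
selection = {"uploads": "gp2"}

buggy = build_target_pvc(source, selection, apply_selection=False)
fixed = build_target_pvc(source, selection, apply_selection=True)

print(buggy["storageClassName"])  # glusterfs-storage (breaks on the OCP 4 side)
print(fixed["storageClassName"])  # gp2
```

In the buggy case the target cluster has no "glusterfs-storage" class, so the PVC can never bind, which matches the symptom in the report.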
Verified on 3.11 -> 4.7 with glusterfs -> gp2 and glusterfs -> ocs4.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Migration Toolkit for Containers (MTC) 1.4.1 Operator metadata image advisory), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:0605