Bug 1929853 - Running an indirect migration with a storageclass conversion from gluster->ceph will still provision gluster on the target
Status: CLOSED ERRATA
Product: Migration Toolkit for Containers
Classification: Red Hat
Component: General
Version: 1.4.0
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: 1.4.1
Assignee: Scott Seago
QA Contact: Xin jiang
Docs Contact: Avital Pinnick
Blocks: 1930885
 
Reported: 2021-02-17 19:07 UTC by Erik Nelson
Modified: 2021-11-17 15:32 UTC
CC: 5 users

Cloned As: 1930885
Last Closed: 2021-02-23 14:29:15 UTC




Links:
- GitHub: konveyor/mig-controller pull 963 — "Bug 1929853: Don't exit annotatePVs prematurely" (open; last updated 2021-02-18 20:34:41 UTC)
- Red Hat Product Errata: RHBA-2021:0604 (2021-02-23 14:29:21 UTC)

Description Erik Nelson 2021-02-17 19:07:48 UTC
Description of problem:
Raised during a lab this morning and reproducible: with a gluster-backed OCP 3.x source cluster and a ceph-backed OCP 4.x target cluster, migrating the "file-uploader" demo application and choosing ceph as the target storage class still produces a PVC on the target configured with gluster, so the application breaks.

File uploader app: https://github.com/konveyor/mig-demo-apps/tree/master/apps/file-uploader

Example MigPlan: https://gist.github.com/f9524498956a61549f1ff085e718a491
Example resulting PVC: https://gist.github.com/bee208cd8a552e94718835695d445d66
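
For reference, the storage class conversion is expressed per PV in the MigPlan under spec.persistentVolumes, where selection.storageClass names the class to use on the target. A minimal sketch of the relevant shape (names, namespaces, and class names here are hypothetical illustrations, not copied from the linked gist):

```yaml
apiVersion: migration.openshift.io/v1alpha1
kind: MigPlan
metadata:
  name: file-uploader-plan          # hypothetical plan name
  namespace: openshift-migration
spec:
  namespaces:
    - file-uploader                 # hypothetical app namespace
  persistentVolumes:
    - name: pvc-1234                # hypothetical PV name
      pvc:
        name: uploads               # hypothetical source PVC
        namespace: file-uploader
      storageClass: glusterfs-storage   # class on the source cluster
      selection:
        action: copy
        storageClass: ceph-rbd      # class requested for the target cluster
```

The bug described here is that, despite selection.storageClass being set, the PVC created on the target still carries the source class.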

Version-Release number of selected component (if applicable):
1.4.0

How reproducible:
Not able to say conclusively that it happens every time, but two students hit this and John was able to reproduce it immediately.

Steps to Reproduce:
1. Start with a gluster-backed OCP 3.x source cluster and a ceph-backed OCP 4.x target cluster. Deploy the file-uploader app (linked above) on the OCP 3 side.
2. Create a MigPlan and choose ceph as the target storage class.
3. Run the migration. The resulting PVC on the OCP 4 side uses "gluster" as its storage class, which is not available on that cluster.
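
The symptom in step 3 is visible directly in the migrated PVC's spec. A hedged sketch of what is produced versus what was expected (field values are illustrative, not copied from the example gists):

```yaml
# Actual migrated PVC on the OCP 4 target (broken):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: uploads                         # hypothetical PVC name
  namespace: file-uploader              # hypothetical namespace
spec:
  storageClassName: glusterfs-storage   # source class carried over; no such class exists on OCP 4
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
# Expected: storageClassName: ceph-rbd — the class selected in the MigPlan
```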

Comment 1 Erik Nelson 2021-02-18 14:24:11 UTC
Confirmed I'm able to reproduce this, and the problem is not isolated to a particular target storage class: I reproduced it with glusterfs -> gp2. The target PVC came up with glusterfs as its storage class despite gp2 being correctly chosen as the destination storage class.

Comment 2 Erik Nelson 2021-02-19 17:18:17 UTC
Cherry-picked to 1.4.1 for the initial release; the fix is also applied to 1.4.2.

Comment 8 errata-xmlrpc 2021-02-23 14:29:15 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Migration Toolkit for Containers (MTC) tool image release advisory 1.4.1), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:0604

