Created attachment 1880920 [details]
Importer pod log

Description of problem:
Warm migration from RHV fails if a RHEL 8 VM has 2 disks, both when the target storage is CEPH and when it is NFS. The incremental copy is created successfully, but the process fails after the cutover is started.

Version-Release number of selected component (if applicable):
OCP 4.11.0
CNV 4.11.0
MTV 2.3.1

How reproducible:
100%

Steps to Reproduce:
1. Create a new warm migration plan.
2. Select a VM with 2 disks.
3. Select CEPH or NFS as the target storage.

Important: the source VM must be running during the migration process, otherwise the bug does not reproduce.

Actual results:
1. The migration plan is stuck in progress.
2. The CNV UI shows that the VM has a DataVolume error.
3. The importer pod is in CrashLoopBackOff state; see the attached log.

Expected results:
The VM should be migrated as usual.

Additional info:
1. Warm migration of a VM with 2 disks from VMware PASSED for both NFS and Ceph targets.
2. Warm migration of a VM with 1 disk from RHV PASSED for both NFS and Ceph targets.
3. Cold migration of a VM with 2 disks from RHV PASSED for a Ceph target.
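For reference, below is a minimal diagnostic sketch of how the failing state (DataVolume error plus a crash-looping importer pod) could be inspected programmatically. It assumes the kubernetes Python client, a valid kubeconfig, and a hypothetical target namespace name; it is not part of the MTV/CNV tooling.

```python
# Diagnostic sketch (assumptions: kubernetes Python client installed,
# kubeconfig with access to the cluster, hypothetical namespace name).
from kubernetes import client, config

NAMESPACE = "openshift-mtv-target"  # hypothetical namespace

config.load_kube_config()

# DataVolumes are CDI custom resources; list them and print their phase.
custom = client.CustomObjectsApi()
dvs = custom.list_namespaced_custom_object(
    group="cdi.kubevirt.io", version="v1beta1",
    namespace=NAMESPACE, plural="datavolumes")
for dv in dvs.get("items", []):
    name = dv["metadata"]["name"]
    phase = dv.get("status", {}).get("phase")
    print(f"DataVolume {name}: phase={phase}")

# CDI importer pods are typically named "importer-<dv-name>"; dump the log
# of any importer pod that is in CrashLoopBackOff.
core = client.CoreV1Api()
for pod in core.list_namespaced_pod(NAMESPACE).items:
    if not pod.metadata.name.startswith("importer-"):
        continue
    statuses = pod.status.container_statuses or []
    waiting = [s.state.waiting.reason for s in statuses
               if s.state.waiting is not None]
    if "CrashLoopBackOff" in waiting:
        print(core.read_namespaced_pod_log(pod.metadata.name, NAMESPACE))
```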
Matthew Arnold: "This bug was always there; it's just unusual to hit. It happens when a snapshot's Actual Size is bigger than its Virtual Size. Workaround: expanding the source disk should work, although it is an annoying workaround. As long as the Virtual Size is bigger than the Actual Size, you should not hit this bug. It might also help to create the second disk starting at something bigger than 1 GB."
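Below is a minimal sketch of how one might check for disks in that state on the RHV side before planning the migration, using the ovirt-engine-sdk4 Python SDK. The connection URL and credentials are placeholders, and this check is an assumption about how to apply the comment above, not part of the MTV workflow.

```python
# Sketch for spotting disks whose Actual Size exceeds their Virtual Size
# (assumptions: ovirt-engine-sdk4 installed; URL/credentials are placeholders).
import ovirtsdk4 as sdk

connection = sdk.Connection(
    url="https://rhv.example.com/ovirt-engine/api",  # placeholder
    username="admin@internal",                       # placeholder
    password="password",                             # placeholder
    insecure=True,
)
try:
    disks_service = connection.system_service().disks_service()
    for disk in disks_service.list():
        # Flag disks where the allocated (Actual) size is larger than the
        # provisioned (Virtual) size, which is the condition described above.
        if disk.actual_size and disk.provisioned_size and \
                disk.actual_size > disk.provisioned_size:
            print(f"{disk.name}: actual={disk.actual_size} "
                  f"provisioned={disk.provisioned_size}")
finally:
    connection.close()
```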
Verified this bug on MTV 2.3.2-7/iib:261342. The VM migration was successful and the VM looks good after the migration completed.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (MTV 2.3.2 Images), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2022:5679