Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 2064336

Summary: Get logs for a VM in a cold/warm migration plan from RHV doesn't collect the VM conversion log
Product: Migration Toolkit for Virtualization
Reporter: Ilanit Stein <istein>
Component: Must-Gather
Assignee: Marek Aufart <maufart>
Status: CLOSED ERRATA
QA Contact: Maayan Hadasi <mguetta>
Severity: high
Docs Contact: Richard Hoch <rhoch>
Priority: unspecified
Version: 2.3.0
CC: amastbau, fbladilo, jortel, maufart, mguetta, slucidi
Target Milestone: ---
Flags: istein: needinfo+
Target Release: 2.3.2
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2022-07-21 13:48:39 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Ilanit Stein 2022-03-15 15:12:14 UTC
Description of problem:
For a cold migration plan of 20 VMs, downloading logs for a VM that completed migration, and whose conversion log exists with full content,
yields a collected conversion log that contains only:

<domain type="kvm"><name>auto-rhv-red-iscsi-migration-50gb-70usage-vm-18</name><memory>2147483648</memory><os><type>hvm</type><boot dev="hd"></boot></os><cpu><topology sockets="1" cores="1"></topology></cpu><devices><disk type="block" device="disk"><driver name="qemu" type="raw"></driver><source dev="/dev/block0"></source><target dev="hda" bus="virtio"></target></disk></devices></domain>virt-v2v: virt-v2v 1.42.0rhel=8,release=16.module+el8.5.0+13900+a08c0464 (x86_64)
<domain type="kvm"><name>auto-rhv-red-iscsi-migration-50gb-70usage-vm-18</name><memory>2147483648</memory><os><type>hvm</type><boot dev="hd"></boot></os><cpu><topology sockets="1" cores="1"></topology></cpu><devices><disk type="block" device="disk"><driver name="qemu" type="raw"></driver><source dev="/dev/block0"></source><target dev="hda" bus="virtio"></target></disk></devices></domain>
    source name: auto-rhv-red-iscsi-migration-50gb-70usage-vm-18
/var/tmp/auto-rhv-red-iscsi-migration-50gb-70usage-vm-18-sda.qcow2 

Additional info:
The same result occurs when running Get logs for a VM whose conversion pod no longer exists.

This was run on a Bare metal cluster (cloud10).

Comment 3 Marek Aufart 2022-03-15 15:23:02 UTC
Hey Ilanit, would it be possible to get the forklift-must-gather-api pod logs? (and/or access to the env)

Comment 6 Marek Aufart 2022-06-15 14:17:13 UTC
I cannot reproduce this on my env, nor on the provided mig03 environment. I re-checked the source code: the conversion pod lookup uses the vmID (the ID within the migration Plan), not the VM name, so automated VM naming in the scale lab should not affect it. Would it be possible to get access to the env at some point when scale testing is up in the future? Thanks

Comment 7 Marek Aufart 2022-06-22 11:12:43 UTC
Hi @istein, thanks for providing the scale env; I spent some time there over the last few days. I was able to see errors on VMs whose pods were already deleted (an error on getting logs was displayed in the UI), but it worked well on existing VMs (not yet deleted). A case with a successful must-gather get-logs execution but only limited conversion pod log content was _not reproduced_, unfortunately. (I used the default must-gather image from the MTV deployment - registry.redhat.io/migration-toolkit-virtualization/mtv-must-gather-rhel8@sha256:6a55694d8588f554f7320525475ea00b1fe62fbac450b9c3b720ac9596131340)

Comment 8 Maayan Hadasi 2022-07-06 09:36:18 UTC
The issue is reproducible with MTV 2.3.2-7 / iib:261342
Please see attached Plan logs folder: mtv_2.3.2-7_must-gather-plan mtv-api-tests-22-06-07-08-47-05-24a-plan.tar.gz

Comment 10 Maayan Hadasi 2022-07-06 12:23:35 UTC
Hi,
Updating that the issue is reproducible with a single VM migration from RHV on PSI/BM, when using *openshift-mtv* as the target namespace

Comment 11 Sam Lucidi 2022-07-07 19:32:46 UTC
It appears that this issue occurs when the name of the virt-v2v pod is too long to contain the entire plan name and VM id, and is truncated. The targeted collection script finds the virt-v2v pod by selecting on the vmID label, but then additionally uses grep to search for the VM id in the pod name. If the pod name is too long, the grep finds nothing and the pod log is not collected. In the event that the virt-v2v pod is in the operator namespace, the general gather script will collect a filtered portion of the virt-v2v pod log as part of its attempt to collect all of the operator pod logs. This results in it looking like the virt-v2v log was collected, but the log is only a fragment.

The grep for the VM id in the virt-v2v pod name seems redundant with the selector on the vmID label, and removing it appears to solve the problem.

https://github.com/konveyor/forklift-must-gather/pull/52
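The failure mode described in this comment can be sketched in shell. All names below are hypothetical (the real plan names and VM ids differ), and the 63-character cutoff is used for illustration of name truncation; the point is that once the generated pod name is cut short, a grep for the full VM id in the name finds nothing even though the label selector already matched the right pod:

```shell
#!/bin/bash
# Hypothetical plan name and VM id, in the style of the scale-lab names above.
plan_name="auto-rhv-red-iscsi-migration-50gb-70usage-plan"
vm_id="5c29dc2a-1b50-4ae8-8b7f-1c8e2b6d9f01"

# Untruncated candidate pod name: plan name + VM id (83 characters here).
full_name="${plan_name}-${vm_id}"

# Truncate to 63 characters, as a length-limited name segment would be.
pod_name="${full_name:0:63}"
echo "pod name: ${pod_name}"

# The old targeted-collection logic additionally grepped the pod name
# for the VM id after selecting on the vmID label. Once the name is
# truncated, the grep fails and the conversion log is silently skipped:
if echo "${pod_name}" | grep -q "${vm_id}"; then
  echo "grep match: yes - log collected"
else
  echo "grep match: no - log skipped"
fi
```

This illustrates why dropping the redundant grep (as the linked PR does) fixes the problem: the vmID label selector alone already identifies the pod, independent of how the pod name was truncated.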

Comment 12 Amos Mastbaum 2022-07-17 14:43:52 UTC
Ran a plan with a maximum-length name targeting the operator namespace;
virt-v2v logs are included as expected.

Comment 13 Amos Mastbaum 2022-07-17 15:03:37 UTC
and contain the expected content.

Comment 14 Amos Mastbaum 2022-07-17 15:04:25 UTC
Verified on: 4.10.22 / STAGE (4.10.3-22) / 2.3.2-11

Comment 17 errata-xmlrpc 2022-07-21 13:48:39 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (MTV 2.3.2 Images), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:5679