Bug 1541908 - [RFE] Support v2v CPU topology
Summary: [RFE] Support v2v CPU topology
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libguestfs
Version: 7.4
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: low
Target Milestone: rc
Target Release: ---
Assignee: Richard W.M. Jones
QA Contact: Virtualization Bugs
Docs Contact: Jiri Herrmann
URL:
Whiteboard: V2V
Depends On: 1568148
Blocks:
 
Reported: 2018-02-05 07:39 UTC by Liran Rotenberg
Modified: 2018-10-30 07:47 UTC (History)
CC List: 7 users

Fixed In Version: libguestfs-1.38.2-5.el7
Doc Type: Release Note
Doc Text:
*virt-v2v* converts virtual machine CPU topology

With this update, the *virt-v2v* utility preserves the CPU topology of the converted virtual machines (VMs). This ensures that the VM CPU works the same way after the conversion as it did before the conversion, which avoids potential runtime problems.
Clone Of:
Environment:
Last Closed: 2018-10-30 07:45:24 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
scenario3-3.vmx (2.89 KB, text/plain)
2018-06-26 10:29 UTC, mxie@redhat.com


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2018:3021 0 None None None 2018-10-30 07:47:20 UTC

Description Liran Rotenberg 2018-02-05 07:39:28 UTC
Description of problem:
Support v2v CPU topology.
Currently, using v2v for a VMware VM with 8 vCPUs (4 sockets, 2 cores per socket) yields a VM with 8 vCPUs on 1 socket with 1 core per socket (8:1:1).


Version-Release number of selected component (if applicable):
libguestfs-1.36.10-4.el7.x86_64
kernel-3.10.0-830.el7.x86_64
virt-v2v-1.36.10-4.el7.x86_64
vdsm-4.20.17-1.el7ev.x86_64
ovirt-engine-4.2.1.5-0.1.el7.noarch

Steps to Reproduce:
1. Create a VM in VMWare with the above settings.
2. Use v2v to import the VM (using the RHV UI).

Actual results:
CPU topology isn't preserved; the resulting VM gets (8:1:1).

Expected results:
CPU topology matches the settings set in VMware.

Additional info:
Commit to refer:
https://github.com/libguestfs/libguestfs/commit/7f940c3e3a8de21c27e517b4ccde02fa7e7b287f

Comment 2 Richard W.M. Jones 2018-02-05 08:56:17 UTC
virt-v2v already supports CPU topology, see the commit you mentioned.
I guess you mean that your target (RHV 4.2?) isn't preserving the CPU
topology, and the reason for that is likely that RHV isn't reading the
metadata that we're already supplying.

Please run virt-v2v with the -v and -x options and collect the complete
logs, or else read this link and provide the necessary file:

http://libguestfs.org/virt-v2v.1.html#debugging-rhv-m-import-failures

Comment 4 Richard W.M. Jones 2018-02-06 15:44:25 UTC
So you're right, this feature needs to be backported to RHEL 7.  I'm
going to dev-ack this accordingly and ask product management to
take a look.

Comment 7 Richard W.M. Jones 2018-02-06 17:01:09 UTC
To be clear you get 8 vCPUs, but it doesn't preserve the topology.
In the description the topology was 4:2:1 but the resulting VM
was 8:1:1.

For the VMware to RHV case, if topology support was backported
to virt-v2v, then:

* Input from OVA we can only get cores per socket (not hyperthreads).
  Also CPU vendor and model cannot be read.

* Input from VMX can also only get cores per socket (not hyperthreads),
  not CPU vendor nor model.  It looks like a limitation in VMware.

* Input from vCenter over HTTPS: unclear.

* Output to RHV would create the required metadata
(<rasd:num_of_sockets> etc) for topology.  But the CPU vendor
and model cannot be passed to RHV.
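For illustration only, the mapping described above can be sketched as follows. This is a hypothetical Python helper, not the actual virt-v2v code (which is written in OCaml); only the `rasd:*` element names are taken from the OVF samples in this bug, and threads are assumed to be 1 since they cannot be read from OVA/VMX input.

```python
def rhv_cpu_metadata(vcpus: int, cores_per_socket: int) -> str:
    """Sketch: build the RHV OVF CPU-topology elements from the counts
    available in the source. Hyperthread info is not readable from
    OVA/VMX, so assume 1 thread per core."""
    sockets = vcpus // cores_per_socket
    return (
        f"<rasd:num_of_sockets>{sockets}</rasd:num_of_sockets>\n"
        f"<rasd:cpu_per_socket>{cores_per_socket}</rasd:cpu_per_socket>\n"
        f"<rasd:threads_per_cpu>1</rasd:threads_per_cpu>"
    )

# An 8-vCPU source with 2 cores per socket maps to 4 sockets:
print(rhv_cpu_metadata(8, 2))
```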

Comment 10 mxie@redhat.com 2018-05-11 10:38:05 UTC
Tested the bug with the builds below:
virt-v2v-1.38.1-1.el7.x86_64
libguestfs-1.38.1-1.el7.x86_64
libvirt-3.9.0-14.el7_5.5.x86_64
qemu-kvm-rhev-2.10.0-21.el7_5.2.x86_64


Steps:
Scenario 1: input guest from vCenter over HTTPS
1.1 Use virsh to check the CPU topology, but libvirt can only get the CPU count due to bug 1568148
# virsh -c vpx://root.73.141/data/10.73.196.89/?no_verify=1
Enter root's password for 10.73.73.141: 
Welcome to virsh, the virtualization interactive terminal.

Type:  'help' for help with commands
       'quit' to quit

virsh # dumpxml esx6.5-rhel7.5-x86_64
....
  <vcpu placement='static'>8</vcpu>
  <cputune>
    <shares>8000</shares>
  </cputune>
....

1.2 Use virt-v2v to convert the guest from vmware to rhv using -i libvirt -ic vpx 
# virt-v2v -i libvirt -ic vpx://root.199.71/data/10.73.196.89/?no_verify=1 esx6.5-rhel7.5-x86_64 --password-file /tmp/passwd -o rhv -os 10.66.144.40:/home/nfs_export -on esx6.5-rhel7.5-x86_64-vpx

1.3 After finishing the conversion, check the guest's OVF at the export domain and find that v2v can't parse the guest's CPU topology in the OVF correctly
# cat 5a526b19-3cb8-4e1a-8b48-f3d130183470/5a526b19-3cb8-4e1a-8b48-f3d130183470.ovf 
....
   <Item>
        <rasd:Caption>8 virtual cpu</rasd:Caption>
        <rasd:Description>Number of virtual CPU</rasd:Description>
        <rasd:InstanceId>1</rasd:InstanceId>
        <rasd:ResourceType>3</rasd:ResourceType>
        <rasd:num_of_sockets>1</rasd:num_of_sockets>
        <rasd:cpu_per_socket>8</rasd:cpu_per_socket>
      </Item>
....


Scenario 2: input guest from OVA
2.1 Export the esx6.5-rhel7.5-x86_64 guest as an OVA file and check the CPU info in the OVA; we can see the total CPU number and the cores per socket
# cat esx6.5-rhel7.5-x86_64/esx6_5-rhel7.5-x86_64.ovf
....
     <Item>
        <rasd:AllocationUnits>hertz * 10^6</rasd:AllocationUnits>
        <rasd:Description>Number of Virtual CPUs</rasd:Description>
        <rasd:ElementName>8 virtual CPU(s)</rasd:ElementName>
        <rasd:InstanceID>1</rasd:InstanceID>
        <rasd:ResourceType>3</rasd:ResourceType>
        <rasd:VirtualQuantity>8</rasd:VirtualQuantity>
        <vmw:CoresPerSocket ovf:required="false">2</vmw:CoresPerSocket>
      </Item>
....

2.2 Convert guest to rhv by virt-v2v
# virt-v2v -i ova esx6.5-rhel7.5-x86_64 -o rhv -os 10.66.144.40:/home/nfs_export -of qcow2 -on esx6.5-rhel7.5-x86_64-ova

2.3 After finishing the conversion, check the guest's OVF at the export domain; v2v can parse the guest's CPU topology in the OVF correctly
# cat 488c56f9-cd39-40aa-9799-e55486f604c5/488c56f9-cd39-40aa-9799-e55486f604c5.ovf 
....
 <Item>
        <rasd:Caption>8 virtual cpu</rasd:Caption>
        <rasd:Description>Number of virtual CPU</rasd:Description>
        <rasd:InstanceId>1</rasd:InstanceId>
        <rasd:ResourceType>3</rasd:ResourceType>
        <rasd:num_of_sockets>4</rasd:num_of_sockets>
        <rasd:cpu_per_socket>2</rasd:cpu_per_socket>
        <rasd:threads_per_cpu>1</rasd:threads_per_cpu>
      </Item>
...
2.4 Import the guest to the data domain; the guest's CPU shows 8 (4:2:1) in the general info, which is correct

2.5 Use virt-v2v to convert the guest to libvirt and check the CPU info in the guest XML after conversion; v2v can parse the guest's CPU topology in the guest XML correctly
# virsh dumpxml esx6.5-rhel7.5-x86_64-ova
....
   <cpu>
    <topology sockets='4' cores='2' threads='1'/>
  </cpu>
....

Scenario3: input guest from VMX
3.1 Check the guest CPU topology in the vmx file; we can find the total CPU number and the cores per socket
# cat esx6.5-rhel7.5-x86_64/esx6.5-rhel7.5-x86_64.vmx 
....
numvcpus = "8"
cpuid.coresPerSocket = "2"
...

3.2 Use virt-v2v to convert guest to rhv using -i vmx
# virt-v2v -i vmx esx6.5-rhel7.5-x86_64/esx6.5-rhel7.5-x86_64.vmx -o rhv -os 10.66.144.40:/home/nfs_export -of qcow2

3.3 After finishing the conversion, check the guest's OVF at the export domain; v2v can parse the guest's CPU topology in the OVF correctly
# cat 187a2a99-ea8f-49d9-9159-0bd99715292d/187a2a99-ea8f-49d9-9159-0bd99715292d.ovf 
....
     <Item>
        <rasd:Caption>8 virtual cpu</rasd:Caption>
        <rasd:Description>Number of virtual CPU</rasd:Description>
        <rasd:InstanceId>1</rasd:InstanceId>
        <rasd:ResourceType>3</rasd:ResourceType>
        <rasd:num_of_sockets>4</rasd:num_of_sockets>
        <rasd:cpu_per_socket>2</rasd:cpu_per_socket>
        <rasd:threads_per_cpu>1</rasd:threads_per_cpu>
      </Item>
...

3.4 Import the guest to the data domain; the guest's CPU shows 8 (4:2:1) in the general info, which is correct

3.5 Use virt-v2v to convert the guest to libvirt and check the CPU info in the guest XML after conversion; v2v can parse the guest's CPU topology in the XML correctly
# virsh dumpxml esx6.5-rhel7.5-x86_64
....
   <cpu>
    <topology sockets='4' cores='2' threads='1'/>
  </cpu>
....
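The VMX-based checks above all reduce to reading two keys, `numvcpus` and `cpuid.coresPerSocket`, and deriving the socket count. A minimal sketch of that derivation (a hypothetical parser for illustration, not the actual virt-v2v code):

```python
import re

def vmx_topology(vmx_text: str) -> tuple:
    """Return (sockets, cores, threads) from .vmx file content.
    cpuid.coresPerSocket defaults to 1 when absent; threads are not
    stored in the VMX, so assume 1 thread per core."""
    # VMX files are simple key = "value" lines.
    kv = dict(re.findall(r'([\w.]+)\s*=\s*"([^"]*)"', vmx_text))
    vcpus = int(kv.get("numvcpus", "1"))
    cores = int(kv.get("cpuid.coresPerSocket", "1"))
    return (vcpus // cores, cores, 1)

# The scenario-3 guest (numvcpus = "8", coresPerSocket = "2") yields 4:2:1:
print(vmx_topology('numvcpus = "8"\ncpuid.coresPerSocket = "2"'))  # (4, 2, 1)
```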


Hi Pino,
   According to the above test results, v2v can parse the guest's CPU topology when converting a guest from VMX and OVA, but still can't parse it correctly when converting a guest from VMware over HTTPS

Comment 11 Pino Toscano 2018-05-11 12:13:56 UTC
(In reply to mxie from comment #10)
>    According to above test result, v2v could parse guest's cpu topology when
> convert guest from vmx and ova, but still can't parse guest's cpu topology
> correctly when convert guest from vmware over HTTPS

For the conversion over https via vCenter, you will need libvirt with the fix for bug 1568148.

Comment 13 mxie@redhat.com 2018-06-26 07:04:49 UTC
Verified the bug with the builds below:
virt-v2v-1.38.2-5.el7.x86_64
libguestfs-1.38.2-5.el7.x86_64
libvirt-4.4.0-2.el7.x86_64
qemu-kvm-rhev-2.12.0-4.el7.x86_64

Steps:

Scenario 1: convert guest from vCenter over HTTPS

1.1 Original guest has multiple sockets and multiple cores in CPU topology
# virsh -c vpx://root.73.141/data/10.73.75.219/?no_verify=1
Enter root's password for 10.73.73.141:
....
virsh # dumpxml esx6.7-rhel7.5-x86_64
....
  <cpu>
    <topology sockets='4' cores='2' threads='1'/>
  </cpu>
....

1.1.1 Use virt-v2v to convert the guest from vmware to rhv using -i libvirt -ic vpx
# virt-v2v -i libvirt -ic vpx://root.73.141/data/10.73.75.219/?no_verify=1 esx6.7-rhel7.5-x86_64 --password-file /tmp/passwd -o rhv -os 10.66.144.40:/home/nfs_export -b ovirtmgmt

1.1.2 After finishing the conversion, check the guest's OVF at the export domain; v2v can parse the guest's CPU topology in the OVF correctly
# cat 9fd66219-b1ec-42f9-88d6-c545b53d53a4.ovf
....
    <Item>
        <rasd:Caption>8 virtual cpu</rasd:Caption>
        <rasd:Description>Number of virtual CPU</rasd:Description>
        <rasd:InstanceId>1</rasd:InstanceId>
        <rasd:ResourceType>3</rasd:ResourceType>
        <rasd:num_of_sockets>4</rasd:num_of_sockets>
        <rasd:cpu_per_socket>2</rasd:cpu_per_socket>
        <rasd:threads_per_cpu>1</rasd:threads_per_cpu>
      </Item>
....

1.1.3 Import the guest to the data domain; the guest's CPU shows 8 (4:2:1) in the general info, which is correct

*************************

1.2  Original guest has 1 socket and multiple cores in CPU topology
# virsh -c vpx://root.73.141/data/10.73.75.219/?no_verify=1
Enter root's password for 10.73.73.141:
....
virsh # dumpxml esx6.7-rhel7.5-x86_64
....
   <cpu>
    <topology sockets='1' cores='4' threads='1'/>
  </cpu>
....

1.2.1 Use virt-v2v to convert the guest from vmware to libvirt using -i libvirt -ic vpx
# virt-v2v -i libvirt -ic vpx://root.73.141/data/10.73.75.219/?no_verify=1 esx6.7-rhel7.5-x86_64 --password-file /tmp/passwd -on vpx-1-socket

1.2.2 After finishing the conversion, check the guest's XML; v2v can parse the guest's CPU topology correctly
# virsh dumpxml vpx-1-socket
....
  <cpu>
    <topology sockets='1' cores='4' threads='1'/>
  </cpu>
....
*************************

1.3  Original guest has multiple sockets and 1 core in CPU topology
1.3.1 libvirt can not get CPU topology from vmware guest due to bug 1590079

**************************

1.4  Original guest has an odd number of vCPUs (>1) in CPU topology
1.4.1 libvirt can not get CPU topology from vmware guest due to bug 1584091

___________________________________________________________________________

Scenario 2: convert guest from OVA

2.1 Original guest has multiple sockets and multiple cores in CPU topology
# cat esx6.5-rhel7.5-x86_64/esx6_5-rhel7.5-x86_64.ovf
....
     <Item>
        <rasd:AllocationUnits>hertz * 10^6</rasd:AllocationUnits>
        <rasd:Description>Number of Virtual CPUs</rasd:Description>
        <rasd:ElementName>8 virtual CPU(s)</rasd:ElementName>
        <rasd:InstanceID>1</rasd:InstanceID>
        <rasd:ResourceType>3</rasd:ResourceType>
        <rasd:VirtualQuantity>8</rasd:VirtualQuantity>
        <vmw:CoresPerSocket ovf:required="false">2</vmw:CoresPerSocket>
      </Item>
....
2.1.1 Convert guest to rhv by virt-v2v
# virt-v2v -i ova esx6.5-rhel7.5-x86_64 -o rhv -os 10.66.144.40:/home/nfs_export -of qcow2 -on esx6.5-rhel7.5-x86_64-ova

2.1.2 After finishing the conversion, check the guest's OVF at the export domain; v2v can parse the guest's CPU topology in the OVF correctly
# cat 5b3751a4-8962-4f57-b13b-a58e5845fa8b/5b3751a4-8962-4f57-b13b-a58e5845fa8b.ovf
....
     <Item>
        <rasd:Caption>8 virtual cpu</rasd:Caption>
        <rasd:Description>Number of virtual CPU</rasd:Description>
        <rasd:InstanceId>1</rasd:InstanceId>
        <rasd:ResourceType>3</rasd:ResourceType>
        <rasd:num_of_sockets>4</rasd:num_of_sockets>
        <rasd:cpu_per_socket>2</rasd:cpu_per_socket>
        <rasd:threads_per_cpu>1</rasd:threads_per_cpu>
      </Item>
...
2.1.3 Import the guest to the data domain; the guest's CPU shows 8 (4:2:1) in the general info, which is correct

********************************

2.2  Original guest has 1 socket and multiple cores in CPU topology
#cat esx6.5-rhel7.5-x86_64/esx6_5-rhel7.5-x86_64.ovf
....
    <Item>
        <rasd:AllocationUnits>hertz * 10^6</rasd:AllocationUnits>
        <rasd:Description>Number of Virtual CPUs</rasd:Description>
        <rasd:ElementName>9 virtual CPU(s)</rasd:ElementName>
        <rasd:InstanceID>1</rasd:InstanceID>
        <rasd:ResourceType>3</rasd:ResourceType>
        <rasd:VirtualQuantity>9</rasd:VirtualQuantity>
        <vmw:CoresPerSocket ovf:required="false">9</vmw:CoresPerSocket>
      </Item>
...
2.2.1 Use virt-v2v to convert the OVA; the conversion finishes without errors or warnings
# virt-v2v -i ova esx6.5-rhel7.5-x86_64 -on ova-1-socket -of qcow2


2.2.2 Check the guest's XML after converting; v2v can parse the guest's CPU topology in the XML correctly
# virsh dumpxml ova-1-socket
....
  <cpu>
    <topology sockets='1' cores='9' threads='1'/>
  </cpu>
....

*********************************

2.3  Original guest has multiple sockets and 1 core in CPU topology
#cat esx6.5-rhel7.5-x86_64/esx6_5-rhel7.5-x86_64.ovf
....
     <Item>
        <rasd:AllocationUnits>hertz * 10^6</rasd:AllocationUnits>
        <rasd:Description>Number of Virtual CPUs</rasd:Description>
        <rasd:ElementName>9 virtual CPU(s)</rasd:ElementName>
        <rasd:InstanceID>1</rasd:InstanceID>
        <rasd:ResourceType>3</rasd:ResourceType>
        <rasd:VirtualQuantity>9</rasd:VirtualQuantity>
        <vmw:CoresPerSocket ovf:required="false">1</vmw:CoresPerSocket>
      </Item>
....
2.3.1 Use virt-v2v to convert the OVA; the conversion finishes without errors or warnings
# virt-v2v -i ova esx6.5-rhel7.5-x86_64 -on ova-1-core -of qcow2

2.3.2 Check the guest's XML after converting; v2v can parse the guest's CPU topology in the XML correctly
# virsh dumpxml ova-1-core
....
  <cpu>
    <topology sockets='9' cores='1' threads='1'/>
  </cpu>
....

**********************************

2.4  Original guest has an odd number of vCPUs (>1) in CPU topology
#cat esx6.5-rhel7.5-x86_64/esx6_5-rhel7.5-x86_64.ovf
....
    <Item>
        <rasd:AllocationUnits>hertz * 10^6</rasd:AllocationUnits>
        <rasd:Description>Number of Virtual CPUs</rasd:Description>
        <rasd:ElementName>9 virtual CPU(s)</rasd:ElementName>
        <rasd:InstanceID>1</rasd:InstanceID>
        <rasd:ResourceType>3</rasd:ResourceType>
        <rasd:VirtualQuantity>9</rasd:VirtualQuantity>
        <vmw:CoresPerSocket ovf:required="false">3</vmw:CoresPerSocket>
      </Item>
....
2.4.1 Use virt-v2v to convert the OVA; the conversion finishes without errors or warnings
# virt-v2v -i ova esx6.5-rhel7.5-x86_64 -on ova-9cpu -of qcow2

2.4.2 Check the guest's XML after converting; v2v can parse the guest's CPU topology in the XML correctly
# virsh dumpxml ova-9cpu
....
  <cpu>
    <topology sockets='3' cores='3' threads='1'/>
  </cpu>
....

_____________________________________________________________________________

Scenario3: convert guest from VMX

3.1 Original guest has multiple sockets and multiple cores in CPU topology
# cat esx6.7-rhel7.5-x86_64/esx6.7-rhel7.5-x86_64.vmx
....
cpuid.coresPerSocket = "2"
numvcpus = "8"
...

3.1.1 Use virt-v2v to convert guest from vmx to rhv using -i vmx
# virt-v2v -i vmx  esx6.7-rhel7.5-x86_64/esx6.7-rhel7.5-x86_64.vmx -o rhv -os 10.66.144.40:/home/nfs_export -of qcow2 -b ovirtmgmt -on esx6.7-rhel7.5-x86_64-vmx

3.1.2 After finishing the conversion, check the guest's OVF at the export domain; v2v can parse the guest's CPU topology in the OVF correctly
# cat ed3314b1-9641-4095-8d2c-c27a6d0db279/ed3314b1-9641-4095-8d2c-c27a6d0db279.ovf
....
    <Item>
        <rasd:Caption>8 virtual cpu</rasd:Caption>
        <rasd:Description>Number of virtual CPU</rasd:Description>
        <rasd:InstanceId>1</rasd:InstanceId>
        <rasd:ResourceType>3</rasd:ResourceType>
        <rasd:num_of_sockets>4</rasd:num_of_sockets>
        <rasd:cpu_per_socket>2</rasd:cpu_per_socket>
        <rasd:threads_per_cpu>1</rasd:threads_per_cpu>
      </Item>
...

3.1.3 Import the guest to the data domain; the guest's CPU shows 8 (4:2:1) in the general info, which is correct

**********************************

3.2  Original guest has 1 socket and multiple cores in CPU topology
# cat esx6.7-rhel6.9-x86_64/esx6.7-rhel6.9-x86_64.vmx
....
numvcpus = "6"
cpuid.coresPerSocket = "6"
...
3.2.1 Use virt-v2v to convert the guest from vmx; the conversion finishes without errors or warnings
# virt-v2v -i vmx esx6.7-rhel6.9-x86_64/esx6.7-rhel6.9-x86_64.vmx -on vmx-1-socket

3.2.2 Check the guest's XML after converting; v2v can parse the guest's CPU topology in the XML correctly
# virsh dumpxml vmx-1-socket
....
 <cpu>
    <topology sockets='1' cores='6' threads='1'/>
  </cpu>
....

***********************************

3.3  Original guest has multiple sockets and 1 core in CPU topology
# cat esx6.7-rhel7.5-x86_64/esx6.7-rhel7.5-x86_64.vmx
....
numvcpus = "4"
....

3.3.1 Use virt-v2v to convert the guest from vmx to libvirt; the conversion finishes without errors or warnings
# virt-v2v -i vmx  esx6.7-rhel7.5-x86_64/esx6.7-rhel7.5-x86_64.vmx -on vmx-1-core -of qcow2

3.3.2 Check the guest's XML after converting; v2v CANNOT parse the guest's CPU topology in the libvirt XML correctly
# virsh dumpxml vmx-1-core
....
<vcpu placement='static'>4</vcpu>
....

3.3.3 Use virt-v2v to convert the guest from vmx to rhv4.2; the conversion finishes without errors or warnings
# virt-v2v -i vmx  esx6.7-rhel7.5-x86_64/esx6.7-rhel7.5-x86_64.vmx -on vmx-1-core -of qcow2 -o rhv -os 10.66.144.40:/home/nfs_export

3.3.4 Check the guest's OVF in the export domain; v2v can parse the guest's CPU topology in the guest OVF correctly
# cat 4b2d5dd2-9296-4803-83ec-2c7149817165.ovf 
....
   <Item>
        <rasd:Caption>4 virtual cpu</rasd:Caption>
        <rasd:Description>Number of virtual CPU</rasd:Description>
        <rasd:InstanceId>1</rasd:InstanceId>
        <rasd:ResourceType>3</rasd:ResourceType>
        <rasd:num_of_sockets>1</rasd:num_of_sockets>
        <rasd:cpu_per_socket>4</rasd:cpu_per_socket>
      </Item>
....

************************************

3.4  Original guest has an odd number of vCPUs (>1) in CPU topology
# cat esx6.7-rhel6.9-x86_64/esx6.7-rhel6.9-x86_64.vmx
....
numvcpus = "9"
cpuid.coresPerSocket = "3"
....

3.4.1 Use virt-v2v to convert the guest from vmx; the conversion finishes without errors or warnings
# virt-v2v -i vmx esx6.7-rhel6.9-x86_64/esx6.7-rhel6.9-x86_64.vmx -on vmx-singular-cpu -of raw

3.4.2 Check the guest's XML after converting; v2v can parse the guest's CPU topology in the XML correctly, and the guest powers on normally
# virsh dumpxml vmx-singular-cpu
....
  <cpu>
    <topology sockets='3' cores='3' threads='1'/>
  </cpu>
....

___________________________________________________________________________

Hi Pino

   Please help check the result of scenario 3.3, step 3.3.2: v2v CANNOT parse the guest's CPU topology correctly after converting the guest from VMX to libvirt when the original VMware guest has multiple sockets and 1 core in its CPU topology. Is it the same problem as bug 1590079?

Comment 14 Pino Toscano 2018-06-26 10:22:31 UTC
First of all, thanks Ming Xie for the well-done testing!

(In reply to mxie from comment #13)
>    Pls help to check the result of scenario3.3->3.3.2, v2v CAN NOT parse
> guest's cpu topology correctly after converting guest from VMX to libvirt
> when original vmware guest has multiple sockets and 1 core in CPU topology,
> is it a same problem with bug1590079 ?

It might be -- can you please attach the .vmx file of this scenario?

Comment 15 mxie@redhat.com 2018-06-26 10:29:17 UTC
Created attachment 1454616 [details]
scenario3-3.vmx

Comment 16 Pino Toscano 2018-06-26 10:44:18 UTC
(In reply to mxie from comment #13)
>    Pls help to check the result of scenario3.3->3.3.2, v2v CAN NOT parse
> guest's cpu topology correctly after converting guest from VMX to libvirt
> when original vmware guest has multiple sockets and 1 core in CPU topology,
> is it a same problem with bug1590079 ?

Indeed attachment 1454616 [details] is another case of bug 1590079.

Comment 17 mxie@redhat.com 2018-06-26 11:25:46 UTC
Thanks for the confirmation. According to comment 13 through comment 16, moving the bug from ON_QA to VERIFIED

Comment 19 errata-xmlrpc 2018-10-30 07:45:24 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:3021

