Created attachment 1450825 [details]
screenshot

Description of problem:
virt-v2v prints a warning about the CPU during a p2v conversion if the vCPU number is set lower than the actual number of host CPUs.

Version-Release number of selected component (if applicable):
virt-p2v-1.38.2-1.el7.iso
virt-v2v-1.38.2-3.el7.x86_64
libguestfs-1.38.2-3.el7.x86_64
libvirt-4.3.0-1.el7.x86_64
qemu-kvm-rhev-2.12.0-3.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Boot the host into the virt-p2v client.
2. Enter the conversion server info and test the connection.
3. On the conversion info interface, set the vCPU number lower than the actual number of host CPUs, e.g. vCPUs=2 when the host has 8 CPUs.
4. Start the conversion; virt-v2v prints a warning about the CPU during the p2v conversion. Please refer to the screenshot and log.

Actual results:
As described above.

Expected results:
No virt-v2v warning about the CPU during a p2v conversion when the vCPU number is set lower than the actual number of host CPUs.

Additional info:
Created attachment 1450826 [details] virt-p2v-vcpu-1.38.2-1
This happens because, starting from libguestfs 1.38, virt-v2v properly handles the CPU topology, verifying that number of sockets × number of cores per socket × number of threads per core = number of vCPUs. Hence, editing the number of vCPUs makes the equation fail, and virt-v2v reports that. The question here is why virt-p2v lets the user change the number of vCPUs at all, since virt-v2v converts guests preserving this information. So IMHO one solution would be to drop the editing of the number of vCPUs in p2v, since it will trigger this issue unless the topology is exactly 1 × 1 × 1. Otherwise, virt-p2v needs knobs to edit the CPU topology.
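The consistency rule described above can be sketched as follows (a minimal illustration only, not the actual virt-v2v code; the function name is hypothetical):

```python
# Minimal sketch of the topology consistency rule described above;
# an illustration, not the actual virt-v2v implementation.

def topology_is_consistent(vcpus, sockets, cores, threads):
    """Return True iff sockets * cores/socket * threads/core == vCPUs."""
    return sockets * cores * threads == vcpus

# A physical host with 1 socket, 4 cores/socket, 2 threads/core has 8 CPUs:
print(topology_is_consistent(8, 1, 4, 2))  # True

# Editing the vCPU count down to 2 while preserving the physical
# topology breaks the equation, which is what triggers the warning:
print(topology_is_consistent(2, 1, 4, 2))  # False
```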
While I don't know if anyone has ever used that box, the thought behind it is that you might well have a physical server with lots of CPUs, but not want to (or not be able to) provision a VM with the same number of vCPUs. Remember that physical machines by definition have the same or more CPUs than virtual machines, and that many physical machines that do single-server jobs are underutilised.
(In reply to Richard W.M. Jones from comment #4)
> While I don't know if anyone has ever used that box, the thought
> behind it is that you might well have a physical server with
> lots of CPUs, but not want to (or not be able to) provision a VM
> with the same number of vCPUs.

I understand this -- OTOH, reducing the number of vCPUs (or the amount of RAM) can break the guest, since it may no longer have the resources it required when it was a physical machine. Is Windows OK when its number of (v)CPUs changes, or does it require re-licensing?
Windows nearly always requires reactivation when doing a v2v or p2v, so I wouldn't worry about that. Large customers get around it by having a local KMS running (https://www.microsoft.com/Licensing/servicecenter/Help/FAQDetails.aspx?id=201#215)
(In reply to Richard W.M. Jones from comment #4)
> While I don't know if anyone has ever used that box, the thought
> behind it is that you might well have a physical server with
> lots of CPUs, but not want to (or not be able to) provision a VM
> with the same number of vCPUs. Remember that physical machines
> by definition have the same or more CPUs than virtual machines,
> and that many physical machines that do single-server jobs are
> underutilised.

I agree on that one. So v2v and p2v are a bit different in this regard. In case the topology is changed, I think a warning is totally fine (so that the user is aware), but we should still go ahead and do the migration. Thoughts?
Then, just like we allow the users to edit the number of vCPUs, should users be able to edit the CPU topology as well? Should p2v ask for confirmation when changing the CPU (and RAM too) configuration? E.g. "The CPU configuration of the machine was changed from $OLD to $NEW, which can result in an unbootable/unusable guest. Do you want to continue?"
(In reply to Pino Toscano from comment #8) > Then, just like we allow the users to edit the number of vCPUs, should users > be able to edit the CPU topology as well? I think so, yes. Presumably it's only the 3 fields sockets/cores/threads (probably to replace the vCPUs field since that would no longer be necessary). > Should p2v ask for confirmation when changing the CPU (and RAM too) > configuration? E.g. "The CPU configuration of the machine was changed from > $OLD to $NEW, which can result in an unbootable/unusable guest. Do you want > to continue?" Well, we've got away with it so far. But there is space for a warning message in the current UI IIRC, so if it's easy to add then yes.
Adding another instance of this bug.

Packages:
virt-v2v-1.38.2-8.el7.x86_64
libguestfs-1.38.2-8.el7.x86_64
virt-p2v-1.38.2-2.el7.iso
libvirt-4.5.0-3.el7.x86_64
qemu-kvm-rhev-2.12.0-7.el7.x86_64

Steps:
1. Boot a physical machine with RHEL 7 installed into the virt-p2v client.
2. Enter the virt-v2v conversion server info and pass the connection test.
3. Set the vCPU number lower than the total number of host vCPUs, enter the conversion info (-o libvirt -os default), then start the conversion.
4. The p2v conversion finishes with a warning and there is no converted guest listed in the default pool:
....
virt-v2v: warning: could not define libvirt domain. The libvirt XML is still available in ‘/tmp/v2vlibvirta46b27.xml’. Try running ‘virsh define /tmp/v2vlibvirta46b27.xml’ yourself instead
....
5. The detailed error in the v2v conversion log:
....
virsh 'define' '/tmp/v2vlibvirta46b27.xml'
error: Failed to define domain from /tmp/v2vlibvirta46b27.xml
error: unsupported configuration: CPU topology doesn't match maximum vcpu count
....

Additional info:
The above problem can also be reproduced if the vCPU number is set higher than the total number of host vCPUs before the p2v conversion.
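For illustration, the kind of mismatch libvirt rejects looks roughly like this in the domain XML (hypothetical values, not taken from the actual /tmp/v2vlibvirta46b27.xml):

```xml
<!-- Hypothetical fragment: the vCPU count was edited down to 2, but the
     preserved physical topology still multiplies out to 1 * 4 * 2 = 8,
     so "virsh define" fails with the error quoted above. -->
<vcpu>2</vcpu>
<cpu>
  <topology sockets='1' cores='4' threads='2'/>
</cpu>
```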
Created attachment 1470258 [details] virt-p2v-modify-vcpu-to-libvirt-log
Adding another instance of this bug (converting a host to RHV 4.2 with the vCPU number modified before the p2v conversion).

Packages:
virt-v2v-1.38.2-9.el7.x86_64
libguestfs-1.38.2-9.el7.x86_64
libvirt-4.5.0-4.el7.x86_64
qemu-kvm-rhev-2.12.0-8.el7.x86_64
virt-p2v-1.38.2-2.el7.iso
rhv: 4.2.5-0.1.el7ev

Steps:
1. Install RHEL 7.5 on an accraid host.
2. Boot the host into the virt-p2v client.
3. Enter the conversion server info and pass the connection test.
4. Enter the conversion info and set the vCPU number to 1, then start the conversion.
5. The conversion finishes without error; import the guest from the export domain to the data domain on RHV.
6. But the number of CPU cores of the guest shows as 4 in the general info on RHV 4.2; please refer to the screenshot "vcpu-on-rhv".
7. Check the guest OVF info:
# cat 846ed6c9-f0e6-4c99-933f-2cc95b2ed9f3/846ed6c9-f0e6-4c99-933f-2cc95b2ed9f3.ovf
....
<Item>
  <rasd:Caption>1 virtual cpu</rasd:Caption>
  <rasd:Description>Number of virtual CPU</rasd:Description>
  <rasd:InstanceId>1</rasd:InstanceId>
  <rasd:ResourceType>3</rasd:ResourceType>
  <rasd:num_of_sockets>1</rasd:num_of_sockets>
  <rasd:cpu_per_socket>4</rasd:cpu_per_socket>
  <rasd:threads_per_cpu>1</rasd:threads_per_cpu>
</Item>
....
Created attachment 1470667 [details] vcpu-on-rhv.png
This bug will be addressed in the next major release.
Hi Pino,

Could you please help to check the problem of comment 12? I can reproduce it with RHEL 7.8 p2v and RHEL 8.2 v2v. I think this problem is different from this bug; may I file a new bug to track it?

Problem: the vCPU setting for the host is lost after the p2v conversion if the configured vCPU number is less than the total.

Packages:
virt-v2v-1.38.4-15.module+el8.2.0+5297+222a20af.x86_64
libguestfs-1.38.4-15.module+el8.2.0+5297+222a20af.x86_64
libvirt-4.5.0-40.module+el8.2.0+5761+d16d25e7.x86_64
qemu-kvm-2.12.0-98.module+el8.2.0+5698+10a84757.x86_64
virtio-win-1.9.10-3.el8.noarch
virt-p2v-1.40.2-1.el7

Additional info: the vCPU setting for the host is kept after the p2v conversion if the configured vCPU number is more than the total; please refer to comment 10.
(In reply to mxie from comment #15)
> Could you please help to check the problem of comment 12? I can reproduce
> it with rhel7.8 p2v and rhel8.2 v2v, I think this problem is different with
> the bug, may I file a new bug to track it?

No, it is the same issue, and it is still not fixed.
Adding a few more notes to this bug, mostly so we don't forget about them.

An additional thing to consider is that the number of vCPUs and the topology can be set independently using config parameters: p2v.vcpus, p2v.cpu.sockets, p2v.cpu.cores, p2v.cpu.threads.

Most probably what we should do on a topology mismatch with the vCPU count is:
- in automated conversion (without GUI): error out directly before starting
- in GUI mode: do not let the user start the conversion

While a hard error is harsh, this is what virt-v2v has done since at least 1.38, so letting the conversion start will just hit the issue later on.
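For illustration, a mismatching combination of those config parameters could look like this on the kernel command line (hypothetical values; 1 × 4 × 2 = 8 does not match the requested 2 vCPUs, so an automated conversion would hit the issue):

```
p2v.vcpus=2 p2v.cpu.sockets=1 p2v.cpu.cores=4 p2v.cpu.threads=2
```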
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release. Therefore, it is being closed. If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.
I apologise for the actions of the "stale" bug process above. This bug is not stale, and I am reopening it. All bugs are important.
My apologies, this bug was closed by a broken process that we do not have any control over. Reopening.
(CC Daniel) I don't think we should allow the P2V user to tweak the topology manually. VCPU topology is nuanced, libvirtd contains many smarts in that area, and we shouldn't try to duplicate them. I think we should offer two options in total: - stick with the physical machine topology (that's by definition a correct topology -- it exists in the physical world, so if libvirt cannot cope with it, it's a libvirt issue) - if the user touches the VCPU count (sets it to N), we should always generate a 1 socket / N cores / 1 thread topology in the generated domain XML (better yet, omit the topology information altogether, just produce a flat VCPU count). On the P2V UI (gtk2?), this would probably require a radio button. One branch would be "stick with physical CPU topology", the other branch would be "specify flat VCPU count". Feedback requested & welcome :)
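A hypothetical sketch of what the second branch would emit in the generated domain XML (assumed shape, not existing virt-p2v output):

```xml
<!-- Hypothetical sketch of the "flat VCPU count" option above: the user
     set N=4, and the <topology> element is omitted entirely, so there is
     no sockets/cores/threads product that has to match the vCPU count. -->
<vcpu>4</vcpu>
```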
(In reply to Laszlo Ersek from comment #28) > (CC Daniel) > > I don't think we should allow the P2V user to tweak the topology manually. > VCPU topology is nuanced, libvirtd contains many smarts in that area, and we > shouldn't try to duplicate them. I think we should offer two options in > total: > > - stick with the physical machine topology (that's by definition a correct > topology -- it exists in the physical world, so if libvirt cannot cope with > it, it's a libvirt issue) > > - if the user touches the VCPU count (sets it to N), we should always > generate a 1 socket / N cores / 1 thread topology in the generated domain > XML (better yet, omit the topology information altogether, just produce a > flat VCPU count). > > On the P2V UI (gtk2?), this would probably require a radio button. One > branch would be "stick with physical CPU topology", the other branch would > be "specify flat VCPU count". I agree this would be better.
I'd like to take a look at this later. I expect some incompatibility with previous p2v releases; see Pino's comment 17. Once we change the GUI per comment 29, the kernel cmdline (= headless / automated conversion) should follow suit.
[p2v PATCH 0/6] restrict vCPU topology to (a) fully populated physical, or (b) 1 * N * 1 Message-Id: <20220905112531.10654-1-lersek> https://listman.redhat.com/archives/libguestfs/2022-September/029806.html
[p2v PATCH v2 0/6] restrict vCPU topology to (a) fully populated physical, or (b) 1 * N * 1 Message-Id: <20220908162746.25031-1-lersek> https://listman.redhat.com/archives/libguestfs/2022-September/029845.html
(In reply to Laszlo Ersek from comment #32) > [p2v PATCH v2 0/6] restrict vCPU topology to (a) fully populated physical, or (b) 1 * N * 1 > Message-Id: <20220908162746.25031-1-lersek> > https://listman.redhat.com/archives/libguestfs/2022-September/029845.html Dropped patch#5 per DanPB's feedback under v1 -- https://listman.redhat.com/archives/libguestfs/2022-September/029841.html --, retested the resultant 5-part series, and merged it as commit range 0687cea6a86e..6f1adeba1aab.
LiveCD built at upstream commit 6f1adeba1aab: http://lacos.interhost.hu/livecd-p2v-202209111257.iso sha256sum: be70b5fb95ffc36d130f069a8eb64deb1bdfbc5614d7d70529014d8057feb2b0 livecd-p2v-202209111257.iso
Tried with the p2v ISO in comment 34 and with the versions:
virt-p2v-1.42.2 (x86_64)
virt-v2v-2.0.7-6.el9.x86_64
libguestfs-1.48.4-2.el9.x86_64
libvirt-8.5.0-6.el9.x86_64

Host OS: RHEL 8.6 and Windows 2022

Steps:

Scenario 1: To RHV with vCPUs < pCPUs
1. Boot the host into the virt-p2v client.
2. Enter the conversion server info and test the connection.
3. On the conversion info interface, do not check "Copy fully populated pCPU topology" and set vCPUs lower than pCPUs, e.g. vCPUs=2 when pCPUs=8.
4. Start the conversion; no errors are shown during the conversion.
5. On the RHV page, check that the number of vCPUs exactly equals the configured one, and start the VM to verify the checkpoints.

Scenario 2: To RHV with vCPUs = pCPUs
1. Boot the host into the virt-p2v client.
2. Enter the conversion server info and test the connection.
3. Enter the RHV conversion info and check "Copy fully populated pCPU topology".
4. Start the conversion; no errors are shown during the conversion.
5. On the RHV page, check that the number of vCPUs is the same as the number of pCPUs, and start the VM to verify the checkpoints.

Scenario 3: To libvirt with vCPUs < pCPUs
1. Boot the host into the virt-p2v client.
2. Enter the conversion server info and test the connection.
3. Enter the libvirt conversion info with the "default" output storage. Do not check "Copy fully populated pCPU topology" and set vCPUs lower than pCPUs, e.g. vCPUs=2 when pCPUs=8.
4. Start the conversion; no errors are shown during the conversion.
5. On the conversion server, check that the number of vCPUs exactly equals the configured one.

Scenario 4: To libvirt with vCPUs = pCPUs
1. Boot the host into the virt-p2v client.
2. Enter the conversion server info and test the connection.
3. Enter the libvirt conversion info and check "Copy fully populated pCPU topology".
4. Start the conversion; no errors are shown during the conversion.
5. On the conversion server, check that the number of vCPUs is the same as the number of pCPUs.

All the above scenarios passed. No errors were shown during conversion.
Based on comment 36, the bug has been fixed, so closing the bug.