Created attachment 1405801 [details]
Error in engine.log

Description of problem:
Failed to import a VMware OVA VM.

Version-Release number of selected component (if applicable):
ovirt-engine 4.2.1.7

How reproducible:
100%

Steps to Reproduce:
Log in to the oVirt manager -> click Dashboard -> click Virtual Machines -> select Import -> choose "VMware application (OVA)" and enter the path info.

Actual results:
Failed to load VM configuration from OVA file: /mnt/available/qg-db01.ova

Expected results:
VM imported.

Additional info:
The same OVA file imports fine into a VMware environment different from the source one.
Created attachment 1405802 [details] OVF file
Created attachment 1405803 [details] ova content
Please attach vdsm.log from the host or try in 4.2.2 first. Thanks
It's a production environment; I prefer to use an official release. engine-upgrade-check says I'm up to date with version 4.2.1.7-1.el7.centos.

The only relevant entries in the host's vdsm log are:

2018-03-08 10:53:24,514+0100 INFO (jsonrpc/0) [api.host] START getExternalVmFromOva(ova_path=u'/mnt/available/qg-db01.ova') from=::ffff:94.177.173.236,60070, flow_id=403338be-16f8-419e-a6b8-d7d08c9c8f27 (api:46)
2018-03-08 10:53:24,524+0100 INFO (jsonrpc/0) [api.host] FINISH getExternalVmFromOva return={'status': {'message': 'Done', 'code': 0}, 'vmList': {'status': 'Down', 'disks': [{'allocation': None, 'capacity': '21474836480', 'type': 'disk', 'alias': 'qg-db01-1.vmdk'}, {'allocation': None, 'capacity': '75161927680', 'type': 'disk', 'alias': 'qg-db01-2.vmdk'}, {'allocation': None, 'capacity': '1937768448', 'type': 'disk', 'alias': 'qg-db01-3.vmdk'}], 'smp': 2, 'memSize': 5120, 'vmName': 'qg-db01', 'networks': [{'bridge': 'dvs01 sa-qg (289)', 'model': 'VmxNet3', 'type': 'bridge', 'dev': 'Network adapter 1'}]}} from=::ffff:94.177.173.236,60070, flow_id=403338be-16f8-419e-a6b8-d7d08c9c8f27 (api:52)
2018-03-08 10:53:24,524+0100 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call Host.getExternalVmFromOva succeeded in 0.01 seconds (__init__:573)
not sure if the allocation can be None, Arik?
(In reply to Michal Skrivanek from comment #5)
> not sure if the allocation can be None, Arik?

According to the specification, it may not be specified. But we currently require it, since we don't have a mechanism that automatically enlarges disks during v2v.

Marco, you would need to set the 'ovf:populatedSize' attribute on the Disk element.
If you use file storage, its value is not that important; you can just set it to 1073741824 (1 GiB).
If you use block storage, its value should be closer to the real size of the disk - I suggest taking the size you see in 'ova content' and adding another 10-15%. (If you use a preallocated policy, then the size is again not important.)
(In reply to Arik from comment #6)
> (In reply to Michal Skrivanek from comment #5)
> > not sure if the allocation can be None, Arik?
>
> According to the specification, it may not be specified. But we currently
> require it, since we don't have a mechanism that automatically enlarges disks
> during v2v.
>
> Marco, you would need to set the 'ovf:populatedSize' attribute on the Disk
> element.
> If you use file storage, its value is not that important; you can just set
> it to 1073741824 (1 GiB).
> If you use block storage, its value should be closer to the real size of the
> disk - I suggest taking the size you see in 'ova content' and adding another
> 10-15%. (If you use a preallocated policy, then the size is again not
> important.)

Concretely, I would suggest the following changes in the OVF for block storage:

<Disk ovf:capacityAllocationUnits="byte" ovf:format="http://www.vmware.com/interfaces/specifications/vmdk.html#streamOptimized" ovf:diskId="vmdisk1" ovf:capacity="21474836480" ovf:populatedSize="4472941082" ovf:fileRef="file1"/>
<Disk ovf:capacityAllocationUnits="byte" ovf:format="http://www.vmware.com/interfaces/specifications/vmdk.html#streamOptimized" ovf:diskId="vmdisk2" ovf:capacity="75161927680" ovf:populatedSize="26825917594" ovf:fileRef="file2"/>
<Disk ovf:capacityAllocationUnits="byte" ovf:format="http://www.vmware.com/interfaces/specifications/vmdk.html#streamOptimized" ovf:diskId="vmdisk3" ovf:capacity="1937768448" ovf:populatedSize="646422324" ovf:fileRef="file3"/>

Marco, could you try again with those changes?
Hi Arik.
Using your suggestion I was able to start the import of the VM.
The VM appeared for about a minute in the oVirt manager list (down, with a lock).
When the import failed, the VM disappeared.
Created attachment 1407740 [details] 2018-03-13 ovirt manager events
Created attachment 1407741 [details] 2018-03-13 vdsm.log host h1
Created attachment 1407742 [details] 2018-03-13 vdsm.log host h3
(In reply to Marco from comment #8)
> Hi Arik.
> Using your suggestion I was able to start the import of the VM.

Great, so the configuration is alright now.

> The VM appeared for about a minute in the oVirt manager list (down, with a lock).
> When the import failed, the VM disappeared.

Yeah, the conversion of the disks failed. To investigate this, we need the log of the import. In the VDSM log on the host that carried out the conversion you'll find a link to the relevant log file.
Hi Arik,
I already attached the logs:
- attachment 1407741 [details] - host h1, which has the original OVA file
- attachment 1407742 [details] - host h3, which has the storage I used to import the OVA disks
(In reply to Marco from comment #13)
> Hi Arik,
> I already attached the logs:
> - attachment 1407741 [details] - host h1, which has the original OVA file
> - attachment 1407742 [details] - host h3, which has the storage I used to
> import the OVA disks

Right, I looked at those logs; neither of them includes the information about the conversion. That would be in a separate log (not vdsm.log) that is referenced by vdsm.log.
I'm sorry, I searched all the logs on my h3 host without finding anything useful. Any suggestion on how to find the log of the conversion error?
(In reply to Marco from comment #16)
> I'm sorry, I searched all the logs on my h3 host without finding anything
> useful. Any suggestion on how to find the log of the conversion error?

You can try grepping for the name of the OVA file you tried to import under /var/log/vdsm/.
The only relevant info I found is already in attachment 1407742 [details], but it seems all fine...

2018-03-13 23:37:10,660+0100 INFO (jsonrpc/7) [vdsm.api] START createVolume(sdUUID=u'2d7de0ae-58a6-4823-a613-4be365165631', spUUID=u'5a6b4d51-01e4-0189-023d-000000000076', imgUUID=u'a9ffc836-314b-4466-b669-9758057329b3', size=u'21474836480', volFormat=5, preallocate=2, diskType=u'DATA', volUUID=u'8d3666d2-56f6-4f2d-be71-3502da39ff0e', desc=u'{"DiskAlias":"qg-db01-1.vmdk","DiskDescription":""}', srcImgUUID=u'00000000-0000-0000-0000-000000000000', srcVolUUID=u'00000000-0000-0000-0000-000000000000', initialSize=None) from=::ffff:94.177.173.236,36376, flow_id=6bfb2757-40de-43a2-9098-9ae0162cb982, task_id=a704b089-b4c9-4574-8cbb-f6b5c41b73c4 (api:46)
2018-03-13 23:37:10,719+0100 INFO (jsonrpc/7) [vdsm.api] FINISH createVolume return=None from=::ffff:94.177.173.236,36376, flow_id=6bfb2757-40de-43a2-9098-9ae0162cb982, task_id=a704b089-b4c9-4574-8cbb-f6b5c41b73c4 (api:52)
2018-03-13 23:37:10,958+0100 INFO (jsonrpc/0) [vdsm.api] START createVolume(sdUUID=u'2d7de0ae-58a6-4823-a613-4be365165631', spUUID=u'5a6b4d51-01e4-0189-023d-000000000076', imgUUID=u'29f99004-daa5-49b2-a194-3d9b25006064', size=u'75161927680', volFormat=5, preallocate=2, diskType=u'DATA', volUUID=u'89ba8480-7b56-4aa2-b4fa-42c172f8d015', desc=u'{"DiskAlias":"qg-db01-2.vmdk","DiskDescription":""}', srcImgUUID=u'00000000-0000-0000-0000-000000000000', srcVolUUID=u'00000000-0000-0000-0000-000000000000', initialSize=None) from=::ffff:94.177.173.236,36376, flow_id=6bfb2757-40de-43a2-9098-9ae0162cb982, task_id=e2f62fba-adc2-4b45-b4de-5a1e33571065 (api:46)
2018-03-13 23:37:11,038+0100 INFO (jsonrpc/0) [vdsm.api] FINISH createVolume return=None from=::ffff:94.177.173.236,36376, flow_id=6bfb2757-40de-43a2-9098-9ae0162cb982, task_id=e2f62fba-adc2-4b45-b4de-5a1e33571065 (api:52)
2018-03-13 23:37:11,379+0100 INFO (jsonrpc/4) [vdsm.api] START createVolume(sdUUID=u'2d7de0ae-58a6-4823-a613-4be365165631', spUUID=u'5a6b4d51-01e4-0189-023d-000000000076', imgUUID=u'ac3be5a4-9e7c-432e-8c7c-fe75bf6b8648', size=u'1937768448', volFormat=5, preallocate=2, diskType=u'DATA', volUUID=u'99db485f-834d-492b-8c1d-08f72e5a7f42', desc=u'{"DiskAlias":"qg-db01-3.vmdk","DiskDescription":""}', srcImgUUID=u'00000000-0000-0000-0000-000000000000', srcVolUUID=u'00000000-0000-0000-0000-000000000000', initialSize=None) from=::ffff:94.177.173.236,36376, flow_id=6bfb2757-40de-43a2-9098-9ae0162cb982, task_id=1fe4c4b4-8a36-46ba-b002-d171813e5224 (api:46)
2018-03-13 23:37:11,436+0100 INFO (jsonrpc/4) [vdsm.api] FINISH createVolume return=None from=::ffff:94.177.173.236,36376, flow_id=6bfb2757-40de-43a2-9098-9ae0162cb982, task_id=1fe4c4b4-8a36-46ba-b002-d171813e5224 (api:52)
2018-03-13 23:37:14,161+0100 INFO (jsonrpc/7) [vdsm.api] START getVolumeInfo(sdUUID=u'2d7de0ae-58a6-4823-a613-4be365165631', spUUID=u'5a6b4d51-01e4-0189-023d-000000000076', imgUUID=u'a9ffc836-314b-4466-b669-9758057329b3', volUUID=u'8d3666d2-56f6-4f2d-be71-3502da39ff0e', options=None) from=::ffff:94.177.173.236,36350, flow_id=6bfb2757-40de-43a2-9098-9ae0162cb982, task_id=28935fad-7250-4a5d-9db0-e6bac2d1a43f (api:46)
2018-03-13 23:37:14,203+0100 INFO (jsonrpc/7) [vdsm.api] FINISH getVolumeInfo return={'info': {'status': 'OK', 'domain': '2d7de0ae-58a6-4823-a613-4be365165631', 'voltype': 'LEAF', 'description': '{"DiskAlias":"qg-db01-1.vmdk","DiskDescription":""}', 'parent': '00000000-0000-0000-0000-000000000000', 'format': 'RAW', 'generation': 0, 'image': 'a9ffc836-314b-4466-b669-9758057329b3', 'ctime': '1520980631', 'disktype': 'DATA', 'legality': 'LEGAL', 'mtime': '0', 'apparentsize': '21474836480', 'children': [], 'pool': '', 'capacity': '21474836480', 'uuid': u'8d3666d2-56f6-4f2d-be71-3502da39ff0e', 'truesize': '0', 'type': 'SPARSE', 'lease': {'owners': [], 'version': None}}} from=::ffff:94.177.173.236,36350, flow_id=6bfb2757-40de-43a2-9098-9ae0162cb982, task_id=28935fad-7250-4a5d-9db0-e6bac2d1a43f (api:52)
2018-03-13 23:37:14,224+0100 INFO (jsonrpc/0) [vdsm.api] START getVolumeInfo(sdUUID=u'2d7de0ae-58a6-4823-a613-4be365165631', spUUID=u'5a6b4d51-01e4-0189-023d-000000000076', imgUUID=u'ac3be5a4-9e7c-432e-8c7c-fe75bf6b8648', volUUID=u'99db485f-834d-492b-8c1d-08f72e5a7f42', options=None) from=::ffff:94.177.173.236,36350, flow_id=6bfb2757-40de-43a2-9098-9ae0162cb982, task_id=027c6cee-b0de-4177-b4f9-625f986c9811 (api:46)
2018-03-13 23:37:14,251+0100 INFO (jsonrpc/0) [vdsm.api] FINISH getVolumeInfo return={'info': {'status': 'OK', 'domain': '2d7de0ae-58a6-4823-a613-4be365165631', 'voltype': 'LEAF', 'description': '{"DiskAlias":"qg-db01-3.vmdk","DiskDescription":""}', 'parent': '00000000-0000-0000-0000-000000000000', 'format': 'RAW', 'generation': 0, 'image': 'ac3be5a4-9e7c-432e-8c7c-fe75bf6b8648', 'ctime': '1520980631', 'disktype': 'DATA', 'legality': 'LEGAL', 'mtime': '0', 'apparentsize': '1937768448', 'children': [], 'pool': '', 'capacity': '1937768448', 'uuid': u'99db485f-834d-492b-8c1d-08f72e5a7f42', 'truesize': '0', 'type': 'SPARSE', 'lease': {'owners': [], 'version': None}}} from=::ffff:94.177.173.236,36350, flow_id=6bfb2757-40de-43a2-9098-9ae0162cb982, task_id=027c6cee-b0de-4177-b4f9-625f986c9811 (api:52)
2018-03-13 23:37:14,271+0100 INFO (jsonrpc/4) [vdsm.api] START getVolumeInfo(sdUUID=u'2d7de0ae-58a6-4823-a613-4be365165631', spUUID=u'5a6b4d51-01e4-0189-023d-000000000076', imgUUID=u'29f99004-daa5-49b2-a194-3d9b25006064', volUUID=u'89ba8480-7b56-4aa2-b4fa-42c172f8d015', options=None) from=::ffff:94.177.173.236,36350, flow_id=6bfb2757-40de-43a2-9098-9ae0162cb982, task_id=7e410214-05b4-441a-89c9-34f97bc5fa9a (api:46)
2018-03-13 23:37:14,318+0100 INFO (jsonrpc/4) [vdsm.api] FINISH getVolumeInfo return={'info': {'status': 'OK', 'domain': '2d7de0ae-58a6-4823-a613-4be365165631', 'voltype': 'LEAF', 'description': '{"DiskAlias":"qg-db01-2.vmdk","DiskDescription":""}', 'parent': '00000000-0000-0000-0000-000000000000', 'format': 'RAW', 'generation': 0, 'image': '29f99004-daa5-49b2-a194-3d9b25006064', 'ctime': '1520980631', 'disktype': 'DATA', 'legality': 'LEGAL', 'mtime': '0', 'apparentsize': '75161927680', 'children': [], 'pool': '', 'capacity': '75161927680', 'uuid': u'89ba8480-7b56-4aa2-b4fa-42c172f8d015', 'truesize': '0', 'type': 'SPARSE', 'lease': {'owners': [], 'version': None}}} from=::ffff:94.177.173.236,36350, flow_id=6bfb2757-40de-43a2-9098-9ae0162cb982, task_id=7e410214-05b4-441a-89c9-34f97bc5fa9a (api:52)
Can we take the value from somewhere else if it's not in the expected property? Any other workaround you can think of?
(In reply to Michal Skrivanek from comment #19)
> Can we take the value from somewhere else if it's not in the expected
> property? Any other workaround you can think of?

Theoretically, when we query the OVF from the OVA, the script that we execute could also return the size of each disk's entry in the tar file, assuming that the header of that entry is correct (and I doubt that we can rely on it when the content is stream-optimized/compressed).

IMHO, we should introduce a mechanism, equivalent to the automatic extension of images by VDSM, that would extend the disks during upload, assuming that the common use of virt-v2v will make use of the image-upload API. That way we won't have to know the size of the disks in advance.
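[For illustration, a minimal Python sketch of the idea above - not how VDSM actually does it: read the per-disk entry sizes from the OVA's tar headers. As noted, for stream-optimized/compressed VMDKs this is the stored size of the archive member, not the populated size of the disk, so it may not be reliable.]

# Hypothetical sketch: list the size of each disk entry recorded in the
# OVA's tar headers.
import tarfile

def ova_entry_sizes(ova_path):
    sizes = {}
    with tarfile.open(ova_path) as tar:
        for member in tar.getmembers():
            if member.isfile() and member.name.lower().endswith(".vmdk"):
                sizes[member.name] = member.size
    return sizes

# e.g. ova_entry_sizes("/mnt/available/qg-db01.ova")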
(In reply to Marco from comment #18)
> The only relevant info I found is already in attachment 1407742 [details],
> but it seems all fine...

Actually, the log file should reside on h1, if that's the host that the original OVA was placed on. Can you please attach the VDSM log from h1 that covers the time of the failed conversion (or try importing the OVA again if that log no longer exists)?
(In reply to Arik from comment #21)
> (In reply to Marco from comment #18)
> > The only relevant info I found is already in attachment 1407742 [details],
> > but it seems all fine...
>
> Actually, the log file should reside on h1, if that's the host that the
> original OVA was placed on. Can you please attach the VDSM log from h1 that
> covers the time of the failed conversion (or try importing the OVA again if
> that log no longer exists)?

The h1 vdsm logs are already attached to this bug.
(In reply to Marco from comment #22)
> The h1 vdsm logs are already attached to this bug.

You mean attachment 1407741 [details]? It contains only two lines related to getExternalVmFromOva, right? That's not enough; we need the full log.
There's nothing else... If you need a more verbose log, please give me some indication of how to increase the log verbosity level.
(In reply to Marco from comment #25)
> There's nothing else... If you need a more verbose log, please give me some
> indication of how to increase the log verbosity level.

No need to change the log's verbosity - the VDSM log contains much more information than that by default. Without more information on the failure during the conversion, we can unfortunately only address the original issue of the disk size missing from the OVF.
I tried the import again. I found in vdsm.log a reference to an import log file. In that file I found the following error:

/var/tmp/ova.Rz7nWE/qg-db01.mf (actual SHA256(qg-db01.ovf) = d4a6e766dc20f068c20395c412d32dd224dd9179d2f53540fdd5c3c937bdc712, expected SHA256(qg-db01.ovf) = bd3375b920f566f5f203d2eb0f68d349a46465d596e269760459a789cd93e282)

The file qg-db01.mf contains hashes (but it's optional), so I deleted it from the OVA file.

After that, the import succeeded. No more number format error.
(In reply to Marco from comment #27)
> I tried the import again. I found in vdsm.log a reference to an import log
> file. In that file I found the following error:
>
> /var/tmp/ova.Rz7nWE/qg-db01.mf (actual SHA256(qg-db01.ovf) =
> d4a6e766dc20f068c20395c412d32dd224dd9179d2f53540fdd5c3c937bdc712, expected
> SHA256(qg-db01.ovf) =
> bd3375b920f566f5f203d2eb0f68d349a46465d596e269760459a789cd93e282)
>
> The file qg-db01.mf contains hashes (but it's optional), so I deleted it from
> the OVA file.
>
> After that, the import succeeded. No more number format error.

Oh, right! We had to update the .mf file after modifying the OVF... Good to hear that it works for you now.

Reopening, since comment #19 suggests an alternative to modifying the OVF manually.
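[For what it's worth, instead of deleting the manifest one can refresh the digest of the modified OVF in it. A minimal Python sketch, assuming the .mf uses lines of the form "SHA256(<file>)= <hexdigest>" (as the checksum error above suggests) and that the OVA has already been unpacked and will be repacked afterwards; the helper name is made up for illustration.]

# Hypothetical helper: recompute the SHA256 of the edited OVF and replace
# its line in the .mf manifest, so the checksum matches again after repacking.
import hashlib
import os
import re

def refresh_manifest_entry(ovf_path, mf_path):
    name = os.path.basename(ovf_path)
    with open(ovf_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    with open(mf_path) as f:
        lines = f.readlines()
    pattern = re.compile(r"^SHA256\(%s\)\s*=" % re.escape(name))
    with open(mf_path, "w") as f:
        for line in lines:
            if pattern.match(line):
                line = "SHA256(%s)= %s\n" % (name, digest)
            f.write(line)

# e.g. refresh_manifest_entry("qg-db01.ovf", "qg-db01.mf")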
It seems ESXi 6.5 creates such OVFs, so we need to solve this on the oVirt side. If there is no ovf:populatedSize, just use ovf:capacity? I wouldn't add any buffer, as I would expect this to be the physical size of the volume, with enough available space on the actual filesystem inside.
*** Bug 1572871 has been marked as a duplicate of this bug. ***
(In reply to Michal Skrivanek from comment #29)
> It seems ESXi 6.5 creates such OVFs, so we need to solve this on the oVirt
> side. If there is no ovf:populatedSize, just use ovf:capacity?

I wouldn't do that; it means preallocating the entire disk on block storage.

> I wouldn't add any buffer, as I would expect this to be the physical size of
> the volume, with enough available space on the actual filesystem inside.

Sorry, I don't understand this part.

Nisim, could you please generate such an OVA? I would like to see whether vSphere sets the right sizes in the tar headers (I doubt that, though).

Assuming the tar headers won't be set with the right sizes, I propose the following workaround (which would work fine for the general case of OVA files with a single disk inside); a rough sketch follows below:
1. The query_ova.py script [1] extracts the OVF configuration.
2. It checks whether the populated size is set within the OVF configuration.
3. If populated sizes do not exist:
   3.1. Fetch the size of the OVA file.
   3.2. If the disk is not compressed, return that size with a buffer of 15%.
   3.3. If the disk is compressed, return 3 times the size of the disk and (optionally) shrink the disk after the conversion is done.

[1] https://github.com/oVirt/ovirt-engine/blob/master/packaging/playbooks/roles/ovirt-ova-query/files/query_ova.py
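[A rough Python sketch of steps 3.1-3.3 above, just to make the proposal concrete; this is an illustration, not the actual query_ova.py code, and the 15% / 3x factors are the guesses from the proposal.]

# Hypothetical fallback when the OVF carries no populated size: estimate one
# from the size of the OVA (or disk) file itself.
import os

def estimate_populated_size(path, compressed):
    file_size = os.path.getsize(path)   # 3.1: size of the OVA/disk file
    if not compressed:
        return int(file_size * 1.15)    # 3.2: file size plus a ~15% buffer
    return file_size * 3                # 3.3: assume up to 3x expansion,
                                        #      optionally shrink afterwards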
Setting needinfo on Michal again. In the case of an OVA folder, the same procedure described in comment 31 can be applied using the size of the disk file rather than the whole OVA file.
So you have ovf:capacity as a mandatory field, which corresponds to the virtual size, and an optional ovf:populatedSize. The source file has an optional ovf:size, which should be what you're looking for. Since it is optional, the only other thing we have is indeed the file size.
When compression is used, the ovf:size should match the expanded size, IIUC.

We also read ovf:actual_size, which is not part of the standard but has the semantics of ovf:populatedSize.

In that case we do need extra space on block storage, but I would move from a percentage to a fixed amount of space; 200 MB should be plenty.
(In reply to Michal Skrivanek from comment #35)
> So you have ovf:capacity as a mandatory field, which corresponds to the
> virtual size, and an optional ovf:populatedSize. The source file has an
> optional ovf:size, which should be what you're looking for. Since it is
> optional, the only other thing we have is indeed the file size.
> When compression is used, the ovf:size should match the expanded size, IIUC.

"When compression is used, the ovf:size attribute shall specify the size of the actual compressed file."
I understand it differently - it specifies the size of the compressed file.

> We also read ovf:actual_size, which is not part of the standard but has the
> semantics of ovf:populatedSize.

Right, that should have been "ovirt:actualSize" instead...

> In that case we do need extra space on block storage, but I would move from
> a percentage to a fixed amount of space; 200 MB should be plenty.

In all the examples of such OVAs, I saw that the disk format is stream-optimized, which IIUC means compressed, so I think this is the interesting scenario we should focus on. I have no hunch for the efficiency of the compression algorithm in that case, so maybe we should compute, e.g., the minimum of three times the actual file size and the virtual size? (See the sketch below.)
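[To illustrate the refinement just proposed, a tiny Python sketch that caps the 3x guess at the disk's virtual size, under the assumption that the file size at hand is the compressed size of the disk entry.]

# Hypothetical refinement of the earlier estimate: never allocate more than
# the disk's virtual capacity (ovf:capacity).
def estimate_compressed_populated_size(file_size, virtual_capacity):
    return min(3 * file_size, virtual_capacity)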
Verified:
vdsm-4.20.31-1.el7ev.x86_64
libvirt-client-3.9.0-14.el7_5.6.x86_64
qemu-kvm-rhev-2.10.0-21.el7_5.4.x86_64
sanlock-3.6.0-1.el7.x86_64
virt-v2v-1.36.10-6.10.rhvpreview.el7ev.x86_64

Verification scenario:
1. Import an OVA with only ovf:capacity and without ovf:populatedSize and ovf:size.
2. Verify the import succeeds, run the VM, and verify the VM is running properly.
This bugzilla is included in oVirt 4.2.4 release, published on June 26th 2018. Since the problem described in this bug report should be resolved in oVirt 4.2.4 release, it has been closed with a resolution of CURRENT RELEASE. If the solution does not work for you, please open a new bug report.