Bug 1607736 - Creating a template from a snapshot of a virtual machine fails when the snapshot's memory state is saved.
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: BLL.Storage
Version: 4.2.3
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ovirt-4.2.7
Assignee: Eyal Shenitzky
QA Contact: Elad
URL:
Whiteboard:
Duplicates: 1626426
Depends On:
Blocks:
 
Reported: 2018-07-24 07:06 UTC by wang_meng@massclouds.com
Modified: 2018-12-23 14:27 UTC
CC List: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-11-02 14:38:01 UTC
oVirt Team: Storage
Embargoed:
rule-engine: ovirt-4.2+
ebenahar: testing_plan_complete+


Attachments
The ovirt logs including ovirt-engine.log and vdsm.log (1.96 MB, text/plain)
2018-08-08 08:40 UTC, wang_meng@massclouds.com


Links
oVirt gerrit 94034 (last updated 2018-09-05 10:15:02 UTC)
oVirt gerrit 94037 (last updated 2018-09-05 10:14:38 UTC)

Description wang_meng@massclouds.com 2018-07-24 07:06:56 UTC
Description of problem:
    Creating a template from a snapshot of a virtual machine fails when the snapshot was taken with its memory state saved. The Event log of the virtual machine says:
    Failed to create Template snap1_template or its disks from VM <UNKNOWN>

Version-Release number of selected component (if applicable):
    4.2.3

How reproducible:
    always

Steps to Reproduce:
1. Create a virtual machine named centos_vm, then install CentOS.
2. Start the virtual machine (centos_vm).
3. Create a snapshot of centos_vm named centos_vm_snap1. When you
   create the snapshot, select the "keep memory" option.
4. Select the created snapshot and click the "create template" button
   to create a template (a scripted equivalent via the Python SDK is
   sketched below).
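
For reference, a minimal sketch of the same reproduction via the oVirt Python
SDK (ovirt-engine-sdk-python v4); the engine URL, credentials, and UUIDs are
placeholders, not values from this report:

# Create a template from a specific VM snapshot: the API equivalent of the
# "create template" button in step 4.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url="https://engine.example.com/ovirt-engine/api",  # placeholder
    username="admin@internal",
    password="secret",
    insecure=True,
)
templates_service = connection.system_service().templates_service()
templates_service.add(
    types.Template(
        name="snap1_template",
        vm=types.Vm(
            id="<centos_vm uuid>",  # placeholder
            snapshots=[types.Snapshot(id="<centos_vm_snap1 uuid>")],  # placeholder
        ),
    ),
)
connection.close()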
   
Actual results:
   The template creation fails.

Expected results:
   The template creation succeeds.

Additional info:
   The odd thing is that cloning the snapshot succeeds under the same
conditions, so I traced the vdsm logs of both operations.
   The only difference seems to be the srcVolUUID parameter passed to the copyImage function.
   When creating a template, srcVolUUID is the volume currently in use by the running VM, so the "qemu-img convert" command always fails. I guess that is why the template creation fails.
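
For context: qemu 2.10 introduced image locking, so qemu-img refuses to open
an image on which a running QEMU process holds a write lock. A minimal sketch
of the failing call (the paths are placeholders standing in for the volume
paths in the log below):

# qemu-img >= 2.10 locks images it opens; converting the leaf volume that a
# running VM still holds open fails with the "write" lock error seen below.
import subprocess

# Placeholder for the active leaf volume of the running VM.
ACTIVE_VOLUME = "/rhev/data-center/.../154a5b7d-54c6-45ed-a508-80e07ab2fded"
DEST = "/tmp/template-copy.raw"

result = subprocess.run(
    ["qemu-img", "convert", "-p", "-t", "none", "-T", "none",
     "-f", "qcow2", ACTIVE_VOLUME, "-O", "raw", DEST],
    capture_output=True, text=True,
)
if result.returncode != 0:
    # Expected while the VM runs:
    # qemu-img: Could not open '...': Failed to get shared "write" lock
    print(result.stderr)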
   The key log excerpts follow.
(1) The key log of the clone:
2018-07-23 17:03:41,650+0800 INFO  (jsonrpc/7) [vdsm.api] START copyImage(sdUUID=u'3f8ab5a8-1310-406b-a43a-5bb1f66bc64d', spUUID=u'aa434ca2-8b94-11e8-90e3-00163e6a7752', vmUUID='', srcImgUUID=u'f3156af3-a1ea-40f3-84d2-233a25c087f1', srcVolUUID=u'3a558897-fa61-4a6e-ab7e-28a2d0783e28', dstImgUUID=u'07d6524a-e7e3-4ec2-af5f-0b3678a90e2f', dstVolUUID=u'67a5e08d-e856-4434-802c-b323b93d6c1a', description=u'', dstSdUUID=u'3f8ab5a8-1310-406b-a43a-5bb1f66bc64d', volType=8, volFormat=5, preallocate=2, postZero=u'false', force=u'false', discard=False) from=::ffff:192.168.105.59,44102, flow_id=c2c9ea84-2079-4184-b89c-e9f8e6b8466f, task_id=2887c7bd-616d-47ca-b72b-d974c0822e78 (api:46)
2018-07-23 17:03:41,659+0800 INFO  (jsonrpc/7) [storage.Image] image f3156af3-a1ea-40f3-84d2-233a25c087f1 in domain 3f8ab5a8-1310-406b-a43a-5bb1f66bc64d has vollist [u'154a5b7d-54c6-45ed-a508-80e07ab2fded', u'3a558897-fa61-4a6e-ab7e-28a2d0783e28'] (image:311)
2018-07-23 17:03:41,680+0800 INFO  (jsonrpc/7) [storage.Image] Current chain=3a558897-fa61-4a6e-ab7e-28a2d0783e28 < 154a5b7d-54c6-45ed-a508-80e07ab2fded (top)  (image:698)
2018-07-23 17:03:41,716+0800 INFO  (jsonrpc/7) [vdsm.api] FINISH copyImage return=None from=::ffff:192.168.105.59,44102, flow_id=c2c9ea84-2079-4184-b89c-e9f8e6b8466f, task_id=2887c7bd-616d-47ca-b72b-d974c0822e78 (api:52)
2018-07-23 17:03:41,784+0800 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call Volume.copy succeeded in 0.13 seconds (__init__:573)
2018-07-23 17:03:41,784+0800 INFO  (tasks/2) [storage.ThreadPool.WorkerThread] START task 2887c7bd-616d-47ca-b72b-d974c0822e78 (cmd=<bound method Task.commit of <vdsm.storage.task.Task instance at 0x7f33e4714ab8>>, args=None) (threadPool:208)
2018-07-23 17:03:41,851+0800 INFO  (tasks/2) [storage.Image] sdUUID=3f8ab5a8-1310-406b-a43a-5bb1f66bc64d vmUUID= srcImgUUID=f3156af3-a1ea-40f3-84d2-233a25c087f1 srcVolUUID=3a558897-fa61-4a6e-ab7e-28a2d0783e28 dstImgUUID=07d6524a-e7e3-4ec2-af5f-0b3678a90e2f dstVolUUID=67a5e08d-e856-4434-802c-b323b93d6c1a dstSdUUID=3f8ab5a8-1310-406b-a43a-5bb1f66bc64d volType=8 volFormat=RAW preallocate=SPARSE force=False postZero=False discard=False (image:720)
2018-07-23 17:03:41,860+0800 INFO  (tasks/2) [storage.VolumeManifest] Volume: preparing volume 3f8ab5a8-1310-406b-a43a-5bb1f66bc64d/3a558897-fa61-4a6e-ab7e-28a2d0783e28 (volume:559)
2018-07-23 17:03:41,864+0800 INFO  (tasks/2) [storage.Image] copy source 3f8ab5a8-1310-406b-a43a-5bb1f66bc64d:f3156af3-a1ea-40f3-84d2-233a25c087f1:3a558897-fa61-4a6e-ab7e-28a2d0783e28 size 6291456 blocks destination 3f8ab5a8-1310-406b-a43a-5bb1f66bc64d:07d6524a-e7e3-4ec2-af5f-0b3678a90e2f:67a5e08d-e856-4434-802c-b323b93d6c1a allocating 6291456 blocks (image:763)
2018-07-23 17:03:41,865+0800 INFO  (tasks/2) [storage.Image] image 07d6524a-e7e3-4ec2-af5f-0b3678a90e2f in domain 3f8ab5a8-1310-406b-a43a-5bb1f66bc64d has vollist [] (image:311)
2018-07-23 17:03:41,866+0800 INFO  (tasks/2) [storage.StorageDomain] Create placeholder /rhev/data-center/mnt/glusterSD/chost55.node:_high__pool/3f8ab5a8-1310-406b-a43a-5bb1f66bc64d/images/07d6524a-e7e3-4ec2-af5f-0b3678a90e2f for image's volumes (sd:1244)
2018-07-23 17:03:41,918+0800 INFO  (tasks/2) [storage.Volume] Creating volume 67a5e08d-e856-4434-802c-b323b93d6c1a (volume:1185)
2018-07-23 17:03:42,038+0800 INFO  (tasks/2) [storage.Volume] Request to create RAW volume /rhev/data-center/mnt/glusterSD/chost55.node:_high__pool/3f8ab5a8-1310-406b-a43a-5bb1f66bc64d/images/07d6524a-e7e3-4ec2-af5f-0b3678a90e2f/67a5e08d-e856-4434-802c-b323b93d6c1a with size = 20480 sectors (fileVolume:462)
2018-07-23 17:03:42,038+0800 INFO  (tasks/2) [storage.Volume] Changing volume u'/rhev/data-center/mnt/glusterSD/chost55.node:_high__pool/3f8ab5a8-1310-406b-a43a-5bb1f66bc64d/images/07d6524a-e7e3-4ec2-af5f-0b3678a90e2f/67a5e08d-e856-4434-802c-b323b93d6c1a' permission to 0660 (fileVolume:479)
2018-07-23 17:03:42,227+0800 INFO  (tasks/2) [storage.VolumeManifest] Volume: preparing volume 3f8ab5a8-1310-406b-a43a-5bb1f66bc64d/67a5e08d-e856-4434-802c-b323b93d6c1a (volume:559)
2018-07-23 17:03:42,232+0800 WARN  (tasks/2) [QemuImg] yyyy cmd(['/usr/bin/qemu-img', 'convert', '-p', '-t', 'none', '-T', 'none', '-f', 'raw', u'/rhev/data-center/mnt/glusterSD/chost55.node:_high__pool/3f8ab5a8-1310-406b-a43a-5bb1f66bc64d/images/f3156af3-a1ea-40f3-84d2-233a25c087f1/3a558897-fa61-4a6e-ab7e-28a2d0783e28', '-O', 'raw', u'/rhev/data-center/mnt/glusterSD/chost55.node:_high__pool/3f8ab5a8-1310-406b-a43a-5bb1f66bc64d/images/07d6524a-e7e3-4ec2-af5f-0b3678a90e2f/67a5e08d-e856-4434-802c-b323b93d6c1a']) (qemuimg:222)
(2) The key log of the template creation:
2018-07-23 17:13:18,572+0800 INFO  (jsonrpc/1) [vdsm.api] START copyImage(sdUUID=u'3f8ab5a8-1310-406b-a43a-5bb1f66bc64d', spUUID=u'aa434ca2-8b94-11e8-90e3-00163e6a7752', vmUUID='', srcImgUUID=u'f3156af3-a1ea-40f3-84d2-233a25c087f1', srcVolUUID=u'154a5b7d-54c6-45ed-a508-80e07ab2fded', dstImgUUID=u'd13e3331-c1f9-4178-a00a-9a46c1c79169', dstVolUUID=u'fa7db252-fee1-4ee3-a21c-a4e7a46b6e2b', description=u'{"DiskAlias":"centos_Disk1","DiskDescription":""}', dstSdUUID=u'3f8ab5a8-1310-406b-a43a-5bb1f66bc64d', volType=6, volFormat=5, preallocate=2, postZero=u'false', force=u'false', discard=False) from=::ffff:192.168.105.59,44102, flow_id=95e1b052-af9d-420d-a807-984740ec6eab, task_id=d08fa1d7-4393-404a-bc97-81889520f1f3 (api:46)
2018-07-23 17:13:18,581+0800 INFO  (jsonrpc/1) [storage.Image] image f3156af3-a1ea-40f3-84d2-233a25c087f1 in domain 3f8ab5a8-1310-406b-a43a-5bb1f66bc64d has vollist [u'154a5b7d-54c6-45ed-a508-80e07ab2fded', u'3a558897-fa61-4a6e-ab7e-28a2d0783e28'] (image:311)
2018-07-23 17:13:18,603+0800 INFO  (jsonrpc/1) [storage.Image] Current chain=3a558897-fa61-4a6e-ab7e-28a2d0783e28 < 154a5b7d-54c6-45ed-a508-80e07ab2fded (top)  (image:698)
2018-07-23 17:13:18,605+0800 INFO  (jsonrpc/1) [IOProcessClient] Starting client ioprocess-25 (__init__:308)
2018-07-23 17:13:18,614+0800 INFO  (ioprocess/30772) [IOProcess] Starting ioprocess (__init__:437)
2018-07-23 17:13:18,650+0800 INFO  (jsonrpc/1) [vdsm.api] FINISH copyImage return=None from=::ffff:192.168.105.59,44102, flow_id=95e1b052-af9d-420d-a807-984740ec6eab, task_id=d08fa1d7-4393-404a-bc97-81889520f1f3 (api:52)
2018-07-23 17:13:18,719+0800 INFO  (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call Volume.copy succeeded in 0.15 seconds (__init__:573)
2018-07-23 17:13:18,720+0800 INFO  (tasks/1) [storage.ThreadPool.WorkerThread] START task d08fa1d7-4393-404a-bc97-81889520f1f3 (cmd=<bound method Task.commit of <vdsm.storage.task.Task instance at 0x7f33e44b2f38>>, args=None) (threadPool:208)
2018-07-23 17:13:18,773+0800 INFO  (tasks/1) [storage.Image] sdUUID=3f8ab5a8-1310-406b-a43a-5bb1f66bc64d vmUUID= srcImgUUID=f3156af3-a1ea-40f3-84d2-233a25c087f1 srcVolUUID=154a5b7d-54c6-45ed-a508-80e07ab2fded dstImgUUID=d13e3331-c1f9-4178-a00a-9a46c1c79169 dstVolUUID=fa7db252-fee1-4ee3-a21c-a4e7a46b6e2b dstSdUUID=3f8ab5a8-1310-406b-a43a-5bb1f66bc64d volType=6 volFormat=RAW preallocate=SPARSE force=False postZero=False discard=False (image:720)
2018-07-23 17:13:18,782+0800 INFO  (tasks/1) [storage.VolumeManifest] Volume: preparing volume 3f8ab5a8-1310-406b-a43a-5bb1f66bc64d/154a5b7d-54c6-45ed-a508-80e07ab2fded (volume:559)
2018-07-23 17:13:18,788+0800 INFO  (tasks/1) [storage.VolumeManifest] Volume: preparing volume 3f8ab5a8-1310-406b-a43a-5bb1f66bc64d/3a558897-fa61-4a6e-ab7e-28a2d0783e28 (volume:559)
2018-07-23 17:13:18,791+0800 INFO  (tasks/1) [storage.Image] copy source 3f8ab5a8-1310-406b-a43a-5bb1f66bc64d:f3156af3-a1ea-40f3-84d2-233a25c087f1:154a5b7d-54c6-45ed-a508-80e07ab2fded size 6291456 blocks destination 3f8ab5a8-1310-406b-a43a-5bb1f66bc64d:d13e3331-c1f9-4178-a00a-9a46c1c79169:fa7db252-fee1-4ee3-a21c-a4e7a46b6e2b allocating 6291456 blocks (image:763)
2018-07-23 17:13:18,792+0800 INFO  (tasks/1) [storage.Image] image d13e3331-c1f9-4178-a00a-9a46c1c79169 in domain 3f8ab5a8-1310-406b-a43a-5bb1f66bc64d has vollist [] (image:311)
2018-07-23 17:13:18,793+0800 INFO  (tasks/1) [storage.StorageDomain] Create placeholder /rhev/data-center/mnt/glusterSD/chost55.node:_high__pool/3f8ab5a8-1310-406b-a43a-5bb1f66bc64d/images/d13e3331-c1f9-4178-a00a-9a46c1c79169 for image's volumes (sd:1244)
2018-07-23 17:13:18,840+0800 INFO  (tasks/1) [storage.Volume] Creating volume fa7db252-fee1-4ee3-a21c-a4e7a46b6e2b (volume:1185)
2018-07-23 17:13:18,972+0800 INFO  (tasks/1) [storage.Volume] Request to create RAW volume /rhev/data-center/mnt/glusterSD/chost55.node:_high__pool/3f8ab5a8-1310-406b-a43a-5bb1f66bc64d/images/d13e3331-c1f9-4178-a00a-9a46c1c79169/fa7db252-fee1-4ee3-a21c-a4e7a46b6e2b with size = 20480 sectors (fileVolume:462)
2018-07-23 17:13:18,973+0800 INFO  (tasks/1) [storage.Volume] Changing volume u'/rhev/data-center/mnt/glusterSD/chost55.node:_high__pool/3f8ab5a8-1310-406b-a43a-5bb1f66bc64d/images/d13e3331-c1f9-4178-a00a-9a46c1c79169/fa7db252-fee1-4ee3-a21c-a4e7a46b6e2b' permission to 0660 (fileVolume:479)
2018-07-23 17:13:19,181+0800 INFO  (tasks/1) [storage.VolumeManifest] Volume: preparing volume 3f8ab5a8-1310-406b-a43a-5bb1f66bc64d/fa7db252-fee1-4ee3-a21c-a4e7a46b6e2b (volume:559)
2018-07-23 17:13:19,187+0800 WARN  (tasks/1) [QemuImg] yyyy cmd(['/usr/bin/qemu-img', 'convert', '-p', '-t', 'none', '-T', 'none', '-f', 'qcow2', u'/rhev/data-center/mnt/glusterSD/chost55.node:_high__pool/3f8ab5a8-1310-406b-a43a-5bb1f66bc64d/images/f3156af3-a1ea-40f3-84d2-233a25c087f1/154a5b7d-54c6-45ed-a508-80e07ab2fded', '-O', 'raw', u'/rhev/data-center/mnt/glusterSD/chost55.node:_high__pool/3f8ab5a8-1310-406b-a43a-5bb1f66bc64d/images/d13e3331-c1f9-4178-a00a-9a46c1c79169/fa7db252-fee1-4ee3-a21c-a4e7a46b6e2b']) (qemuimg:222)
...........................................................................cut some log................................................
2018-07-23 17:13:19,287+0800 INFO  (jsonrpc/2) [api] FINISH getStats error=Virtual machine does not exist: {'vmId': u'cc61c254-bea9-4de6-87ec-647f3b5463a4'} (api:127)
...
    for data in self._operation.watch():
...
  File "/usr/lib/python2.7/site-packages/vdsm/storage/operation.py", line 178, in _finalize
...
2018-07-23 17:13:19,348+0800 ERROR (tasks/1) [storage.Image] Unexpected error (image:835)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/image.py", line 823, in copyCollapsed
    raise se.CopyImageError(str(e))
CopyImageError: low level Image copy failed: ('Command [\'/usr/bin/taskset\', \'--cpu-list\', \'0-47\', \'/usr/bin/nice\', \'-n\', \'19\', \'/usr/bin/ionice\', \'-c\', \'3\', \'/usr/bin/qemu-img\', \'convert\', \'-p\', \'-t\', \'none\', \'-T\', \'none\', \'-f\', \'qcow2\', u\'/rhev/data-center/mnt/glusterSD/chost55.node:_high__pool/3f8ab5a8-1310-406b-a43a-5bb1f66bc64d/images/f3156af3-a1ea-40f3-84d2-233a25c087f1/154a5b7d-54c6-45ed-a508-80e07ab2fded\', \'-O\', \'raw\', u\'/rhev/data-center/mnt/glusterSD/chost55.node:_high__pool/3f8ab5a8-1310-406b-a43a-5bb1f66bc64d/images/d13e3331-c1f9-4178-a00a-9a46c1c79169/fa7db252-fee1-4ee3-a21c-a4e7a46b6e2b\'] failed with rc=1 out=\'\' err=bytearray(b\'qemu-img: Could not open \\\'/rhev/data-center/mnt/glusterSD/chost55.node:_high__pool/3f8ab5a8-1310-406b-a43a-5bb1f66bc64d/images/f3156af3-a1ea-40f3-84d2-233a25c087f1/154a5b7d-54c6-45ed-a508-80e07ab2fded\\\': Failed to get shared "write" lock\\nIs another process using the image?\\n\')',)
2018-07-23 17:13:19,348+0800 ERROR (tasks/1) [storage.TaskManager.Task] (Task='d08fa1d7-4393-404a-bc97-81889520f1f3') Unexpected error (task:875)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run
    return fn(*args, **kargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 336, in run
    return self.cmd(*self.argslist, **self.argsdict)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", line 79, in wrapper
    return method(self, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line 1623, in copyImage
    postZero, force, discard)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/image.py", line 823, in copyCollapsed
    raise se.CopyImageError(str(e))
CopyImageError: low level Image copy failed: ('Command [\'/usr/bin/taskset\', \'--cpu-list\', \'0-47\', \'/usr/bin/nice\', \'-n\', \'19\', \'/usr/bin/ionice\', \'-c\', \'3\', \'/usr/bin/qemu-img\', \'convert\', \'-p\', \'-t\', \'none\', \'-T\', \'none\', \'-f\', \'qcow2\', u\'/rhev/data-center/mnt/glusterSD/chost55.node:_high__pool/3f8ab5a8-1310-406b-a43a-5bb1f66bc64d/images/f3156af3-a1ea-40f3-84d2-233a25c087f1/154a5b7d-54c6-45ed-a508-80e07ab2fded\', \'-O\', \'raw\', u\'/rhev/data-center/mnt/glusterSD/chost55.node:_high__pool/3f8ab5a8-1310-406b-a43a-5bb1f66bc64d/images/d13e3331-c1f9-4178-a00a-9a46c1c79169/fa7db252-fee1-4ee3-a21c-a4e7a46b6e2b\'] failed with rc=1 out=\'\' err=bytearray(b\'qemu-img: Could not open \\\'/rhev/data-center/mnt/glusterSD/chost55.node:_high__pool/3f8ab5a8-1310-406b-a43a-5bb1f66bc64d/images/f3156af3-a1ea-40f3-84d2-233a25c087f1/154a5b7d-54c6-45ed-a508-80e07ab2fded\\\': Failed to get shared "write" lock\\nIs another process using the image?\\n\')',)
2018-07-23 17:13:19,502+0800 INFO  (tasks/1) [storage.Volume] createVolumeRollback: repoPath=/rhev/data-center/aa434ca2-8b94-11e8-90e3-00163e6a7752 sdUUID=3f8ab5a8-1310-406b-a43a-5bb1f66bc64d imgUUID=d13e3331-c1f9-4178-a00a-9a46c1c79169 volUUID=fa7db252-fee1-4ee3-a21c-a4e7a46b6e2b imageDir=/rhev/data-center/mnt/glusterSD/chost55.node:_high__pool/3f8ab5a8-1310-406b-a43a-5bb1f66bc64d/images/d13e3331-c1f9-4178-a00a-9a46c1c79169 (volume:1111)

Comment 1 Ala Hino 2018-07-24 13:14:12 UTC
Hi,

Could you please share the version of qemu (rpm -qa | grep qemu)?

Comment 2 wang_meng@massclouds.com 2018-07-25 00:33:55 UTC
[root@chost155 ~]# rpm -qa | grep qemu
qemu-guest-agent-2.8.0-2.el7.x86_64
qemu-img-ev-2.10.0-21.el7_5.3.1.x86_64
vdsm-hook-qemucmdline-4.20.23-201807231106.noarch
ipxe-roms-qemu-20170123-1.git4e85b27.el7_4.1.noarch
qemu-kvm-ev-2.10.0-21.el7_5.3.1.x86_64
qemu-kvm-common-ev-2.10.0-21.el7_5.3.1.x86_64
vdsm-hook-faqemu-4.20.23-201807231106.noarch
libvirt-daemon-driver-qemu-3.9.0-14.el7_5.5.x86_64

[root@chost155 ~]# cat /etc/centos-release
CentOS Linux release 7.5.1804 (Core)

Comment 3 Tal Nisan 2018-07-31 11:01:40 UTC
This bug has probably existed since the introduction of memory snapshots. Indeed, a template cannot have memory in its active and only snapshot, and the memory disks should be ignored while creating the template.
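
A hedged sketch of the direction described above (names hypothetical; the
actual fix is Java code in ovirt-engine, see the gerrit links above): when
collecting the source snapshot's disks for the new template, drop the memory
disks and copy only the data disks.

# Hypothetical illustration in Python, not the engine code. oVirt marks
# memory volumes with dedicated disk content types, which is what a filter
# like this would key on.
MEMORY_CONTENT_TYPES = {"memory_dump_volume", "memory_metadata_volume"}

def template_disks(snapshot_disks):
    """Keep only data disks; a template must not carry memory-state disks."""
    return [disk for disk in snapshot_disks
            if disk.get("content_type") not in MEMORY_CONTENT_TYPES]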

Comment 4 shani 2018-08-07 15:34:46 UTC
Hi,

Can you please attach full engine and vdsm logs, which include the whole process (from the VM's creation to the failure you get)?

Also, can you please attach the output of 'lslocks' from the VM's host?

Comment 5 wang_meng@massclouds.com 2018-08-08 08:40:26 UTC
Created attachment 1474194 [details]
The ovirt logs including ovirt-engine.log and vdsm.log

    The logs in the package are ovirt-engine.log and vdsm.log. The chost55*.log is produced by the host where the VM runs. The chost57*.log is produced by the SPM host. You can search for "centos_vm" to find the relevant entries.

Comment 6 wang_meng@massclouds.com 2018-08-08 08:43:57 UTC
Hi,
    1. "Can you please attach full engine and vdsm logs, which include the whole process (from the VM's creation to the failure you get)"
      I created a VM named centos_vm, a snapshot named centos_vm_snapshot,
and finally a template named centos_vm_template. (See attachment.)

    2."can you please attach the output of 'lslocks' from the VM's host?"
[root@chost55 vdsm]# lslocks 
COMMAND           PID  TYPE SIZE MODE  M START END PATH
virtlogd         7439 POSIX   4B WRITE 0     0   0 /run/virtlogd.pid
sanlock          1698 POSIX   5B WRITE 0     0   0 /run/sanlock/sanlock.pid
dmeventd         1425 POSIX   5B WRITE 0     0   0 /run/dmeventd.pid
iscsid           2510 POSIX   5B WRITE 0     0   0 /run/iscsid.pid
master           2906 FLOCK  33B WRITE 0     0   0 /var/spool/postfix/pid/master.pid
master           2906 FLOCK  33B WRITE 0     0   0 /var/lib/postfix/master.lock
glusterfsd      44479 POSIX   6B WRITE 0     0   0 /run/gluster/vols/high_pool/chost55.node-gluster-bricks-sda_data-sda_data.pid
wdmd             1757 POSIX   5B WRITE 0     0   0 /run/wdmd/wdmd.pid
glusterd        28289 POSIX   6B WRITE 0     0   0 /run/glusterd.pid
ovsdb-server    19381 POSIX   6B WRITE 0     0   0 /run/openvswitch/ovsdb-server.pid
ovsdb-server    19381 POSIX   0B WRITE 0     0   0 /etc/openvswitch/.conf.db.~lock~
glusterfsd      44489 POSIX   6B WRITE 0     0   0 /run/gluster/vols/standard_pool/chost55.node-gluster-bricks-sdd_data-sdd_data.pid
python           2503 FLOCK   5B WRITE 0     0   0 /run/glustereventsd.pid
rhsmcertd        2505 FLOCK   0B WRITE 0     0   0 /run/lock/subsys/rhsmcertd
crond            2719 FLOCK   5B WRITE 0     0   0 /run/crond.pid
ovn-controller  19529 POSIX   6B WRITE 0     0   0 /run/openvswitch/ovn-controller.pid
glusterfs        7126 POSIX   5B WRITE 0     0   0 /run/gluster/glustershd/glustershd.pid
multipathd       1188 POSIX   4B WRITE 0     0   0 /run/multipathd/multipathd.pid
libvirtd         2711 POSIX   4B WRITE 0     0   0 /run/libvirtd.pid
ovs-vswitchd    19443 POSIX   6B WRITE 0     0   0 /run/openvswitch/ovs-vswitchd.pid
abrtd            1735 POSIX   5B WRITE 0     0   0 /run/abrt/abrtd.pid
vdsmd            3540 FLOCK   0B WRITE 0     0   0 /run/vdsm/vdsmd.lock
python           2502 FLOCK   4B WRITE 0     0   0 /run/goferd.pid
supervdsmd       2495 FLOCK   0B WRITE 0     0   0 /run/vdsm/supervdsmd.lock

Comment 7 wang_meng@massclouds.com 2018-08-08 08:49:20 UTC
Comment on attachment 1474194 [details]
The ovirt logs including ovirt-engine.log and vdsm.log

  The logs in the package are engine.log and vdsm.log. The chost55*.log is produced
by the host where the VM runs. The chost57*.log is produced by the SPM host.

Comment 9 wang_meng@massclouds.com 2018-08-17 00:55:37 UTC
Hi,
  Are there any problems with the submitted materials?

Comment 12 Tal Nisan 2018-09-16 12:45:34 UTC
*** Bug 1626426 has been marked as a duplicate of this bug. ***

Comment 13 Elad 2018-09-17 12:48:09 UTC
Template creation from a snapshot that has a memory disk succeeds.

Used:
ovirt-engine-4.2.6.5-0.0.master.20180914152430.gitb8a2050.el7.noarch
vdsm-4.20.39-15.gitae7d021.el7.x86_64

Comment 14 Raz Tamir 2018-09-17 13:30:20 UTC
QE verification bot: the bug was verified upstream

Comment 15 Sandro Bonazzola 2018-11-02 14:38:01 UTC
This bugzilla is included in the oVirt 4.2.7 release, published on November 2nd 2018.

Since the problem described in this bug report should be
resolved in the oVirt 4.2.7 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.

