Created attachment 655990 [details]
engine log

[Storage] [Clone VM from snapshot] The destination VM's disk is inactive if one of the source VM's disks is a shared disk and it is inactive.

How to reproduce:
1. A VM with several disks, for example 3 disks.
2. One of the disks is a shared disk and it is inactive.
3. Create a snapshot.
4. Clone a VM from the created snapshot (a rough REST sketch of the same flow follows below).

The result is a VM with 2 disks (the shared disk is not included in the snapshot), but one of the regular disks will be INACTIVE, which causes the VM's OS to crash on boot.

Reproduced 100% on two different environments after upgrade. Does not happen on a clean installation.

engine log attached.
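For reference, the same flow can be driven through the REST API instead of the webadmin UI. The sketch below is only an approximation of the 3.1-era API: the engine URL, credentials, snapshot ID and the exact XML field names are placeholders or my assumptions, so treat it as an illustration of the check rather than a script to run as-is.

import xml.etree.ElementTree as ET
import requests

ENGINE = "https://engine.example.com/api"   # placeholder engine URL
AUTH = ("admin@internal", "password")       # placeholder credentials
VERIFY = False                              # test env with a self-signed cert
SNAPSHOT_ID = "00000000-0000-0000-0000-000000000000"  # snapshot of the source VM

# Ask the engine to clone a new VM from the snapshot.
clone_body = """
<vm>
  <name>cloned_from_snapshot</name>
  <cluster><name>Default</name></cluster>
  <snapshots>
    <snapshot id="{snap}"/>
  </snapshots>
</vm>
""".format(snap=SNAPSHOT_ID)

resp = requests.post(ENGINE + "/vms", data=clone_body,
                     headers={"Content-Type": "application/xml"},
                     auth=AUTH, verify=VERIFY)
resp.raise_for_status()
vm_id = ET.fromstring(resp.content).get("id")

# Once the clone finishes, list its disks: with this bug one of the regular
# disks is reported inactive even though both were active at snapshot time.
disks = requests.get("{0}/vms/{1}/disks".format(ENGINE, vm_id),
                     auth=AUTH, verify=VERIFY)
for disk in ET.fromstring(disks.content).findall("disk"):
    print(disk.findtext("name"), "active =", disk.findtext("active"))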
There are several things that are not clear in the bug:
1. What are the source and target versions of the upgrade (from which version was the upgrade done, and to which version)?
2. Was the scenario described here done after the upgrade or before it? Or maybe part of it before the upgrade and part of it after?
3. Around which hour are the relevant parts in the attached engine.log? It contains info for a large time range...
1. From 3.0 (the last build that was released) to 3.1 si24.4.
2. After the upgrade.
3. I don't remember the hours. You can search for cloneVmFromSnapshot, I guess.
http://gerrit.ovirt.org/#/c/12452/

Note that the problem is wider than described: when adding a VM from a snapshot, the VM device information is taken from the active VM and not from the VM configuration from when the snapshot was taken.
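To make the device-source issue concrete, here is a minimal model. The names below (DiskDevice, devices_from_active_vm, devices_from_snapshot_config) are illustrative only, not the engine's actual classes: the point is that building the device list from the active VM can carry over a disk state that differs from what was recorded at snapshot time, while building it from the snapshot's VM configuration cannot.

# Illustrative model only; not the engine's real code.
from collections import namedtuple

DiskDevice = namedtuple("DiskDevice", ["disk_id", "active"])

def devices_from_active_vm(active_vm_devices):
    """Old behaviour: copy device state from the active VM as it is now."""
    return list(active_vm_devices)

def devices_from_snapshot_config(snapshot_config):
    """Intended behaviour: rebuild device state from the configuration stored
    when the snapshot was taken (shared disks are not part of a snapshot,
    so they simply are not there)."""
    return [DiskDevice(disk_id, active)
            for disk_id, active in snapshot_config.items()]

if __name__ == "__main__":
    # Active VM right now: two regular disks plus an inactive shared disk.
    active_vm = [DiskDevice("disk1", True),
                 DiskDevice("disk2", False),       # differs from snapshot time
                 DiskDevice("shared_disk", False)]

    # VM configuration stored with the snapshot: regular disks only, both active.
    snapshot_config = {"disk1": True, "disk2": True}

    print("from active VM (buggy):  ", devices_from_active_vm(active_vm))
    print("from snapshot (intended):", devices_from_snapshot_config(snapshot_config))

Running it prints disk2 as inactive when the state comes from the active VM and active when it comes from the snapshot configuration, which matches the symptom reported above.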
Verified as fixed in sf13.1.
3.2 has been released