Description of problem: We can take a snapshot with memory in version X and try to restore this snapshot, including its memory, in a later version. This will most likely result in a failure, and even if it does not, the user will most probably encounter errors later on when trying to apply new functionality to this VM. Therefore, we should block it.
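For illustration only, here is a minimal sketch of the kind of check being requested, assuming the engine records the cluster compatibility version a memory snapshot was taken under. All names below are hypothetical (the real ovirt-engine code is Java and is not shown here); the warning text reuses the message quoted in the verification comment further down.

# Hypothetical sketch only -- not the actual ovirt-engine implementation.
# It illustrates the requested behaviour: block (or warn about) restoring a
# memory snapshot taken under an older cluster compatibility version.

def can_restore_memory(snapshot_compat_version, cluster_compat_version):
    """Allow memory restore only when the snapshot was taken under the
    compatibility version the cluster currently runs."""
    if snapshot_compat_version != cluster_compat_version:
        return False, ("Memory restore from different cluster version "
                       "can cause failure")
    return True, ""

# Example: snapshot taken while the cluster was 3.5, cluster since upgraded to 3.6.
allowed, message = can_restore_memory("3.5", "3.6")
print(allowed, message)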
Do you refer to the cluster compatibility version? If so, we should probably also notify the user about this when upgrading clusters. Will it then still be possible to restore a VM with memory? E.g. by copying the snapshot to another (older) cluster, or by downgrading?
(In reply to Yedidyah Bar David from comment #1)
> You refer to cluster compatibility version? If so, we should probably also
> notify user about this when upgrading clusters.

Yes, to the cluster compatibility version. I agree that we should warn the user on cluster upgrade.

> Will it then be possible to still restore vm with memory? E.g. by copying
> the snapshot to another (older) cluster, or by downgrading?

Well, the only way I see is to move the VM to another (older) cluster. There is no way to move only the particular snapshot or to downgrade the compatibility version of the cluster.
Noting that while I agree it's better to block early than fail later, I think we should try hard to make it work (instead, or at least in addition, later). I have no idea what this will take, but if we manage to "fix" the VM to work, it might be possible to also fix the snapshots. I use this (snapshot with memory) quite a lot during development, and if I were a customer building a workflow depending on this feature, I'd find it pretty weird/bad if it did not always work.
also see https://bugzilla.redhat.com/show_bug.cgi?id=1298487#c7
@Yedidyah - you can update oVirt to a newer version, but if you have running VMs (or VMs with a memory snapshot, which is pretty much the same...) you need to keep those VMs in a cluster with the original compatibility version (since the virtual HW in the new compatibility version may be different and a running VM may hardly survive the change). Setting target to 3.6.3, since this is something users may face during an update, so it would be good to fix it soon.
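As a rough sketch of how an admin could spot affected VMs before raising the cluster compatibility version: the snippet below assumes the v4 Python SDK (ovirt-engine-sdk-python) and an engine at a hypothetical URL; attribute and service names are from the v4 API and may not match the 3.6-era SDK.

# Rough sketch, assuming ovirt-engine-sdk-python v4; adjust URL/credentials.
# Lists VMs that have at least one snapshot with saved memory state, i.e. the
# VMs that should stay in a cluster with the original compatibility version.

import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # hypothetical engine URL
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

vms_service = connection.system_service().vms_service()
for vm in vms_service.list():
    snapshots = vms_service.vm_service(vm.id).snapshots_service().list()
    # persist_memorystate is set on snapshots that were taken with memory.
    if any(s.persist_memorystate for s in snapshots):
        print('VM %s has memory snapshots' % vm.name)

connection.close()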
Bug tickets must have version flags set prior to targeting them to a release. Please ask maintainer to set the correct version flags and only then set the target milestone.
(In reply to Red Hat Bugzilla Rules Engine from comment #6)
> Bug tickets must have version flags set prior to targeting them to a
> release. Please ask maintainer to set the correct version flags and only
> then set the target milestone.

Done.
Tested with upgrade from rhevm vt20.2 to rhevm vt20.2:
Steps:
1. Have a VM installed with RHEL 7.2.
2. Create a live snapshot with RAM.
3. Power off the VM.
4. Put the host in maintenance.
5. Upgrade flow:
   1. rhevm updated
   2. Host updated
   3. Cluster updated from 3.5 to 3.6
   4. Data center updated from 3.5 to 3.6
   5. Activate the host
6. Snapshot -> Preview for this same VM opens the attached "Warning_to_user_when_restore_snapshot_after_upgrade1.png" warning, with the Restore Memory checkbox unchecked - which is OK.
But, if doing:
7. Snapshot -> Custom preview snapshot: the snapshot can be restored with memory without getting the attached warning. Such a warning (as attached) should be added for this flow as well (see the API sketch below).
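For reference, step 7 above corresponds roughly to the following preview-with-memory call. This is only a sketch against the v4 Python SDK (preview_snapshot with restore_memory); the 3.6 UI flow in the steps above goes through the older REST API, so names and availability may differ.

# Sketch only, assuming ovirt-engine-sdk-python v4; adjust URL/credentials/IDs.
# Previews a given snapshot together with its saved memory state, i.e. the
# action that currently bypasses the warning in the custom preview dialog.

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # hypothetical engine URL
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

vm_service = connection.system_service().vms_service().vm_service('VM_ID')
snapshot = vm_service.snapshots_service().list()[-1]  # pick the snapshot to preview

# Ask for the saved memory to be restored as well.
vm_service.preview_snapshot(
    snapshot=types.Snapshot(id=snapshot.id),
    restore_memory=True,
)

connection.close()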
Target release should be set once a package build is known to fix an issue. Since this bug is not in MODIFIED, the target version has been reset. Please use the target milestone to plan a fix for an oVirt release.
Created attachment 1129833 [details] Restore warning screen shot
Bugs moved prematurely to ON_QA since they didn't have a target release. Notice that only bugs with a target release set will move to ON_QA.
Ah, automation moved this to MODIFIED sooner than it actually was, which caused confusion, and patch 53974 did not get backported. Moving back to POST and targeting 3.6.6.
Tried to verify on an upgrade from rhevm 3.5 vt20.5 to rhevm 3.6.5-2, following the steps from comment #8. Still, via Snapshot -> Custom preview snapshot, the snapshot can be restored with memory without getting the attached warning. It seems the fix for that flow is missing in 3.6.5-2.
Verified on an rhevm upgrade from version rhevm-3.5.8-0.1 to rhevm-3.6.6.2-0.1, following the steps from comment #8. Found that after the upgrade there is now a warning for custom preview as well, when asking to restore memory: "Memory restore from different cluster version can cause failure". This can also be seen in the attached screenshot from 2016-05-16, which shows both warnings, for the normal preview and for the custom preview. Checked after the upgrade that both normal preview and custom preview with memory work fine.
Created attachment 1157984 [details] screenshot_2016-05-16