+++ This bug is a downstream clone. The original bug is: +++
+++ bug 1494711 +++
======================================================================

Description of problem:

Live Storage Migration was performed against VMs with multiple disks. All volumes were moved successfully to the new storage domain, but in both cases the snapshot deletion for one disk was not even attempted, and the snapshot was left in a locked state.

Version-Release number of selected component (if applicable):
RHEV 4.1.6
RHVH 4.1.6

How reproducible:
Not reproducible.

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:

(Originally by Gordon Watson)
Benny, can you take a look please? (Originally by Allon Mureinik)
Eyal - this BZ was marked as MODIFIED since it was fixed by a patch to an upstream BZ 1484825, which was merged before this one was filed. Can you assist in manually adding it to the erratum, as there's no patch linked to this BZ to drive the process? TIA, Allon
--------------------------------------
Tested with the following code:
--------------------------------------
rhevm-4.1.7.2-0.1.el7.noarch
vdsm-4.19.33-1.el7ev.x86_64

Tested with the following scenario:

Steps to Reproduce:
1. Create a VM with multiple disks.
2. Start live migrating all disks.

Results:
All disks moved to the new storage domain, and all LSM auto-generated snapshots were deleted after LSM completed, as expected.

Moving to VERIFIED!
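As a rough illustration of the verification above, the check amounts to confirming that, once live storage migration (LSM) finishes, no auto-generated LSM snapshot remains and none is stuck in the locked state. The helper below is a hypothetical sketch (not part of the RHV tooling); the snapshot dicts assume the shape one might collect from the oVirt REST API, with `description` and `status` keys.

```python
# Hypothetical post-LSM check: after live storage migration completes,
# no "Auto-generated for Live Storage Migration" snapshot should remain,
# and no snapshot should be left in the "locked" state (the bug here).
def leftover_lsm_snapshots(snapshots):
    """Return the snapshots that LSM should have cleaned up.

    `snapshots` is a list of dicts with 'description' and 'status' keys,
    a shape assumed for illustration only.
    """
    return [
        s for s in snapshots
        if "Auto-generated for Live Storage Migration" in s["description"]
        or s["status"] == "locked"
    ]

# Example: one disk's snapshot was never removed and stayed locked
# (the reported bug); the active layer is untouched, as expected.
snaps = [
    {"description": "Active VM", "status": "ok"},
    {"description": "Auto-generated for Live Storage Migration",
     "status": "locked"},
]
print(len(leftover_lsm_snapshots(snaps)))  # 1 -> cleanup failed for one disk
```

On a verified build, the same scan over a freshly migrated VM would return an empty list.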
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2017:3138
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 500 days