Created attachment 940459 [details]
Engine and VDSM logs showing the LSM failure due to the Disk Profile not getting updated

Description of problem:
Live Storage Migration fails when trying to return a disk to its original Storage Domain and Disk Profile, with the error "Error while executing action: Cannot move Virtual Machine Disk. Disk Profile doesn't match provided Storage Domain".

Version-Release number of selected component (if applicable):
3.5 vt3.1

How reproducible:
100%

Steps to Reproduce:
1. Live migrate a disk of a running VM from Storage Domain 1 to Storage Domain 2 (using the new domain's Disk Profile)
2. Once the migration has completed, try to migrate the disk back to Storage Domain 1

Actual results:
An error is thrown immediately: "Error while executing action: Cannot move Virtual Machine Disk. Disk Profile doesn't match provided Storage Domain"

Expected results:
The migration should succeed

Additional info:
The Disk Profile isn't correctly switched as part of the Live Storage Migration (or Move); if you manually change the Disk Profile under the Edit Disk screen, the migration goes through successfully. See attached logs.
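To make the failure mode concrete, here is a minimal Java sketch of one plausible shape of the bug, based only on the behavior described above. All class, field, and method names are invented for illustration; this is not the actual ovirt-engine code. The idea: each disk profile belongs to exactly one storage domain, and if the move flow validates the disk's currently stored profile against the target domain instead of first switching to a profile of the target domain, the move back is rejected with exactly this message:

```java
import java.util.Objects;

// Hypothetical model classes -- names invented for illustration,
// not the real ovirt-engine entities.
class DiskProfile {
    final String id;
    final String storageDomainId; // a disk profile belongs to one storage domain

    DiskProfile(String id, String storageDomainId) {
        this.id = id;
        this.storageDomainId = storageDomainId;
    }
}

class Disk {
    String storageDomainId;
    DiskProfile profile;
}

public class LsmProfileSketch {

    // The kind of validation that would produce the reported error message.
    static void validateProfile(DiskProfile requested, String targetDomainId) {
        if (!Objects.equals(requested.storageDomainId, targetDomainId)) {
            throw new IllegalStateException("Cannot move Virtual Machine Disk. "
                    + "Disk Profile doesn't match provided Storage Domain");
        }
    }

    // Hypothetical buggy move flow: it reuses the disk's stored profile
    // instead of switching to a profile that belongs to the target domain.
    static void moveDisk(Disk disk, String targetDomainId) {
        validateProfile(disk.profile, targetDomainId); // stale profile -> fails
        disk.storageDomainId = targetDomainId;
        // Missing step: disk.profile = <a profile of targetDomainId>;
    }

    public static void main(String[] args) {
        DiskProfile sd2Profile = new DiskProfile("p2", "sd2");

        Disk disk = new Disk();
        disk.storageDomainId = "sd2"; // after step 1 the disk lives on SD2
        disk.profile = sd2Profile;    // and carries SD2's profile

        // Step 2: move back to SD1. The stored profile still belongs to SD2,
        // so validation rejects the move -- matching the reported error.
        moveDisk(disk, "sd1");
    }
}
```

Under this reading, the manual workaround works because editing the disk and picking a profile of the target domain makes the stale reference match before the move is requested.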
Gilad, can you please have a look? It seems like the disk profiles feature introduced a regression in this flow.
Thanks guys.

Not sure it's a blocker though, since in order to hit the bug you have to create a new disk profile.
(In reply to Gilad Chaplik from comment #2)
> Thanks guys.
>
> Not sure it's a blocker though, since in order to hit the bug you have to
> create a new disk profile.

This means that if you use one of 3.5's top new features (disk profiles), you lose the ability to use one of the pre-existing top features (LSM). Sounds like a blocker to me, but ultimately it's a PM decision.
*** Bug 1147274 has been marked as a duplicate of this bug. ***
Gilad, I see the patch was merged to 3.5. Can this BZ be moved to MODIFIED, or are we pending an additional patch?
Thanks Allon, I was AFK; moving to MODIFIED.
This bug's status was moved to MODIFIED before engine vt5 was built, hence moving it to ON_QA. If this was a mistake and the fix isn't in, please contact rhev-integ.
After sending "Move" via the WebUI to start live storage migration, the disk enters the Locked state and nothing further happens to it for over 2 days. Logs and screenshot attached.

Components on which I've tested:
rhevm-3.5.0-0.18.beta.el6ev.noarch
qemu-kvm-rhev-0.12.1.2-2.448.el6.x86_64
libvirt-0.10.2-46.el6_6.1.x86_64
vdsm-4.16.7.2-1.el6ev.x86_64
ovirt-hosted-engine-ha-1.2.4-1.el6ev.noarch
ovirt-hosted-engine-setup-1.2.1-2.el6ev.noarch
sanlock-2.8-1.el6.x86_64
ovirt-host-deploy-1.3.0-1.el6ev.noarch
Created attachment 953152 [details] all logs from engine and hosts
Created attachment 953153 [details] screenshot
Nikolai,

Your host doesn't support live snapshots; there's a bug to block the live move in this case (bug 1105846).

Moving back to ON_QA; this should be verified on hosts that do support it.
Verified in ovirt-engine-3.5.0-0.22.
This bug should not depend on BZ#1105846. As stated in the Description, the first live migration of the disk succeeded, so live migration is supported on that host. This bug should therefore be verified on hosts that support live migration (as was done during verification). Blocking live migration when the host doesn't support it is not relevant to this bug.
RHEV 3.5.0 was released. Closing.