Description of problem:

Relative to bug #1314959, I'm writing a script to migrate the disks of VMs as follows:

1. Copy the disk to iSCSI.
2. Move the disk to a file-based storage domain (NFS).
3. Move the disk back to iSCSI.
4. Attach the new disk to the VM.
5. Detach the old disk from the VM.

For step 1, I do this:

    sd = api.storagedomains.get(name='iSCSI')
    action = params.Action(storage_domain=sd,
                           disk=params.Disk(alias='temporary'),
                           async=False)
    disk.copy(action)

The copy runs synchronously (the prompt does not return until the disk has been entirely copied), which is especially important because I need to wait for the copy to finish in order to get the new disk's UUID. No problem so far.

For step 2, I do the same, but I call move() instead of copy():

    sd = api.storagedomains.get(name='NFS')
    action = params.Action(storage_domain=sd, async=False)
    disk.move(action)

Even though async=False is set explicitly in the Action, the call is asynchronous: the prompt returns immediately after the call. The call to move() must be synchronous here, because the disk has to finish moving to NFS before it can be moved back to iSCSI.

In short: when calling move(), the async=False attribute is not honored (with copy() it behaves as expected).

Version-Release number of selected component (if applicable):

3.6.3.0

Additional info:

As a workaround, I was able to write a "poor man's" wait4unlock method like this:

    from time import sleep

    def wait4unlock(api, diskalias, interval=5):
        # Poll the disk status until it leaves the 'locked' state.
        while True:
            disk = api.disks.get(alias=diskalias)
            if disk.get_status().get_state() == 'ok':
                break
            sleep(interval)
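To make the symptom concrete, the early return can be observed by reading the disk's state immediately after move() comes back. This is only a sketch: the helper name state_right_after is mine, and it assumes the same SDK calls used above (disk.move, api.disks.get, get_status().get_state()):

```python
def state_right_after(api, disk, action, disk_alias):
    """Issue a move and immediately read back the disk's state.

    If async=False were honored, the state should already be 'ok'
    when move() returns; with this bug it is still 'locked',
    because move() returns before the operation has finished.
    """
    disk.move(action)
    refreshed = api.disks.get(alias=disk_alias)
    return refreshed.get_status().get_state()
```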
The "async=False" parameter tells the API to wait until all the activities *reported* by the backend have completed. It is key to understand that *reported* doesn't mean all of them: some ongoing activities (like moving disks) aren't reported by the backend in a way that the API can handle. See bug 1199011 for more details. This isn't going to be fixed soon, so your poor man's wait method is actually the recommended approach. You should use it for all the operations, not just for moving the disk.
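Since the same polling is recommended for every operation, it can be factored into one small helper. This is a sketch under my own naming (run_and_wait is not an SDK function): operation and get_state are callables the caller supplies, e.g. lambda: disk.move(action) and a status lookup like the one in the workaround above.

```python
from time import sleep

def run_and_wait(operation, get_state, timeout=600, interval=5):
    """Run an SDK operation, then poll get_state() until it
    returns 'ok' or the timeout expires."""
    operation()
    waited = 0
    while waited < timeout:
        if get_state() == 'ok':
            return
        sleep(interval)
        waited += interval
    raise RuntimeError("operation still pending after %d seconds" % timeout)
```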
*** This bug has been marked as a duplicate of bug 1199011 ***