Description of problem:

Given a disk UUID, I'm trying to find out which VM has it attached:

  disk = api.disks.get(id='...')

I see no way to correlate the disk with the VM that actually has it attached. Shouldn't disk.get_vm() or disk.get_vms() return the VMs that have it attached?

Version-Release number of selected component (if applicable):

ovirt-engine-sdk-python (3.6.3.0)
This isn't currently possible because the API server doesn't have any mechanism to find the virtual machines that have a given disk attached. The only alternative is to iterate all the virtual machines and, for each one, check if it has that disk attached:

  # The id of the disk:
  disk_id = ...

  # The ids of the virtual machines that have the disk attached:
  vm_ids = []

  # Iterate all the virtual machines and find those that have the
  # disk attached:
  vms = api.vms.list()
  for vm in vms:
      disks = vm.disks.list()
      for disk in disks:
          if disk.get_id() == disk_id:
              vm_ids.append(vm.get_id())

  # Do something with the virtual machine identifiers:
  for vm_id in vm_ids:
      print(vm_id)

This, as you can imagine, is very inefficient if you have many virtual machines.

Nicolas, can you explain why you need that? Maybe we can find an alternative way to achieve what you need.

If we decide to implement this, then I suggest we do so by adding support for searching by disk id to the top-level virtual machines collection:

  GET /vms?search=disk.id=12345

When/if that is added, the SDKs will automatically support it:

  api.vms.list(query="disk.id=12345")
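If the script needs to answer the disk-to-VM question for many disks, a single pass over the virtual machines can build a reverse index, avoiding a full re-iteration per lookup. A minimal sketch: the plain dicts below are mock stand-ins for SDK objects (with the real SDK the outer list would come from api.vms.list() and each VM's disks from vm.disks.list()):

```python
# Mock data standing in for the SDK's VM objects:
vms = [
    {"id": "vm-1", "disk_ids": ["disk-a", "disk-b"]},
    {"id": "vm-2", "disk_ids": ["disk-b"]},
    {"id": "vm-3", "disk_ids": ["disk-c"]},
]

# Build the disk-id -> vm-ids index once, in a single pass:
vms_by_disk = {}
for vm in vms:
    for disk_id in vm["disk_ids"]:
        vms_by_disk.setdefault(disk_id, []).append(vm["id"])

# Every subsequent lookup is now a dictionary access:
print(vms_by_disk.get("disk-b", []))  # ['vm-1', 'vm-2']
```

This doesn't remove the cost of the initial iteration, but it pays it only once per run instead of once per disk.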
Actually this comes as a consequence of bug 1314959 (https://bugzilla.redhat.com/show_bug.cgi?id=1314959). In summary: we're migrating storage from GlusterFS to iSCSI and all our disks are thin provisioned, and it seems that currently the only way to keep thin provisioning on the target storage is:

1. Copy the disk to iSCSI.
2. Move the disk to a file-based storage (NFS).
3. Move the disk back to iSCSI.
4. Attach the new disk to the VM.
5. Detach the old disk from the VM.

As you can see, this is only a temporary need for us until we migrate all machines, but as we have 300+ machines we want to do this automatically via a script launched by cron each night.

Our problem is in step 1 (copying):

  disk2copy = api.disks.get(id='...')
  action = params.Action(storage_data=...)
  disk2copy.copy(action)

This works, but at this point we lose track of the new disk (we don't know its ID). My original idea (although probably not the best) was to take advantage of the fact that all our disks are attached: any copied disk will be detached, so finding a disk that has no VMs associated would identify the newly copied disk. I used an approach pretty similar to the one you provided:

1. attached = []
2. Iterate over VMs and save their disks' IDs in attached.
3. Iterate over templates and save their disks' IDs in attached.
4. The detached disks (not associated to any VM) are: api.disks.list() - attached

That works, but as you might imagine, when working with 300+ machines each "search" takes about 5-7 minutes. So this is rather a temporary script, but maybe you'd find it useful to have such a feature to know which disks are not attached to anything (for example, to find useless disks that might be deleted).
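The set-difference search in steps 1-4 above can be sketched with plain Python sets. The literal IDs here are mock data; in the real script the three lists would be collected from api.vms.list(), api.templates.list() and api.disks.list(), reading each object's get_id():

```python
# IDs of disks found attached while iterating VMs and templates
# (mock data standing in for the SDK iteration):
vm_disk_ids = ["disk-a", "disk-b"]
template_disk_ids = ["disk-c"]

# All disks in the system (mock stand-in for api.disks.list()):
all_disk_ids = ["disk-a", "disk-b", "disk-c", "disk-d"]

# Steps 1-3: collect every attached disk id into one set.
attached = set(vm_disk_ids) | set(template_disk_ids)

# Step 4: detached disks are everything not in that set.
detached = [d for d in all_disk_ids if d not in attached]
print(detached)  # ['disk-d']
```

Using a set for the membership test keeps step 4 linear in the number of disks, which matters at the 300+ machine scale described above.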
One thing you can try is to assign a unique alias to the copy of the disk when you create it:

  copy_alias = "copy-of-%s" % disk2copy.get_id()
  action = params.Action(
      disk=params.Disk(
          alias=copy_alias,
      ),
      ...
  )
  disk2copy.copy(action)

Later, when you need to find that copy, you can search within the top-level disks collection, using the alias as the search criteria:

  copy_disk = api.disks.list(query="alias=%s" % copy_alias)

That should give you the data of the disk that has just been copied.
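The alias round-trip above can be illustrated with mock data: derive the alias from the source disk's ID, tag the copy with it, then recover the copy by filtering on that alias. The dicts stand in for SDK Disk objects; a real script would pass the alias via params.Disk(...) and search with api.disks.list(query=...):

```python
# The source disk id (a hypothetical example UUID):
source_disk_id = "123e4567-e89b-12d3-a456-426655440000"
copy_alias = "copy-of-%s" % source_disk_id

# Pretend state of the disks collection after the copy finished
# (mock stand-in for what api.disks.list() would return):
disks = [
    {"id": "aaa", "alias": "original-disk"},
    {"id": "bbb", "alias": copy_alias},
]

# Equivalent of api.disks.list(query="alias=%s" % copy_alias):
matches = [d for d in disks if d["alias"] == copy_alias]
print(matches[0]["id"])  # bbb
```

Because the alias embeds the source disk's UUID, it is unique per copy, so the search returns exactly the new disk and no iteration over VMs is needed.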
Fair enough, that should do it much more efficiently than my initial approach. Thank you!!
The relationship exists in the DB, so it shouldn't be too much of a hardship to expose it in some sensible way. I'm not slating this for 4.0, as comment 3 suggests a viable approach and comment 4 seems to suggest it's acceptable, but it's a capability we should add at some point in the future regardless.
Closing old RFEs. If relevant, please re-open and explain why. As always- patches are welcomed!