+++ This bug was initially created as a clone of Bug #1127644 +++
RHEVM should provide a REST API that returns the XML configuration that was used during startup of a running VM.
According to Liron Aravot's comment on bug 1127644, RHEVM already has such an API. It should be documented both in oVirt's wiki (http://wiki.ovirt.org) and in RHEV's official documentation.
What we currently have is the API to get the configuration of a VM snapshot; a snapshot can be taken for the configuration only (without the disks), which should satisfy this request.
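As a rough illustration, fetching such a configuration-only snapshot through the REST API could look like the sketch below. The endpoint path, the All-Content header, and all identifiers are assumptions for illustration, not details confirmed by this bug.

```python
from urllib.parse import urljoin

def snapshot_config_request(base_url, vm_id, snapshot_id):
    """Build the GET request (URL + headers) that would return a
    snapshot's configuration. The path layout and the All-Content
    header are assumptions about the RHEV 3.x REST API."""
    url = urljoin(base_url, f"api/vms/{vm_id}/snapshots/{snapshot_id}")
    # All-Content asks the server to include the full configuration payload.
    headers = {"All-Content": "true", "Accept": "application/xml"}
    return url, headers

url, headers = snapshot_config_request(
    "https://rhevm.example.com/",
    "14f61174-4ad6-4928-affd-c3667b2b2d9d",  # VM id (illustrative)
    "snap-1",                                # snapshot id (illustrative)
)
print(url)
```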
The main purpose of this request is to know the mapping between a disk's source on the host and the disk name that appears in the guest, i.e. which disk on the host is mapped to which SCSI device in the guest. This information is available in the vdsm log, in the XML printed during VM start, but it would be useful to provide a REST API for the same information (similar to virsh dumpxml). This is especially useful for type 'file' disks, which are emulated by the qemu SCSI layer. It is okay if this info is not 'active' but just depicts the XML used during the start of a VM.
An example: a GET on /api/vm/<id>/xml would return
<target name="com.redhat.rhevm.vdsm" type="virtio"/>
<source mode="bind" path="/var/lib/libvirt/qemu/channels/14f61174-4ad6-4928-affd-c3667b2b2d9d.com.redhat.rhevm.vdsm"/>
<target name="org.qemu.guest_agent.0" type="virtio"/>
<source mode="bind" path="/var/lib/libvirt/qemu/channels/14f61174-4ad6-4928-affd-c3667b2b2d9d.org.qemu.guest_agent.0"/>
<input bus="ps2" type="mouse"/>
<target name="com.redhat.spice.0" type="virtio"/>
<disk device="disk" snapshot="no" type="file">
  <address bus="0x00" domain="0x0000" function="0x0" slot="0x05" type="pci"/>
  <target bus="virtio" dev="vda"/>
  <driver cache="none" error_policy="stop" io="threads" name="qemu" type="raw"/>
</disk>
<disk device="disk" snapshot="no" type="block">
  <address bus="0x00" domain="0x0000" function="0x0" slot="0x06" type="pci"/>
  <target bus="virtio" dev="vdb"/>
  <driver cache="none" error_policy="stop" io="native" name="qemu" type="raw"/>
</disk>
<topology cores="1" sockets="160" threads="1"/>
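The kind of mapping this request asks for could be derived from such a dump with the standard library, as in the sketch below. The disk <source> elements (the host-side paths) are elided in the excerpt above, so the sketch maps the guest device name to what is present: the disk type and PCI slot. The embedded XML is a trimmed, illustrative version of the excerpt.

```python
import xml.etree.ElementTree as ET

# Trimmed excerpt modeled on the domain XML above; a real dump would
# also carry a <source> child with the host-side path of each disk.
DOMAIN_XML = """\
<devices>
  <disk device="disk" snapshot="no" type="file">
    <address bus="0x00" domain="0x0000" function="0x0" slot="0x05" type="pci"/>
    <target bus="virtio" dev="vda"/>
  </disk>
  <disk device="disk" snapshot="no" type="block">
    <address bus="0x00" domain="0x0000" function="0x0" slot="0x06" type="pci"/>
    <target bus="virtio" dev="vdb"/>
  </disk>
</devices>
"""

def disk_map(xml_text):
    """Map guest device name -> (disk type, PCI slot) from domain XML."""
    mapping = {}
    for disk in ET.fromstring(xml_text).iter("disk"):
        dev = disk.find("target").get("dev")
        slot = disk.find("address").get("slot")
        mapping[dev] = (disk.get("type"), slot)
    return mapping

print(disk_map(DOMAIN_XML))
# {'vda': ('file', '0x05'), 'vdb': ('block', '0x06')}
```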
Michal, this seems to be more down your team's alley than mine.
Can you take a look please?
How does this relate to the fact that we are adding the mapping of disks based on serial number?
The XML doesn't guarantee the mapping; it's just a hint, AFAIK, and the guest OS doesn't have to honor it.
(Not disputing the VM XML API call, just the fact that the previous comment describes this as the purpose of the request.)
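For context on the serial-number approach mentioned above: inside the guest, udev exposes disk serials as /dev/disk/by-id entries (virtio-blk disks typically appear as virtio-<serial>). Below is a minimal sketch of deriving a serial -> guest device mapping from such entry names; the entry names and serials are made up for illustration.

```python
# Illustrative /dev/disk/by-id listing: symlink name -> resolved device.
# Names follow the udev "virtio-<serial>" convention for virtio-blk disks.
BY_ID = {
    "virtio-14f61174-4ad6-4928-a": "vda",
    "virtio-27a52285-5be7-5a39-b": "vdb",
}

def serial_to_device(by_id):
    """Strip the bus prefix from each by-id name to get serial -> device."""
    out = {}
    for name, dev in by_id.items():
        _prefix, _, serial = name.partition("-")
        out[serial] = dev
    return out

print(serial_to_device(BY_ID))
# {'14f61174-4ad6-4928-a': 'vda', '27a52285-5be7-5a39-b': 'vdb'}
```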
Veritas guys - can you answer Michal's question please?
Veritas guys, we are aware of that issue and we are handling it in https://bugzilla.redhat.com/show_bug.cgi?id=1063597 .
Seems to me like this one can be closed as a duplicate, if that was the purpose here.
Thank you for your response. I believe the GUID is applicable to virtio-scsi as well. We will try the encoding approach and get back to you in case of any queries.
This incident can be closed as a duplicate if the mentioned incident is applicable to virtio-scsi as well.
I suppose bug 1063597 handles the mapping of SCSI/virtio disks from the host to the guest. This information should be available via the REST API on the host. Is there any way to fetch the serial number of disks added via custom properties (hooks) from the REST API?
RHEV will return the disk mapping information for disks managed by RHEV (added through the webadmin/REST/etc.).
If I understand correctly, you are referring to disks that are not managed by RHEV and that are attached to a guest manually by using the hooks. Currently RHEV doesn't display info on such disks, so you won't be able to get the mapping info for those, only for disks managed by RHEV.
Was that your meaning?
Thank you for your response. Yes, you are correct. We are unable to get the serial number of a disk added via hooks (and not managed by RHEV).
Currently that's not supported: RHEV doesn't display info on such disks, so you won't be able to get the mapping info for those, only for disks managed by RHEV.
Changing this bug to an RFE to address that issue.
Yaniv, what exactly is your question here?
Why are you not using the management to add the disk?
Hi Yaniv Dary,
Does not Comment 2 give enough detail?
(In reply to Ram Pandiri from comment #18)
> Hi Yaniv Dary,
> Does not Comment 2 give enough detail?
I need to understand why you use a hook and not the management to add disks to the VMs. This is not the supported path for adding disks; therefore the disks are unmanaged and we don't show info on them.
Please reopen if you can provide the needed info.