Bug 1132048 - [RFE] display information about unmanaged disks
Summary: [RFE] display information about unmanaged disks
Keywords:
Status: CLOSED INSUFFICIENT_DATA
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: RestAPI
Version: 3.4.0
Hardware: x86_64
OS: Linux
Priority: low
Severity: high
Target Milestone: ovirt-4.1.0-alpha
Assignee: Liron Aravot
QA Contact: Aharon Canan
URL:
Whiteboard:
Depends On:
Blocks: 1127644
 
Reported: 2014-08-20 14:36 UTC by Allon Mureinik
Modified: 2016-11-03 10:56 UTC
CC List: 18 users

Fixed In Version:
Clone Of: 1127644
Environment:
Last Closed: 2016-11-03 10:56:17 UTC
oVirt Team: Storage
Embargoed:
ylavi: ovirt-4.1?
sherold: Triaged+
ylavi: requirements_defined?
rule-engine: planning_ack?
amureini: devel_ack?
rule-engine: testing_ack?



Description Allon Mureinik 2014-08-20 14:36:24 UTC
+++ This bug was initially created as a clone of Bug #1127644 +++
RHEVM should provide a REST API that returns the XML configuration a running VM was started with.

According to Liron Aravot's comment on bug 1127644, RHEVM already has such an API. It should be documented both in oVirt's wiki (http://wiki.ovirt.org) and in RHEV's official documentation.
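
For illustration, a minimal sketch of how a client might call such an endpoint, assuming the 3.4-era /api base path; the /xml sub-resource is hypothetical (it is exactly what this RFE asks for), and the engine address and credentials are placeholders:

# Hypothetical sketch: fetch the libvirt XML a VM was started with.
# The /xml sub-resource does not exist yet; it is what this RFE requests.
import requests

ENGINE = "https://engine.example.com"  # placeholder engine address
VM_ID = "14f61174-4ad6-4928-affd-c3667b2b2d9d"

resp = requests.get(
    "%s/api/vms/%s/xml" % (ENGINE, VM_ID),    # hypothetical endpoint
    auth=("admin@internal", "password"),      # placeholder credentials
    verify="/etc/pki/ovirt-engine/ca.pem",    # engine CA certificate
)
resp.raise_for_status()
print(resp.text)  # would print a <domain> dump like the one in comment 3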

Comment 1 Liron Aravot 2014-08-26 07:35:25 UTC
What we currently have is an API to get the configuration of a VM snapshot; a snapshot can be taken of the configuration only (without the disks), which should satisfy this need.
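
For reference, a rough sketch of reading a snapshot's configuration through the REST API, assuming the 3.4-era /api base path and that the All-Content header is honored for snapshots in this version (verify against the API docs for your release); engine address and credentials are placeholders:

# Sketch: list a VM's snapshots, then re-fetch each one asking for its
# full configuration via the All-Content header (assumed supported here).
import requests
import xml.etree.ElementTree as ET

ENGINE = "https://engine.example.com"  # placeholder
VM_ID = "14f61174-4ad6-4928-affd-c3667b2b2d9d"
AUTH = ("admin@internal", "password")  # placeholder credentials
CA = "/etc/pki/ovirt-engine/ca.pem"

resp = requests.get("%s/api/vms/%s/snapshots" % (ENGINE, VM_ID),
                    auth=AUTH, verify=CA)
resp.raise_for_status()

for snap in ET.fromstring(resp.content).findall("snapshot"):
    detail = requests.get(
        "%s/api/vms/%s/snapshots/%s" % (ENGINE, VM_ID, snap.get("id")),
        auth=AUTH, verify=CA,
        headers={"All-Content": "true"},  # ask for the full configuration
    )
    detail.raise_for_status()
    print(detail.text)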

Comment 2 Linux engineering teams - Veritas 2014-08-27 08:16:01 UTC
The main purpose of this request is to know the mapping between a disk's source on the host and the disk name appearing in the guest. That way we would know which disk on the host is mapped to which SCSI device in the guest. This information is available in the vdsm log, in the XML printed during VM start, but it would be useful to provide a REST API for it (similar to virsh dumpxml). This is especially useful for type 'file' disks, which are emulated by the qemu SCSI layer. It is okay if this info is not 'live' but just reflects the XML used when the VM was started.

Comment 3 Linux engineering teams - Veritas 2014-08-27 08:18:50 UTC
For example, /api/vm/<id>/xml would return:

<domain type="kvm">
        <name>UB</name>
        <uuid>14f61174-4ad6-4928-affd-c3667b2b2d9d</uuid>
        <memory>1048576</memory>
        <currentMemory>1048576</currentMemory>
        <vcpu current="1">160</vcpu>
        <memtune>
                <min_guarantee>1048576</min_guarantee>
        </memtune>
        <devices>
                <channel type="unix">
                        <target name="com.redhat.rhevm.vdsm" type="virtio"/>
                        <source mode="bind" path="/var/lib/libvirt/qemu/channels/14f61174-4ad6-4928-affd-c3667b2b2d9d.com.redhat.rhevm.vdsm"/>
                </channel>
                <channel type="unix">
                        <target name="org.qemu.guest_agent.0" type="virtio"/>
                        <source mode="bind" path="/var/lib/libvirt/qemu/channels/14f61174-4ad6-4928-affd-c3667b2b2d9d.org.qemu.guest_agent.0"/>
                </channel>
                <input bus="ps2" type="mouse"/>
                <channel type="spicevmc">
                        <target name="com.redhat.spice.0" type="virtio"/>
                </channel>
...
                <disk device="disk" snapshot="no" type="file">
                        <address bus="0x00" domain="0x0000" function="0x0" slot="0x05" type="pci"/>
                        <source file="/rhev/data-center/e17ab737-779c-4902-becb-ef7c13abfab2/5644053a-4f70-4065-92d8-7ffee59ac05d/images/71e2dffe-9728-4f51-9b4a-be6d4d91e322/e7dbcdce-1e78-4590-9270-4f30466f9ae0"/>
                        <target bus="virtio" dev="vda"/>
                        <serial>71e2dffe-9728-4f51-9b4a-be6d4d91e322</serial>
                        <boot order="1"/>
                        <driver cache="none" error_policy="stop" io="threads" name="qemu" type="raw"/>
                </disk>
                <disk device="disk" snapshot="no" type="block">
                        <address bus="0x00" domain="0x0000" function="0x0" slot="0x06" type="pci"/>
                        <source dev="/rhev/data-center/e17ab737-779c-4902-becb-ef7c13abfab2/09fbd7e9-2047-496c-a175-be9fb8bc8b44/images/a1f9fac1-e08f-4e8f-9b15-99811db63d2a/5364fb24-6750-4c22-b255-ecd5dac4ed71"/>
                        <target bus="virtio" dev="vdb"/>
                        <serial>a1f9fac1-e08f-4e8f-9b15-99811db63d2a</serial>
                        <driver cache="none" error_policy="stop" io="native" name="qemu" type="raw"/>
                </disk>
...
        <cpu match="exact">
                <model>Conroe</model>
                <topology cores="1" sockets="160" threads="1"/>
        </cpu>
</domain>
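
A short sketch of extracting the host-source-to-guest-device mapping from such a dump with Python's standard library, assuming the dump is saved complete and well-formed as domain.xml:

# Sketch: build the host source -> guest target mapping from a libvirt
# domain XML dump like the one above (assumed saved as domain.xml).
import xml.etree.ElementTree as ET

tree = ET.parse("domain.xml")

for disk in tree.findall("./devices/disk"):
    source = disk.find("source")
    target = disk.find("target")
    serial = disk.findtext("serial", default="-")
    # type="file" disks carry a 'file' attribute, type="block" a 'dev' one.
    path = source.get("file") or source.get("dev")
    print("%s -> /dev/%s (serial %s)" % (path, target.get("dev"), serial))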

Comment 4 Allon Mureinik 2014-08-27 09:50:08 UTC
Michal, this seems to be more up your team's alley than mine.
Can you take a look, please?

Comment 5 Michal Skrivanek 2014-08-27 09:59:56 UTC
How does this relate to the fact that we are adding disk mapping based on serial numbers?
The XML doesn't guarantee the mapping; AFAIK it's just a hint, and the guest OS doesn't have to honor it.

(I'm not disputing the VM XML API call, just the fact that the previous comment describes this as the purpose of the request.)
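
For context, serial-based mapping is resolved from inside the guest through udev's /dev/disk/by-id links rather than through the device names in the XML; a small sketch, assuming a Linux guest with default udev rules (the link prefixes differ between virtio-blk and virtio-scsi):

# Sketch: inside the guest, resolve disk serials to device nodes via the
# /dev/disk/by-id udev links (assumes a Linux guest with default rules).
# virtio-blk disks typically show up as virtio-<serial> (serial truncated
# to 20 chars), virtio-scsi disks as scsi-0QEMU_QEMU_HARDDISK_<serial>.
import os

BY_ID = "/dev/disk/by-id"

for link in sorted(os.listdir(BY_ID)):
    if link.startswith("virtio-") or "QEMU_HARDDISK" in link:
        dev = os.path.realpath(os.path.join(BY_ID, link))
        print("%s -> %s" % (link, dev))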

Comment 6 Allon Mureinik 2014-08-27 10:03:10 UTC
Veritas guys - can you answer Michal's question, please?

Comment 7 Liron Aravot 2014-08-27 10:06:43 UTC
Veritas guys, we are aware of that issue and are handling it in https://bugzilla.redhat.com/show_bug.cgi?id=1063597.
It seems to me this one can be closed as a duplicate if that was the purpose here.

thanks,
Liron

Comment 8 Linux engineering teams - Veritas 2014-09-09 03:40:29 UTC
Thank you for your response. I believe the GUID is applicable to virtio-scsi as well. We will try the encoding approach and get back to you in case of any queries.

This bug can be closed as a duplicate if the mentioned bug covers virtio-scsi as well.

Comment 9 Linux engineering teams - Veritas 2014-09-09 03:47:01 UTC
I suppose bug 1063597 handles the mapping of SCSI/virtio disks from the host to the guest, and that this information will be available via the REST API on the host. Is there any way to fetch, via the REST API, the serial number of disks added via custom properties (hooks)?

Comment 10 Liron Aravot 2014-09-17 06:11:40 UTC
RHEV will return disk mapping information for disks managed by RHEV (added through the webadmin/REST/etc.).
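
For managed disks, a minimal sketch of reading that information from the disks sub-collection, assuming the 3.4-era /api base path (for virtio disks the guest-visible serial is commonly the disk id, but verify this for your version); engine address and credentials are placeholders:

# Sketch: list a VM's managed disks and their ids via REST; matching the
# id against the guest-side serial gives the mapping for managed disks.
import requests
import xml.etree.ElementTree as ET

ENGINE = "https://engine.example.com"  # placeholder
VM_ID = "14f61174-4ad6-4928-affd-c3667b2b2d9d"

resp = requests.get("%s/api/vms/%s/disks" % (ENGINE, VM_ID),
                    auth=("admin@internal", "password"),  # placeholders
                    verify="/etc/pki/ovirt-engine/ca.pem")
resp.raise_for_status()

for disk in ET.fromstring(resp.content).findall("disk"):
    print(disk.get("id"), disk.findtext("name"))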

If I understand correctly, you are referring to disks that are not managed by RHEV and are attached to a guest manually using hooks. RHEV currently doesn't display info on such disks, so you won't be able to get mapping info for them - only for disks managed by RHEV.

Is that what you meant?

thanks,
Liron

Comment 11 Linux engineering teams - Veritas 2014-09-17 06:25:05 UTC
Hi Liron,

Thank you for your response. Yes, you are correct. We are unable to get the serial numbers of disks added via hooks (and not managed by RHEV).

Best regards,
Avadhoot

Comment 12 Liron Aravot 2014-11-05 14:49:35 UTC
Thanks Avadhoot,
Currently that's not supported: RHEV doesn't display info on such disks, so you can't get mapping info for them - only for disks managed by RHEV.

Changing this bug to an RFE to address that issue.

thanks,
Liron

Comment 14 Allon Mureinik 2015-03-30 09:09:38 UTC
Yaniv, what exactly is your question here?

Comment 17 Yaniv Lavi 2016-01-28 09:34:59 UTC
Why are you not using the management system to add the disks?

Comment 18 Ram Pandiri 2016-02-22 18:17:22 UTC
Hi Yaniv Dary,

Does not Comment 2 give enough detail?

Ram

Comment 20 Yaniv Lavi 2016-03-06 11:15:21 UTC
(In reply to Ram Pandiri from comment #18)
> Hi Yaniv Dary,
> 
> Does not Comment 2 give enough detail?
> 
> Ram

I need to understand why you use a hook rather than the management system to add disks to the VMs. That is not the supported path for adding disks; therefore the disks are unmanaged, and we don't show info on them.

Comment 24 Yaniv Lavi 2016-11-03 10:56:17 UTC
Please reopen if you can provide the needed info.

