Description of problem: We now have more features that are configured and viewed in VM devices, e.g. the memory device, which can be hot-unplugged only from the UI. We need to add the VM devices under the VM resource in order to support configuration via REST.
Note: we need unplug, not necessarily the VM devices' "content".
Can you please elaborate on what is needed? Some examples of the expected behaviour would be very helpful for understanding what you need. Meanwhile, as it is not clear what is needed, I am reducing the severity. Feel free to increase it again once the need is clearly explained.
Hi Juan,

We would like to have the same functionality as we have in the UI:
1. Get the info/content of all device types
2. In the case of the memory device, we will need a "hot unplug" action
Israel, I think we already have the capability to unplug memory; it is just a matter of updating the VM with a value for "memory" lower than the actual value. Isn't that enough for your use case? What else do you need?

Michal, do we need "unplug" also for other kinds of devices? Which ones?
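[Editor's note: for illustration, a minimal sketch of such an update with the oVirt Python SDK (ovirtsdk4). The engine URL, credentials, and VM name are placeholders, and whether a lower value actually triggers a hot-unplug is exactly what the rest of this thread discusses.]

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Connect to the engine (URL and credentials are placeholders).
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)

# Look up the VM by name and update its "memory" to a lower value.
vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=myvm')[0]
vm_service = vms_service.vm_service(vm.id)
vm_service.update(types.Vm(memory=2 * 1024**3))  # lower to 2 GiB

connection.close()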
No, we don't support that at the moment; it's done only from the VM devices subtab.
Agreed with Juan. It can be solved by updating the size. It's not exactly the same, but close enough in behavior.

We may want to add a resource with the exact VM configuration, e.g. show all devices, or the libvirt XML.
(In reply to Michal Skrivanek from comment #6)
> Agreed with Juan. It can be solved by updating the size.
> It's not exactly the same, but close enough in behavior.
>
> We may want to add a resource with the exact VM configuration, e.g. show
> all devices, or the libvirt XML.

So it will be like we plan to do it in the UI: the hotplug and hot-unplug will be done with a VM update, and we will verify it with the libvirt XML?
OK. So just for clarification:

1. If we reduce the memory size in the REST API:
   a) Does that work without throwing an error?
   b) Is the unplug operation triggered?

So e.g. I have a VM with 256 MB vDIMMs. Now I make an API call reducing the memory size by 1 byte; the result would be that the VM has 256 MB less memory, as one vDIMM gets unplugged.

In the UI, on the other side, you do:
1. Move to the vDevices tab
2. Select a DIMM => Action: Remove

If that is the case, this looks pretty inconsistent to me. This is another reason for changing the UI (and maybe the REST API) towards configuring the VM with vDIMMs from the start, similar to vCPU (meaning the customer enters an amount of memory, and that is translated to a "reasonable" vDIMM size and the needed number of vDIMMs). In addition, that approach would be similar/equal to the CPU hot-(un-)plug, making the UI more consistent in the DR behaviours.

In case the VM is running, only the number of vDIMMs can be changed online (similar to the number of Sockets in the vCPU section).

Thoughts?
(In reply to Martin Tessun from comment #8)
> OK. So just for clarification:
>
> 1. If we reduce the memory size in the REST API:
>    a) Does that work without throwing an error?
>    b) Is the unplug operation triggered?

It will just store it to the next_run snapshot. The hot unplug is not working from the REST at all now, so we need to design it.

> So e.g. I have a VM with 256 MB vDIMMs. Now I make an API call reducing
> the memory size by 1 byte; the result would be that the VM has 256 MB less
> memory, as one vDIMM gets unplugged.

No, not at all. This has not even been proposed AFAIK.

> In the UI, on the other side, you do:
> 1. Move to the vDevices tab
> 2. Select a DIMM => Action: Remove
>
> If that is the case, this looks pretty inconsistent to me.

It is not.

> This is another reason for changing the UI (and maybe the REST API) towards
> configuring the VM with vDIMMs from the start, similar to vCPU (meaning the
> customer enters an amount of memory, and that is translated to a
> "reasonable" vDIMM size and the needed number of vDIMMs).
> In addition, that approach would be similar/equal to the CPU hot-(un-)plug,
> making the UI more consistent in the DR behaviours.

That is not a small change. What you are proposing is, instead of having memory as one field, to have an option to manipulate DIMMs. And unlike the CPUs, the DIMMs can all be different (and during hotplug they indeed are, due to the way Linux allocates online movable blocks). Also, the only DIMMs which can be unplugged are the ones which have been hotplugged before, so it is not true that if the user configures a VM with 100G of memory and we chop it up into, say, 16G chunks, he can unplug them in the running VM. Also, the way we construct the VM would be affected by it (we would not be able to use the "memory" field of libvirt anymore).

I would like to take a step back here and start from the beginning. We have two issues:
1: why is the unplug in the UI in the devices subtab
2: why is it so un-intuitive

For 1: It is not an easy task for the user to understand what he can unplug. He can unplug only DIMMs he has previously hot-plugged (e.g. if someone else has hotplugged a strange value like 211M, he would have to guess that). To make it even worse, during the first hotplug we cut the memory into 2 DIMMs, the minimal one (128M for x86) and then the rest (the reason is the online movable thing mentioned above). So we needed to present the user with a list of DIMMs he can unplug. Since a DIMM is a device, the list of devices was a natural place for this.

For 2: Because the hotplug and hot unplug are in two different places.

I would propose a middle ground which could (kinda) solve also the REST part and is doable in the near future:

- in the frontend: In the edit VM dialog on a running VM there would be a question mark which would tell the user: "if you want to unplug memory, you can lower it by 10G or 15G, nothing else". This would help the user to understand what he can do. Then the unplug would work the same as hotplug - just edit the memory field to the correct value, and if on submit you decide to apply the changes immediately, the hot-unplug will happen. And we could issue an audit log message if the unplug value was incorrect.

- in the API: just normally edit the VM's memory size, same as for hotplug. We will expect the user of the API to know what he is putting there. If not, he can check the audit log messages for hints.
So, the hotplug and unplug would work the same both in the API and the UI, and it would be easy to implement (meaning doable in the 4.2 timeframe). Would this work?

> In case the VM is running, only the number of vDIMMs can be changed online
> (similar to the number of Sockets in the vCPU section).
>
> Thoughts?
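[Editor's note: for illustration, a minimal sketch of checking the audit log via the Python SDK (ovirtsdk4), as the proposal above suggests. The filter text is a guess, not the engine's actual message wording.]

import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)

# Fetch the most recent audit log entries and scan them for
# memory-related hints (e.g. a rejected hot-unplug value).
events_service = connection.system_service().events_service()
for event in events_service.list(max=20):
    if 'memory' in (event.description or '').lower():
        print(event.time, event.description)

connection.close()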
The idea would be to have the unplug implemented, always rounding the requested reduction down to a whole number of DIMMs (to avoid memory pressure in the VMs). So in case we have 64 GB DIMMs, the following would happen on unplug operations (assuming enough DIMMs are available for hot-unplug); see the sketch after this list:

64 GB  -> 1 DIMM unplugged
96 GB  -> 1 DIMM unplugged
127 GB -> 1 DIMM unplugged
128 GB -> 2 DIMMs unplugged

We still need to document this behaviour, but this seems the best and easiest way to me.
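[Editor's note: a minimal sketch of the proposed rounding, assuming a fixed DIMM size; the 64 GB value mirrors the example above. This is an illustration, not the engine's actual code.]

GiB = 1024**3
DIMM_SIZE = 64 * GiB  # hypothetical DIMM size from the example above

def dimms_to_unplug(requested_reduction):
    """Round the requested memory reduction down to whole DIMMs,
    so the VM never loses more memory than was asked for."""
    return requested_reduction // DIMM_SIZE

# 64 GB -> 1 DIMM, 96 GB -> 1, 127 GB -> 1, 128 GB -> 2
for gb in (64, 96, 127, 128):
    print(gb, 'GB ->', dimms_to_unplug(gb * GiB), 'DIMM(s) unplugged')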
It would be nice to have that rounding also implemented for plug, not just unplug. Currently it isn't, and it forces clients (like ManageIQ) to do their own very fragile calculations.
(In reply to Juan Hernández from comment #11) > Would be nice to have that rounding also implemented for plug, not just > unplug. Currently it isn't and it forces clients (like ManageIQ) to do their > own very fragile calculations. Tomas, I agree with Juan here. Any thoughts on this one?
(In reply to Martin Tessun from comment #12)
> (In reply to Juan Hernández from comment #11)
> > It would be nice to have that rounding also implemented for plug, not
> > just unplug. Currently it isn't, and it forces clients (like ManageIQ)
> > to do their own very fragile calculations.
>
> Tomas, I agree with Juan here. Any thoughts on this one?

I agree it would be nice, but it is a separate effort. The more pressing thing is to introduce some way of unplugging. The hotplug is already there, just not too nice.
Verify with:

Steps:
1. Create a VM with 1 GB
2. Run the VM
3. Hotplug 1 GB twice (the VM will have 3 GB)
4. Via REST, hot-unplug 1 GB

Request:
<vm>
  <memory_policy>
    <ballooning>true</ballooning>
    <guaranteed>2147483648</guaranteed>
    <max>4294967296</max>
  </memory_policy>
  <memory>2147483648</memory>
</vm>

Results: VM memory is updated to 2 GB
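[Editor's note: for reference, the same verification update expressed with the Python SDK (ovirtsdk4); the engine URL, credentials, and VM name are placeholders. The memory_policy values mirror the XML request above.]

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

GiB = 1024**3

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)
vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=myvm')[0]
vm_service = vms_service.vm_service(vm.id)

# Hot-unplug 1 GB by lowering memory from 3 GB to 2 GB on the
# running VM; memory_policy mirrors the XML request above.
vm_service.update(
    types.Vm(
        memory=2 * GiB,
        memory_policy=types.MemoryPolicy(
            ballooning=True,
            guaranteed=2 * GiB,
            max=4 * GiB,
        ),
    ),
)

connection.close()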
This bugzilla is included in the oVirt 4.2.0 release, published on Dec 20th 2017. Since the problem described in this bug report should be resolved in that release, it has been closed with a resolution of CURRENT RELEASE. If the solution does not work for you, please open a new bug report.