2. Who is the customer behind the request?
   Account: name (acct #): Ärztekammer Schleswig-Holstein / 1494095
   TAM customer: no
   SRM customer: no
   Strategic: no
3. What is the nature and description of the request?
   Ability to control the I/O scheduling priority of creating a VM from a template.
4. Why does the customer need this? (List the business requirements here)
   The customer reports that creating a VM from a template takes up a large amount of I/O resources on the hypervisor, which causes significant performance degradation in the running VMs.
5. How would the customer like to achieve this? (List the functional requirements here)
   Be able to classify the task as a lower priority than currently running VM threads, akin to renice'ing a process.
6. For each functional requirement listed, specify how Red Hat and the customer can test to confirm the requirement is successfully implemented.
   Create a VM from a template and check whether the performance of running VMs is still affected.
7. Is there already an existing RFE upstream or in Red Hat Bugzilla?
   No
8. Does the customer have any specific timeline dependencies and which release would they like to target (i.e. RHEL5, RHEL6)?
   No
9. Is the sales team involved in this request and do they have any additional input?
   No
10. List any affected packages or components.
   RHEV-H
11. Would the customer be able to assist in testing this functionality if implemented?
   Yes
I'm guessing the customer is cloning the VM from the template, since creating a thin image from a template should not generate much I/O. Clone operations (a full copy of the data) should definitely be capped and run at reduced priority. This cannot be done with ionice, as it simply doesn't work here (we already do that). I can think of two ways of doing this:
1. Limit through qemu. This would require running a VM from the template (read-only or temporary-snapshot mode) and cloning from it.
2. Limit through cgroups. I don't recall whether it has been fixed so that you can also limit on deadline, and not just on CFQ.
One additional thing that needs to be done is to try to offload the copy directly to the storage when relevant.
If we go with cgroups, hopefully this can be expressed as "lower priority" rather than as a hard cap.
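To illustrate why ionice "just doesn't really work" here: I/O scheduling classes set with ionice are only honoured by the CFQ scheduler; deadline (and noop) ignore them entirely. A minimal sketch, assuming a hypothetical device `sda` backing the storage (adjust as needed) — the renice-style invocation is what the customer is asking for, but it is a no-op under deadline:

```shell
#!/bin/sh
# Sketch only: show the active scheduler, then run a command at the
# lowest best-effort I/O priority.  "sda" is a placeholder device.
SCHED_FILE=/sys/block/sda/queue/scheduler
if [ -r "$SCHED_FILE" ]; then
    # The bracketed entry is the active scheduler, e.g. "noop [deadline] cfq".
    echo "active scheduler: $(cat "$SCHED_FILE")"
fi

# Best-effort class (-c2), lowest priority (-n7): honoured only by CFQ.
# Fall back to running the command directly if ionice is unavailable.
MSG=$( (command -v ionice >/dev/null 2>&1 \
        && ionice -c2 -n7 echo "low-prio clone") \
      || echo "low-prio clone" )
echo "$MSG"
```

Note that the command still runs either way; under deadline the priority hint is simply ignored, which matches the observation above that ionice is already being used without effect.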
Hi Vivek, Do we have a way today to prioritize I/O when using deadline?
(In reply to Ayal Baron from comment #3)
> Hi Vivek,
>
> Do we have a way today to prioritize I/O when using deadline?

Ayal,

Deadline does not support I/O priority; only CFQ does. I/O capping is supported at the block layer, though, if that is useful in this context.
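The block-layer capping Vivek mentions is the cgroup-v1 blkio throttling interface, which works regardless of the scheduler (so it also applies under deadline). A hedged sketch, assuming cgroup v1 with the blkio controller mounted, root privileges, and a placeholder device number `8:0` — the paths and numbers would need to match the actual host:

```shell
#!/bin/sh
# Sketch only: cap read bandwidth for tasks in a cgroup to 10 MB/s.
# Assumptions: cgroup v1 blkio mounted at the usual path, run as root,
# device major:minor 8:0 is the device backing the storage domain.
CG_ROOT=/sys/fs/cgroup/blkio
CG="$CG_ROOT/vm-clone"

if [ -d "$CG_ROOT" ] && [ -w "$CG_ROOT" ]; then
    mkdir -p "$CG" 2>/dev/null \
        && echo "8:0 10485760" 2>/dev/null > "$CG/blkio.throttle.read_bps_device" \
        && echo $$ 2>/dev/null > "$CG/tasks" \
        && STATUS=applied \
        || STATUS=apply-failed
else
    # Not root, or blkio controller not available (e.g. cgroup v2 host).
    STATUS=skipped
fi
echo "throttle setup: $STATUS"
```

This gives a hard cap, not the relative "lower priority" hoped for in comment 2; proportional weighting (`blkio.weight`) only takes effect under CFQ.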
Ayal, Vivek,

Just following up on this for the customer - is this definitely a 3.4.0 target? I take it this is not exactly a trivial feature?
(In reply to Jake Hunsaker from comment #5)
> Ayal, Vivek,
>
> Just following up on this for the customer - is this definitely a 3.4.0
> target? I take it this is not exactly a trivial feature?

This is not a 3.4 target as far as I know.
Doron?
(In reply to Ayal Baron from comment #6)
> (In reply to Jake Hunsaker from comment #5)
> > Ayal, Vivek,
> >
> > Just following up on this for the customer - is this definitely a 3.4.0
> > target? I take it this is not exactly a trivial feature?
>
> This is not a 3.4 target as far as I know.
> Doron?

You are correct. I/O QoS is on our roadmap, but not in 3.4.
Here are a few notes that I collected over time:

IIRC capping is doable (weighting is hard). It is possible to use cgroups to limit the I/O, and eventually use cgexec to run the command (or something fancier):
https://www.kernel.org/doc/Documentation/cgroups/blkio-controller.txt

ATM we cannot use cgroups on file systems (yet) because of bug 1124026.

It is possible to throttle in process (qemu-img) or in the kernel. Here's a comparison:

Throttle in qemu-img:
- pros: works on all backends (easier to develop/modify/upgrade), no dependency on the kernel
- cons: needs adjustments

Throttle in the kernel:
- pros: no need for adjustments
- cons: requires an implementation in the kernel per backend[1] (probably slower development and harder to modify/upgrade)

[1] block device, file, network, etc.
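The cgexec route from the notes above can be sketched as follows. This is an illustration, not the implemented mechanism: the group name `template-clone`, the device number `8:16`, the 20 MB/s cap, and the image paths are all placeholders, and it assumes libcgroup (`cgcreate`/`cgexec`), cgroup v1, and root. Per bug 1124026 the throttle has to be set on the block device, not the file system:

```shell
#!/bin/sh
# Sketch only: run a qemu-img copy inside a write-throttled blkio cgroup.
# All names, device numbers, and paths below are hypothetical.
if command -v cgexec >/dev/null 2>&1 && [ -w /sys/fs/cgroup/blkio ]; then
    cgcreate -g blkio:/template-clone 2>/dev/null
    # Cap writes to device 8:16 at 20 MB/s for tasks in this group.
    echo "8:16 20971520" 2>/dev/null \
        > /sys/fs/cgroup/blkio/template-clone/blkio.throttle.write_bps_device
    # Run the clone under the throttled group (placeholder paths).
    cgexec -g blkio:/template-clone \
        qemu-img convert -O raw /rhev/templates/base.img /rhev/vms/new.img \
        && STATUS=ran || STATUS=run-failed
else
    # libcgroup missing, not root, or no v1 blkio controller.
    STATUS=skipped
fi
echo "throttled copy: $STATUS"
```

This is the "throttle in kernel" column of the comparison; the "throttle in qemu-img" option would instead need rate-limiting support added to qemu-img itself, which is the "needs adjustments" caveat.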
Closing old issues. If still relevant please provide the use case and re-open.
BZ to Jira resync