Bug 1729408 - RFE: storage: Ceph RBD image-meta support to set features like QoS I/O throttling
Keywords:
Status: CLOSED DEFERRED
Alias: None
Product: Virtualization Tools
Classification: Community
Component: libvirt
Version: unspecified
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Libvirt Maintainers
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-07-12 08:29 UTC by Jules
Modified: 2024-12-17 12:30 UTC
CC List: 3 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2024-12-17 12:30:51 UTC
Embargoed:



Description Jules 2019-07-12 08:29:48 UTC
Description of problem:
Ceph RBD has an image-meta namespace that can define different features for each image volume:
image-meta get image-spec key
image-meta list image-spec
image-meta remove image-spec key
image-meta set image-spec key value
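
For illustration, a minimal CLI session using these subcommands (the pool/image name is the example used further below; mykey/myvalue are just placeholders):

# store, read back, list and remove an arbitrary key on an image
rbd image-meta set libvirt-pool/new-libvirt-image mykey myvalue
rbd image-meta get libvirt-pool/new-libvirt-image mykey
rbd image-meta list libvirt-pool/new-libvirt-image
rbd image-meta remove libvirt-pool/new-libvirt-image mykey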

Since Ceph Nautilus (14.2.x) there is a new feature that lets you set QoS limits for each volume on the RBD side. This could be used to provide QoS support regardless of the underlying hypervisor whenever Ceph is used as storage.

Version-Release number of selected component (if applicable):
Ceph Nautilus 14.2.x

How reproducible:
always

Steps to Reproduce:
# For a limit of 1000 IOPS you would set e.g.:
rbd image-meta set libvirt-pool/new-libvirt-image conf_rbd_qos_iops_limit 1000
rbd image-meta set libvirt-pool/new-libvirt-image conf_rbd_qos_read_iops_limit 1000
rbd image-meta set libvirt-pool/new-libvirt-image conf_rbd_qos_write_iops_limit 1000
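
To verify the stored limits (a sketch, using the same placeholder image name):

# read the settings back
rbd image-meta list libvirt-pool/new-libvirt-image
rbd image-meta get libvirt-pool/new-libvirt-image conf_rbd_qos_iops_limit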

Actual results:
Setting the limits manually via the rbd CLI works, but this has not been adopted in libvirt yet.

Expected results:
Have support for this on the libvirt side :-)


Additional info:
# There are even more tunables among the RBD QoS settings; reference page: http://docs.ceph.com/docs/nautilus/rbd/rbd-config-ref/
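
For example, besides IOPS there are also bandwidth-based limits documented there (a sketch, assuming the conf_ image-meta prefix and the placeholder image name from above):

# cap throughput at 100 MiB/s (value is in bytes per second)
rbd image-meta set libvirt-pool/new-libvirt-image conf_rbd_qos_bps_limit 104857600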

It would probably be even better to use a custom option XML namespace for this (the supported meta fields can be listed with: rbd image-meta list <VolumeName>). That would give us flexible options in case new features arrive.

So for each disk image entry of type=network, device=disk we need something like this:

<disk type='network' device='disk'>
  <source protocol='rbd' name='libvirt-pool/new-libvirt-image'>
    <host name='{monitor-host}' port='6789'/>

    <rbd:image_meta_set>
      <rbd:option name='conf_rbd_qos_iops_limit' value='1000'/>
      <rbd:option name='conf_rbd_qos_read_iops_limit' value='1000'/>
      <rbd:option name='conf_rbd_qos_write_iops_limit' value='1000'/>
    </rbd:image_meta_set>

  </source>
  <target dev='xvda' bus='xenbus'/>
</disk>

Comment 1 Jules 2019-07-13 16:57:14 UTC
I just saw that there is already similar logic that applies to the storage pool namespace:
https://github.com/libvirt/libvirt/commit/ab6ca812763b539e5380d8d6c4fa9da939125814
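
For reference, if I remember the commit right, the pool-level namespace it added looks roughly like this (details approximate):

<pool type='rbd' xmlns:rbd='http://libvirt.org/schemas/storagepool/rbd/1.0'>
  ...
  <rbd:config_opts>
    <rbd:option name='client_mount_timeout' value='45'/>
  </rbd:config_opts>
</pool>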

So maybe this can be extended to allow defining conf parameters per disk XML.
The only difference would be that it needs to loop through each defined disk source name (in my example: libvirt-pool/new-libvirt-image) and assign each image volume the limit options with their corresponding values.

Another note: from reading the Ceph RBD docs, "image-meta set" can be applied at runtime and takes effect immediately.
So an option to change these values on-the-fly via the libvirt API would be a killer feature.
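
For example (placeholder names again), raising the limit while the guest is running should be applied immediately on the RBD side:

# double the IOPS cap at runtime, no guest restart needed
rbd image-meta set libvirt-pool/new-libvirt-image conf_rbd_qos_iops_limit 2000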

Comment 2 Daniel Berrangé 2024-12-17 12:30:51 UTC
Thank you for reporting this issue to the libvirt project. Unfortunately we have been unable to resolve this issue due to insufficient maintainer capacity and it will now be closed. This is not a reflection on the possible validity of the issue, merely the lack of resources to investigate and address it, for which we apologise. If you none the less feel the issue is still important, you may choose to report it again at the new project issue tracker https://gitlab.com/libvirt/libvirt/-/issues The project also welcomes contribution from anyone who believes they can provide a solution.

