Description of problem:
To stabilize the Ceph cluster, we are trying to implement I/O throttling for the RBD volumes, including the boot device imported by Nova and the Cinder volumes attached to the VM. The throttling can be set via the virsh command, e.g.:

[root@svl12-csl-b-nova2-002 ~]# virsh blkdeviotune instance-000004c0 vda --total_bytes_sec 104857600 --total_iops_sec 250
[root@svl12-csl-b-nova2-002 ~]# virsh blkdeviotune instance-000004c0 vda
total_bytes_sec: 104857600
read_bytes_sec : 0
write_bytes_sec: 0
total_iops_sec : 250
read_iops_sec  : 0
write_iops_sec : 0

But the settings do not persist across a reboot/restart. Wondering if similar logic is available, or can be implemented, at the configuration layer so that it survives a reboot/restart:

if (device::disk::source protocol == 'rbd') then
    set (for example):
        <iotune>
          <total_bytes_sec>104857600</total_bytes_sec>
          <total_iops_sec>250</total_iops_sec>
        </iotune>
done

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
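For reference, the persistent form of the tuning requested above is an <iotune> element inside the disk definition of the libvirt domain XML. A minimal sketch follows; the Ceph pool/image name and monitor host are hypothetical placeholders, and note that Nova regenerates this XML, so hand edits via `virsh edit` are not a durable fix either:

```xml
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <!-- source pool/image and monitor host are placeholders, not from the report -->
  <source protocol='rbd' name='volumes/volume-XXXX'>
    <host name='ceph-mon.example.com' port='6789'/>
  </source>
  <target dev='vda' bus='virtio'/>
  <!-- same limits as set with virsh blkdeviotune above -->
  <iotune>
    <total_bytes_sec>104857600</total_bytes_sec>
    <total_iops_sec>250</total_iops_sec>
  </iotune>
</disk>
```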
This bz seems to relate to OpenStack, not RHEV - moving. Sergey, perhaps one of your guys can take a look? Thanks!
Sorry for the late response, I hope that article answers your question http://ceph.com/planet/openstack-ceph-rbd-and-qos/. Please, let me know if it doesn't.
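In case the link rots: the persistent approach that article describes is to express the limits as Cinder QoS specs associated with a volume type (consumer="front-end" means libvirt/QEMU enforces them on attach), and as flavor quota extra specs for the Nova-managed boot disk. A hedged sketch, with illustrative names and the limits from the report:

```shell
# Cinder-attached volumes: QoS spec tied to a volume type (front-end = enforced by QEMU)
cinder qos-create limit-iops consumer="front-end" \
    total_bytes_sec=104857600 total_iops_sec=250
cinder type-create limited
cinder qos-associate <qos-spec-id> <volume-type-id>

# Nova boot/ephemeral disk: quota extra specs on the flavor
nova flavor-key m1.limited set quota:disk_total_bytes_sec=104857600
nova flavor-key m1.limited set quota:disk_total_iops_sec=250
```

New instances booted from the flavor, and volumes of the associated type attached afterwards, get the limits written into the generated domain XML, so they survive reboot/restart.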
I think this can be closed now?