Bug 1272793 - QoS, Gluster: IOPS disk profile limit not working against glusterfs storage domain
Status: CLOSED DUPLICATE of bug 1297734
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: qemu-kvm-rhev
Version: 7.2
Hardware: x86_64 Linux
Priority: low  Severity: low
Target Milestone: rc
Target Release: 7.2
Assigned To: Jeff Cody
QA Contact: Virtualization Bugs
Depends On:
Blocks: Gluster-HC-2
Reported: 2015-10-18 19:34 EDT by Paul Cuzner
Modified: 2016-03-17 12:52 EDT (History)
CC: 20 users

Doc Type: Bug Fix
Last Closed: 2016-03-17 12:52:45 EDT
Type: Bug

Attachments
fio parameters (496 bytes, text/plain)
2015-10-18 19:34 EDT, Paul Cuzner
qemu process output showing the iops limiting parameters (3.24 KB, text/plain)
2016-03-16 22:31 EDT, Paul Cuzner

Description Paul Cuzner 2015-10-18 19:34:57 EDT
Created attachment 1084195 [details]
fio parameters

Description of problem:
With a native glusterfs storage domain, I created a QoS profile in the DC (limiting to 400 IOPS) and added it to the glusterfs storage domain. After enabling it in the storage domain, I updated a vdisk I'm using for testing with fio, assigning the vdisk the iops-400 profile.

After running fio, I'm seeing 1800 IOPS reported by fio, not the 400 defined in the disk profile.

I've assigned this to vdsm, but it may be another component that needs to look at this issue.


Version-Release number of selected component (if applicable):
RHEVM 3.5.4
RHEL 7.1 hypervisors

How reproducible:
Every time

Steps to Reproduce:
1. Create a storage QoS in the DC - limiting to 400 IOPS
2. Update a glusterfs storage domain, adding the limited profile
3. Update a disk in a VM that runs from the glusterfs storage domain with the limited IOPS profile
4. Run fio to drive I/O to the device
5. Observe the fio results

Actual results:
fio performance is not limited by the disk profile

Expected results:
fio job should be constrained by the disk limit imposed by the QoS setting


Additional info:
fio parameters used during the run are attached
Comment 1 Doron Fediuck 2015-10-19 03:34:29 EDT
oVirt is using the iotune API in libvirt for QoS:
https://libvirt.org/formatdomain.html#elementsDisks

Does it support Gluster?
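For reference, the iotune limits described at that link are expressed in the libvirt domain XML roughly like this (a sketch only; the gluster host/volume names are illustrative, and the 400 IOPS value mirrors the profile in this report):

```xml
<disk type='network' device='disk'>
  <driver name='qemu' type='qcow2' cache='none'/>
  <source protocol='gluster' name='gv0/test.qcow2'>
    <host name='192.168.15.99'/>
  </source>
  <target dev='vda' bus='virtio'/>
  <iotune>
    <!-- cap corresponding to an iops-400 disk profile -->
    <total_iops_sec>400</total_iops_sec>
  </iotune>
</disk>
```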
Comment 2 Jiri Denemark 2015-10-19 10:33:55 EDT
It's implemented using "block_set_io_throttle" QMP command and whether this is supported for Gluster disks is more a question for QEMU developers.
Comment 3 Ademar Reis 2015-10-22 08:49:35 EDT
(In reply to Jiri Denemark from comment #2)
> It's implemented using "block_set_io_throttle" QMP command and whether this
> is supported for Gluster disks is more a question for QEMU developers.

Jeff, can you please check?
Comment 4 Jeff Cody 2015-10-22 09:04:06 EDT
It should be supported - the throttle groups are independent of the specific block driver, and attached to the BlockDriverState.
Comment 5 Jeff Cody 2015-10-22 22:07:30 EDT
As a follow-up: I did a quick check with my local build of qemu for rhev 7.1, with a gluster server. Varying the bps and iops totals via block_set_io_throttle seemed to behave as expected when running just qemu and feeding it the QMP commands via stdio. I tested using dd with oflag=dsync, however, rather than fio.

I'll do another more in-depth test with fio, and report the results.
Comment 6 Paul Cuzner 2015-10-22 22:49:23 EDT
(In reply to Jeff Cody from comment #5)
> As a follow-up:  I did a quick check with my local build of qemu for rhev
> 7.1, with a gluster server.  Varying the bps, and iops totals via
> block_set_io_throttle seemed to behave as expected, when running just qemu
> and feeding it the qmp commands via stdio.  I tested using dd with
> oflag=dsync, however, rather than fio.
> 
> I'll do another more in-depth test with fio, and report the results.

Could this be something to do with the nature of the I/O pattern? dd is purely sequential, while I'm using random read/write.
Comment 8 Doron Fediuck 2015-11-03 07:08:08 EST
Moving to libvirt for further investigation.
Comment 10 Peter Krempa 2015-11-03 07:24:54 EST
(In reply to Doron Fediuck from comment #8)
> Moving to libvirt for further investigation.

Moving to qemu, see comment 2.
Comment 11 Karen Noel 2015-11-04 06:29:49 EST
(In reply to Jiri Denemark from comment #2)
> It's implemented using "block_set_io_throttle" QMP command and whether this
> is supported for Gluster disks is more a question for QEMU developers.

Paul, Can you somehow show that the QMP command is making it to QEMU? 

Jeff, Have you run the test with fio? Did you test with 7.1 or 7.2? The original report was 7.1. Thanks!
Comment 12 Jeff Cody 2015-11-04 09:37:53 EST
(In reply to Karen Noel from comment #11)
> (In reply to Jiri Denemark from comment #2)
> > It's implemented using "block_set_io_throttle" QMP command and whether this
> > is supported for Gluster disks is more a question for QEMU developers.
> 
> Paul, Can you somehow show that the QMP command is making it to QEMU? 
> 
> Jeff, Have you run the test with fio? Did you test with 7.1 or 7.2? The
> original report was 7.1. Thanks!

I'm running the test with fio now. I needed to add additional space to my gluster test server (I only had ~200MB free; I want to run fio with ~4GB).

I tested against 7.1.  I'll update the BZ with my fio results shortly.
Comment 13 Jeff Cody 2015-11-06 01:21:54 EST
Here is the version of qemu I used for my testing:

qemu version:
qemu-kvm-rhev-2.1.2-23.el7_1_1.10


qemu commandline:
qemu-system-x86_64 -enable-kvm -drive file=F18Test.qcow2,if=virtio,boot=on,node-name="test",aio=native,cache=none -chardev socket,path=/tmp/qga.sock,server,nowait,id=qga0 -device virtio-serial -device virtserialport,chardev=qga0,name=org.qemu.guest_agent.0 -m 1024 -boot c -qmp stdio -netdev tap,id=br0,vhost=on,ifname=tap1,script=no,downscript=no  -device virtio-net-pci,mac=02:12:34:56:78:9c,netdev=br0 -drive file=gluster://192.168.15.99/gv0/test.qcow2,if=virtio,aio=native,cache=none


fio commandline:
# fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=64M --readwrite=randrw --rwmixread=75


Test run, no throttling:
  read : io=49000KB, bw=804450 B/s, iops=196 , runt= 62373msec
  write: io=16536KB, bw=271477 B/s, iops=66 , runt= 62373msec

Throttle via QMP:
{"execute": "block_set_io_throttle", "arguments": {"device": "virtio1", "bps": 0, "bps_rd": 0, "bps_wr": 0, "iops": 0, "iops_rd": 15, "iops_wr": 15} }

Test run, with throttling:
  read : io=49204KB, bw=61453 B/s, iops=15 , runt=819888msec
  write: io=16332KB, bw=20397 B/s, iops=4 , runt=819888msec
Comment 14 Jeff Cody 2015-11-11 09:02:02 EST
Peter, Paul:
Are there any logs that show the QEMU commands issued?  As described in comment #13, it looks to me as if it should work from QEMU.  I'd like to verify that 1., the command is being sent to QEMU, and 2., what the actual qapi command is that is sent.  Thanks!
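One way to capture that, assuming libvirt's standard logging knobs (a sketch; paths are the usual defaults): enable monitor logging in libvirtd.conf and restart libvirtd, after which the QMP traffic, including any block_set_io_throttle calls, shows up in the daemon log.

```
# /etc/libvirt/libvirtd.conf -- sketch, default paths assumed
log_filters="1:qemu_monitor"
log_outputs="1:file:/var/log/libvirt/libvirtd.log"
```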
Comment 15 Paul Cuzner 2015-11-12 00:46:35 EST
(In reply to Jeff Cody from comment #14)
> Peter, Paul:
> Are there any logs that show the QEMU commands issued?  As described in
> comment #13, it looks to me as if it should work from QEMU.  I'd like to
> verify that 1., the command is being sent to QEMU, and 2., what the actual
> qapi command is that is sent.  Thanks!

I'll take a look at the vdsm logs tomorrow.
Comment 17 Paul Cuzner 2016-02-16 19:52:33 EST
Just picking up on this with the latest rhev 3.6 beta.

Several observations:

1) When the default disk profile has an IOPS limitation imposed, I can confirm that the limit is visible on the qemu invocation (e.g. aio=threads,iops=100).

2) I ran fio and confirmed that the IOPS limitation set by the profile was being adhered to, which is great.

3) In the UI, when I create a new QoS and then add a disk profile that uses it to a storage domain, I can't seem to apply that profile to a vdisk. When I select a VM and edit one of its disks, I can select the new disk profile from the pulldown and click 'OK'... BUT when I return to the disk properties, the original disk profile setting is still in place.
Comment 18 Ademar Reis 2016-03-15 13:22:10 EDT
(In reply to Jeff Cody from comment #14)
> Peter, Paul:
> Are there any logs that show the QEMU commands issued?  As described in
> comment #13, it looks to me as if it should work from QEMU.  I'd like to
> verify that 1., the command is being sent to QEMU, and 2., what the actual
> qapi command is that is sent.  Thanks!

AFAICS, we still don't have these logs... Paul?
Comment 19 Paul Cuzner 2016-03-16 00:35:44 EDT
I'll get the info to you as soon as I can. My test environment has been hosed by network changes; I need that resolved before I can get into this.

Anecdotally, what I'm seeing since 3.6 GA is that when the limit is placed in the unlimited profile, I see the additional parameters on the qemu process (e.g. aio=threads,iops=200). I'll confirm as soon as I can.

However, if I try to change the settings to introduce a different profile, it doesn't work; this is covered by BZ 1297734.
Comment 20 Ademar Reis 2016-03-16 15:32:33 EDT
(In reply to Jeff Cody from comment #13)
> Here is the version of qemu I used for my testing:
> 
> qemu version:
> qemu-kvm-rhev-2.1.2-23.el7_1_1.10
> 
> 
> qemu commandline:
> qemu-system-x86_64 -enable-kvm -drive
> file=F18Test.qcow2,if=virtio,boot=on,node-name="test",aio=native,cache=none
> -chardev socket,path=/tmp/qga.sock,server,nowait,id=qga0 -device
> virtio-serial -device
> virtserialport,chardev=qga0,name=org.qemu.guest_agent.0 -m 1024 -boot c -qmp
> stdio -netdev tap,id=br0,vhost=on,ifname=tap1,script=no,downscript=no 
> -device virtio-net-pci,mac=02:12:34:56:78:9c,netdev=br0 -drive
> file=gluster://192.168.15.99/gv0/test.qcow2,if=virtio,aio=native,cache=none
> 
> 
> fio commandline:
> # fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1
> --name=test --filename=test --bs=4k --iodepth=64 --size=64M
> --readwrite=randrw --rwmixread=75
> 
> 
> Test run, no throttling:
>   read : io=49000KB, bw=804450 B/s, iops=196 , runt= 62373msec
>   write: io=16536KB, bw=271477 B/s, iops=66 , runt= 62373msec
> 
> Throttle via QMP:
> {"execute": "block_set_io_throttle", "arguments": {"device": "virtio1",
> "bps": 0, "bps_rd": 0, "bps_wr": 0, "iops": 0, "iops_rd": 15, "iops_wr": 15}
> }
> 
> Test run, with throttling:
>   read : io=49204KB, bw=61453 B/s, iops=15 , runt=819888msec
>   write: io=16332KB, bw=20397 B/s, iops=4 , runt=819888msec

Given that it works when testing QEMU alone, we'll wait for the results of Paul's tests. In the meantime, I'm marking it condnak(reproducer).

Maybe it's actually a consequence of Bug 1297734.
Comment 21 Paul Cuzner 2016-03-16 22:30:55 EDT
OK. I have rerun the tests and can see on the GA release of RHEV 3.6 that the iops parameters are passed to the running qemu process.

I can also confirm they're working well. :) I tested with fio unlimited and then limited to 100 IOPS, and the fio results were:
Unlimited 
<snip>
read : io=359576KB, bw=2996.3KB/s, iops=749, runt=120009msec
</snip>

Limited
<snip>
read : io=48056KB, bw=409961B/s, iops=100, runt=120034msec
</snip>

With a limit in place, the qemu process has the aio=threads,iops=100 parameters.
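A quick way to double-check that from a shell (a sketch; the canned line below stands in for real `ps -o args= -C qemu-kvm` output on the hypervisor):

```shell
# Sample qemu command line; on a real host capture it with:
#   ps -o args= -C qemu-kvm
qemu_args='-drive file=gluster://192.168.15.99/gv0/test.qcow2,if=virtio,aio=threads,iops=100'

# Split the comma-separated -drive options and pick out the throttle keys.
echo "$qemu_args" | tr ',' '\n' | grep '^iops'
```

With the 100 IOPS profile applied, this prints `iops=100`.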

So it looks like the issue now is solely BZ 1297734.

Thanks for your patience!
Comment 22 Paul Cuzner 2016-03-16 22:31 EDT
Created attachment 1137276 [details]
qemu process output showing the iops limiting parameters
Comment 23 Jeff Cody 2016-03-17 12:52:45 EDT
Thanks Paul.  Based on comment #21, closing this as a dupe of BZ 1297734.

*** This bug has been marked as a duplicate of bug 1297734 ***
