Bug 1334545 - [RFE][cinder] Capacity derived storage QoS limits
Summary: [RFE][cinder] Capacity derived storage QoS limits
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-cinder
Version: unspecified
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: unspecified
Target Milestone: Upstream M2
Target Release: 12.0 (Pike)
Assignee: Eric Harney
QA Contact: Avi Avraham
Docs Contact: Don Domingo
URL: https://blueprints.launchpad.net/cind...
Whiteboard: upstream_milestone_none upstream_defi...
Duplicates: 1328728 (view as bug list)
Depends On:
Blocks: 1442136 1470904
 
Reported: 2016-05-10 00:34 UTC by Sean Cohen
Modified: 2018-02-05 19:02 UTC
CC: 14 users

Fixed In Version: openstack-cinder-11.0.0-0.20170515040117.dc60ec4.el7ost
Doc Type: Enhancement
Doc Text:
You can now set QoS IOPS limits that scale with the size (in GB) of the volume, using the options "total_iops_sec_per_gb", "read_iops_sec_per_gb", and "write_iops_sec_per_gb". For example, if you set total_iops_sec_per_gb=1000, you get 1000 IOPS for a 1GB volume, 2000 IOPS for a 2GB volume, and so on.
Clone Of:
Clones: 1470904 (view as bug list)
Environment:
Last Closed: 2017-12-13 20:41:55 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
OpenStack gerrit 178262 0 None MERGED RBD Thin Provisioning stats 2020-10-12 19:26:49 UTC
OpenStack gerrit 447127 0 None MERGED Add IOPS limits that scale per-GB 2020-10-12 19:26:49 UTC
Red Hat Product Errata RHEA-2017:3462 0 normal SHIPPED_LIVE Red Hat OpenStack Platform 12.0 Enhancement Advisory 2018-02-16 01:43:25 UTC

Description Sean Cohen 2016-05-10 00:34:06 UTC
AWS EBS provides a deterministic number of IOPS based on the capacity of the provisioned volume with Provisioned IOPS. Similarly, the newly announced throughput-optimized volumes provide deterministic throughput based on the capacity of the provisioned volume. Cinder should, in addition to the current per-volume maximums, be able to set QoS limits that scale with the provisioned capacity.
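To illustrate the capacity-scaled semantics, here is a minimal sketch. The arithmetic mirrors the per-GB rule (effective limit = per-GB key times volume size in GB); the QoS spec name, limit value, and the placeholder IDs in the comments are illustrative, not taken from this bug.

```shell
# Capacity-derived QoS: effective limit = per-GB key * volume size in GB.
# Creating and attaching such a spec would look roughly like:
#   cinder qos-create scaled-iops consumer=front-end total_iops_sec_per_gb=1000
#   cinder qos-associate <qos-spec-id> <volume-type-id>
per_gb_limit=1000   # value of total_iops_sec_per_gb in the QoS spec
for size_gb in 1 2 5; do
    echo "${size_gb}GB volume -> $(( per_gb_limit * size_gb )) IOPS"
done
```

A 5GB volume is thus capped at five times the IOPS of a 1GB one, rather than at a single fixed per-volume maximum.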

Comment 1 Sean Cohen 2016-06-01 12:19:05 UTC
*** Bug 1328728 has been marked as a duplicate of this bug. ***

Comment 9 Avi Avraham 2017-08-23 11:17:13 UTC
I did not see any change in read or write performance while performing manual tests of this feature, so I have a few questions about its implementation:
0) Is any configuration (in the cinder.conf or nova.conf files) needed to activate QoS in Cinder or Nova?
1) Is this feature applicable to all backends?
2) Is it applicable to local LVM?
3) When a QoS value is changed, does the volume need to be detached from the server and reattached?
Thanks 
Avi

Comment 10 Avi Avraham 2017-09-05 16:48:05 UTC
Verified according to test plan RHELOSP-24186
The following packages were tested on the setup:
puppet-cinder-11.3.0-0.20170805095005.74836f2.el7ost.noarch
openstack-cinder-11.0.0-0.20170807225447.7ec31dc.el7ost.noarch
python-cinderclient-3.1.0-0.20170802135939.99bb6f3.el7ost.noarch
python-cinder-11.0.0-0.20170807225447.7ec31dc.el7ost.noarch

*************************************************************************
The read IOPS results on the 1GB and 5GB volumes:

# rm -f /root/kuku/file512MB ; rsync --progress /vol1gb/file512MB /root/kuku/
file512MB
   536870912 100%    1.25MB/s    0:06:49 (xfer#1, to-check=0/1)

sent 536936518 bytes  received 31 bytes  1308006.21 bytes/sec
total size is 536870912  speedup is 1.00
# rm -f /root/kuku/file512MB ; rsync --progress /vol5gb/file512MB /root/kuku/
file512MB
   536870912 100%    6.26MB/s    0:01:21 (xfer#1, to-check=0/1)

sent 536936518 bytes  received 31 bytes  6508321.81 bytes/sec
total size is 536870912  speedup is 1.00
# rm -f /root/kuku/file512MB ; rsync --progress /vol_no_qos/file512MB /root/kuku/
file512MB
   536870912 100%   96.27MB/s    0:00:05 (xfer#1, to-check=0/1)

sent 536936518 bytes  received 31 bytes  97624827.09 bytes/sec
total size is 536870912  speedup is 1.00
******************************************************************* 
The write IOPS results on the 1GB and 5GB volumes:
rm -f /vol1gb/file512MB ;rsync --progress /root/file512MB /vol1gb/
file512MB
   536870912 100%    3.45MB/s    0:02:28 (xfer#1, to-check=0/1)

sent 536936518 bytes  received 31 bytes  3591548.82 bytes/sec
total size is 536870912  speedup is 1.00
# rm -f /vol5gb/file512MB ;rsync --progress /root/file512MB /vol5gb/
file512MB
   536870912 100%   19.31MB/s    0:00:26 (xfer#1, to-check=0/1)

sent 536936518 bytes  received 31 bytes  19524965.42 bytes/sec
total size is 536870912  speedup is 1.00
# rm -f /vol_no_qos/file512MB;rsync --progress /root/file512MB /vol_no_qos/
file512MB
   536870912 100%   91.93MB/s    0:00:05 (xfer#1, to-check=0/1)

sent 536936518 bytes  received 31 bytes  97624827.09 bytes/sec
total size is 536870912  speedup is 1.00
*****************************************************************
The total IOPS results on the 1GB and 5GB volumes:
# rm -f /vol1gb/file512MB ;rsync --progress /root/file512MB /vol1gb/
file512MB
   536870912 100%    7.05MB/s    0:01:12 (xfer#1, to-check=0/1)

sent 536936518 bytes  received 31 bytes  7305259.17 bytes/sec
total size is 536870912  speedup is 1.00
# rm -f /vol5gb/file512MB ;rsync --progress /root/file512MB /vol5gb/
file512MB
   536870912 100%   36.41MB/s    0:00:14 (xfer#1, to-check=0/1)

sent 536936518 bytes  received 31 bytes  37030106.83 bytes/sec
total size is 536870912  speedup is 1.00
# rm -f /vol_no_qos/file512MB;rsync --progress /root/file512MB /vol_no_qos/
file512MB
   536870912 100%   87.31MB/s    0:00:05 (xfer#1, to-check=0/1)

sent 536936518 bytes  received 31 bytes  82605622.92 bytes/sec
total size is 536870912  speedup is 1.00
 
# rm -f /root/kuku/file512MB;rsync --progress /vol1gb/file512MB /root/kuku/
file512MB
   536870912 100%    1.25MB/s    0:06:50 (xfer#1, to-check=0/1)

sent 536936518 bytes  received 31 bytes  1308006.21 bytes/sec
total size is 536870912  speedup is 1.00
# rm -f /root/kuku/file512MB;rsync --progress /vol5gb/file512MB /root/kuku/
file512MB
   536870912 100%    6.26MB/s    0:01:21 (xfer#1, to-check=0/1)

sent 536936518 bytes  received 31 bytes  6508321.81 bytes/sec
total size is 536870912  speedup is 1.00
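As a sanity check on the transcripts above, the reported rsync rates scale roughly fivefold between the 1GB and 5GB QoS-limited volumes, which is what capacity-scaled per-GB limits predict. A quick recomputation of the ratios from the reported bytes/sec figures:

```shell
# Ratios of the reported rsync rates (bytes/sec), 5GB volume vs 1GB volume.
# With *_iops_sec_per_gb limits the expected ratio is ~5.
awk 'BEGIN {
    printf "read:  %.2f\n", 6508321.81 / 1308006.21
    printf "write: %.2f\n", 19524965.42 / 3591548.82
    printf "total: %.2f\n", 37030106.83 / 7305259.17
}'
```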

Comment 14 Mikey Ariel 2017-11-28 09:57:33 UTC
Hi Eric, I only edited the draft doc text that was already in this bug, so I apologize if I missed anything, but I assumed the text was technically accurate and therefore only required proofreading.

If you could update the doc text field to reflect the issue and the fix, following the doc text format if possible, that would be a great help; I can then tweak it to align with Errata conventions.

Comment 18 errata-xmlrpc 2017-12-13 20:41:55 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:3462

