Bug 1334545 - [RFE][cinder] Capacity derived storage QoS limits
Status: CLOSED ERRATA
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-cinder
Version: unspecified
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: unspecified
Target Milestone: Upstream M2
Target Release: 12.0 (Pike)
Assigned To: Eric Harney
QA Contact: Avi Avraham
Docs Contact: Don Domingo
URL: https://blueprints.launchpad.net/cind...
Whiteboard: upstream_milestone_none upstream_defi...
Keywords: FutureFeature, Triaged
Duplicates: 1328728
Depends On:
Blocks: 1442136 1470904
Reported: 2016-05-09 20:34 EDT by Sean Cohen
Modified: 2018-02-05 14:02 EST
CC List: 14 users
See Also:
Fixed In Version: openstack-cinder-11.0.0-0.20170515040117.dc60ec4.el7ost
Doc Type: Enhancement
Doc Text:
You can now set QoS IOPS limits that scale with the size of the volume, using the options "total_iops_sec_per_gb", "read_iops_sec_per_gb", and "write_iops_sec_per_gb". For example, if you set total_iops_sec_per_gb=1000, you get 1000 IOPS for a 1GB volume, 2000 IOPS for a 2GB volume, and so on.
Story Points: ---
Clone Of:
Clones: 1470904
Environment:
Last Closed: 2017-12-13 15:41:55 EST
Type: Bug




External Trackers:
OpenStack gerrit 178262 (last updated 2016-09-23 12:52 EDT)
OpenStack gerrit 447127 (last updated 2017-04-25 11:16 EDT)
Red Hat Product Errata RHEA-2017:3462 (normal, SHIPPED_LIVE): Red Hat OpenStack Platform 12.0 Enhancement Advisory (last updated 2018-02-15 20:43:25 EST)

Description Sean Cohen 2016-05-09 20:34:06 EDT
AWS EBS provides a deterministic number of IOPS based on the capacity of the provisioned volume with Provisioned IOPS. Similarly, the newly announced throughput-optimized volumes provide deterministic throughput based on the capacity of the provisioned volume. In addition to the current per-volume maximums, Cinder should be able to set QoS limits derived from the provisioned capacity.
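(For illustration, here is a minimal sketch of how a capacity-derived limit is defined through Cinder QoS specs as the feature eventually landed; the spec and volume-type names are hypothetical and not taken from this bug.)

# Create a QoS spec whose front-end IOPS cap scales with volume size (per-GB key).
cinder qos-create scaled-iops consumer=front-end total_iops_sec_per_gb=1000

# Associate it with a volume type; each volume of that type then gets
# 1000 IOPS per GB of its size (1GB -> 1000 IOPS, 5GB -> 5000 IOPS).
cinder type-create scaled-iops-type
cinder qos-associate <qos-spec-id> <volume-type-id>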
Comment 1 Sean Cohen 2016-06-01 08:19:05 EDT
*** Bug 1328728 has been marked as a duplicate of this bug. ***
Comment 9 Avi Avraham 2017-08-23 07:17:13 EDT
I did not see any change when writing or reading files while performing manual tests of this feature, so I have a few questions about the implementation:
0) Is any configuration (in cinder.conf or nova.conf) needed to activate QoS in Cinder or Nova?
1) Is this feature applicable to all backends?
2) Is it applicable to local LVM?
3) When a QoS value is changed, does the volume need to be detached from the server and reattached?
Thanks,
Avi
Comment 10 Avi Avraham 2017-09-05 12:48:05 EDT
Verified according to test plan RHELOSP-24186.
The following packages were tested on the setup:
puppet-cinder-11.3.0-0.20170805095005.74836f2.el7ost.noarch
openstack-cinder-11.0.0-0.20170807225447.7ec31dc.el7ost.noarch
python-cinderclient-3.1.0-0.20170802135939.99bb6f3.el7ost.noarch
python-cinder-11.0.0-0.20170807225447.7ec31dc.el7ost.noarch
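(For context, a per-GB QoS setup for a test like this could look roughly as follows; this is only a sketch, and the spec, type, and volume names are assumptions rather than details taken from test plan RHELOSP-24186.)

# Per-GB QoS spec and volume type; the read_iops_sec_per_gb and
# write_iops_sec_per_gb keys can be exercised the same way in separate runs.
cinder qos-create per-gb-qos consumer=front-end total_iops_sec_per_gb=1000
cinder type-create per-gb-type
cinder qos-associate <qos-spec-id> <volume-type-id>

# Volumes of different sizes inherit proportionally scaled limits;
# a volume created without the type serves as the no-QoS baseline.
cinder create 1 --volume-type per-gb-type --name vol1gb
cinder create 5 --volume-type per-gb-type --name vol5gb
cinder create 5 --name vol_no_qos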

*************************************************************************
Read IOPS results on the 1GB volume, the 5GB volume, and a volume with no QoS:

# rm -f /root/kuku/file512MB ; rsync --progress /vol1gb/file512MB /root/kuku/
file512MB
   536870912 100%    1.25MB/s    0:06:49 (xfer#1, to-check=0/1)

sent 536936518 bytes  received 31 bytes  1308006.21 bytes/sec
total size is 536870912  speedup is 1.00
# rm -f /root/kuku/file512MB ; rsync --progress /vol5gb/file512MB /root/kuku/
file512MB
   536870912 100%    6.26MB/s    0:01:21 (xfer#1, to-check=0/1)

sent 536936518 bytes  received 31 bytes  6508321.81 bytes/sec
total size is 536870912  speedup is 1.00
# rm -f /root/kuku/file512MB ; rsync --progress /vol_no_qos/file512MB /root/kuku/
file512MB
   536870912 100%   96.27MB/s    0:00:05 (xfer#1, to-check=0/1)

sent 536936518 bytes  received 31 bytes  97624827.09 bytes/sec
total size is 536870912  speedup is 1.00
******************************************************************* 
Write IOPS results on the 1GB volume, the 5GB volume, and a volume with no QoS:
# rm -f /vol1gb/file512MB ;rsync --progress /root/file512MB /vol1gb/
file512MB
   536870912 100%    3.45MB/s    0:02:28 (xfer#1, to-check=0/1)

sent 536936518 bytes  received 31 bytes  3591548.82 bytes/sec
total size is 536870912  speedup is 1.00
# rm -f /vol5gb/file512MB ;rsync --progress /root/file512MB /vol5gb/
file512MB
   536870912 100%   19.31MB/s    0:00:26 (xfer#1, to-check=0/1)

sent 536936518 bytes  received 31 bytes  19524965.42 bytes/sec
total size is 536870912  speedup is 1.00
# rm -f /vol_no_qos/file512MB;rsync --progress /root/file512MB /vol_no_qos/
file512MB
   536870912 100%   91.93MB/s    0:00:05 (xfer#1, to-check=0/1)

sent 536936518 bytes  received 31 bytes  97624827.09 bytes/sec
total size is 536870912  speedup is 1.00
*****************************************************************
Total IOPS results on the 1GB volume, the 5GB volume, and a volume with no QoS (writes first, then reads):
# rm -f /vol1gb/file512MB ;rsync --progress /root/file512MB /vol1gb/
file512MB
   536870912 100%    7.05MB/s    0:01:12 (xfer#1, to-check=0/1)

sent 536936518 bytes  received 31 bytes  7305259.17 bytes/sec
total size is 536870912  speedup is 1.00
# rm -f /vol5gb/file512MB ;rsync --progress /root/file512MB /vol5gb/
file512MB
   536870912 100%   36.41MB/s    0:00:14 (xfer#1, to-check=0/1)

sent 536936518 bytes  received 31 bytes  37030106.83 bytes/sec
total size is 536870912  speedup is 1.00
# rm -f /vol_no_qos/file512MB;rsync --progress /root/file512MB /vol_no_qos/
file512MB
   536870912 100%   87.31MB/s    0:00:05 (xfer#1, to-check=0/1)

sent 536936518 bytes  received 31 bytes  82605622.92 bytes/sec
total size is 536870912  speedup is 1.00
 
# rm -f /root/kuku/file512MB;rsync --progress /vol1gb/file512MB /root/kuku/
file512MB
   536870912 100%    1.25MB/s    0:06:50 (xfer#1, to-check=0/1)

sent 536936518 bytes  received 31 bytes  1308006.21 bytes/sec
total size is 536870912  speedup is 1.00
# rm -f /root/kuku/file512MB;rsync --progress /vol5gb/file512MB /root/kuku/
file512MB
   536870912 100%    6.26MB/s    0:01:21 (xfer#1, to-check=0/1)

sent 536936518 bytes  received 31 bytes  6508321.81 bytes/sec
total size is 536870912  speedup is 1.00
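(Side note: besides timing rsync transfers, the limits actually applied to an attached volume can be inspected directly; the IDs and the domain/device names below are placeholders.)

# Inspect the QoS spec as stored in Cinder.
cinder qos-list
cinder qos-show <qos-spec-id>

# On the compute node, show the per-disk iotune values libvirt applied to the
# attached volume (front-end QoS); the domain name and vdb device are examples.
virsh blkdeviotune <instance-domain> vdb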
Comment 14 Mikey Ariel 2017-11-28 04:57:33 EST
Hi Eric, I only edited the draft doc text that was already in this bug, so I apologize if I missed anything; I assumed that the text was technically accurate and therefore only required proofreading.

If you could update the doc text field to reflect the issue and the fix, following the doc text format if possible, that would be a great help, and I can then tweak it to align with Errata conventions.
Comment 18 errata-xmlrpc 2017-12-13 15:41:55 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:3462
