Bug 1334545

Summary: [RFE][cinder] Capacity derived storage QoS limits
Product: Red Hat OpenStack
Reporter: Sean Cohen <scohen>
Component: openstack-cinder
Assignee: Eric Harney <eharney>
Status: CLOSED ERRATA
QA Contact: Avi Avraham <aavraham>
Severity: unspecified
Docs Contact: Don Domingo <ddomingo>
Priority: medium
Version: unspecified
CC: aavraham, acanan, asimonel, cschwede, egafford, eharney, flucifre, kbader, mariel, msufiyan, nlevinki, pgrist, srevivo, tshefi
Target Milestone: Upstream M2
Keywords: FutureFeature, Triaged
Target Release: 12.0 (Pike)
Hardware: Unspecified
OS: Unspecified
URL: https://blueprints.launchpad.net/cinder/+spec/capacity-based-qos
Whiteboard: upstream_milestone_none upstream_definition_discussion upstream_status_started
Fixed In Version: openstack-cinder-11.0.0-0.20170515040117.dc60ec4.el7ost
Doc Type: Enhancement
Doc Text:
You can now set QoS IOPS limits that scale with the volume size in GB, using the options "total_iops_sec_per_gb", "read_iops_sec_per_gb", and "write_iops_sec_per_gb". For example, if you set total_iops_sec_per_gb=1000, a 1GB volume gets 1000 IOPS, a 2GB volume gets 2000 IOPS, and so on.
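As a sketch of the scaling described above (illustrative Python only, not Cinder source; the helper name scale_qos_specs is hypothetical, though the spec keys are the documented options):

```python
def scale_qos_specs(specs, volume_size_gb):
    """Multiply any *_per_gb QoS spec by the volume size in GB.

    Illustrative only -- Cinder applies equivalent scaling when the
    per-GB capacity-based QoS options are set on a volume type.
    """
    scaled = {}
    for key, value in specs.items():
        if key.endswith('_per_gb'):
            # e.g. total_iops_sec_per_gb=1000 on a 2GB volume
            # yields a 2000 IOPS limit
            scaled[key[:-len('_per_gb')]] = int(value) * volume_size_gb
        else:
            scaled[key] = int(value)
    return scaled

print(scale_qos_specs({'total_iops_sec_per_gb': 1000}, 2))
# {'total_iops_sec': 2000}
```

In practice the specs are created with the cinder CLI, roughly: `cinder qos-create <name> consumer=front-end total_iops_sec_per_gb=1000`, then bound to a volume type with `cinder qos-associate`.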
Story Points: ---
Clone Of:
: 1470904
Environment:
Last Closed: 2017-12-13 20:41:55 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1442136, 1470904

Description Sean Cohen 2016-05-10 00:34:06 UTC
AWS EBS provides a deterministic number of IOPS based on the capacity of the provisioned volume with Provisioned IOPS. Similarly, the newly announced throughput optimized volumes provide deterministic throughput based on the capacity of the provisioned volume. Cinder should, in addition to current per volume maximums, be able to set lower qos limits based on the provisioned capacity.

Comment 1 Sean Cohen 2016-06-01 12:19:05 UTC
*** Bug 1328728 has been marked as a duplicate of this bug. ***

Comment 9 Avi Avraham 2017-08-23 11:17:13 UTC
I could not see any change in read or write rates while performing manual tests of this feature, so I have a few questions about the implementation:
0) Is any configuration (in cinder.conf / nova.conf) needed to activate QoS in Cinder or Nova?
1) Is this feature applicable to all backends?
2) Is it applicable to local LVM?
3) When a QoS value is changed, does the volume need to be detached from the server?
Thanks 
Avi

Comment 10 Avi Avraham 2017-09-05 16:48:05 UTC
Verified according to test plan RHELOSP-24186.
The following packages were tested on the setup:
puppet-cinder-11.3.0-0.20170805095005.74836f2.el7ost.noarch
openstack-cinder-11.0.0-0.20170807225447.7ec31dc.el7ost.noarch
python-cinderclient-3.1.0-0.20170802135939.99bb6f3.el7ost.noarch
python-cinder-11.0.0-0.20170807225447.7ec31dc.el7ost.noarch

*************************************************************************
Read IOPS results for the 1GB volume, the 5GB volume, and a volume with no QoS:

# rm -f /root/kuku/file512MB ; rsync --progress /vol1gb/file512MB /root/kuku/
file512MB
   536870912 100%    1.25MB/s    0:06:49 (xfer#1, to-check=0/1)

sent 536936518 bytes  received 31 bytes  1308006.21 bytes/sec
total size is 536870912  speedup is 1.00
# rm -f /root/kuku/file512MB ; rsync --progress /vol5gb/file512MB /root/kuku/
file512MB
   536870912 100%    6.26MB/s    0:01:21 (xfer#1, to-check=0/1)

sent 536936518 bytes  received 31 bytes  6508321.81 bytes/sec
total size is 536870912  speedup is 1.00
# rm -f /root/kuku/file512MB ; rsync --progress /vol_no_qos/file512MB /root/kuku/
file512MB
   536870912 100%   96.27MB/s    0:00:05 (xfer#1, to-check=0/1)

sent 536936518 bytes  received 31 bytes  97624827.09 bytes/sec
total size is 536870912  speedup is 1.00
******************************************************************* 
Write IOPS results for the 1GB volume, the 5GB volume, and a volume with no QoS:
# rm -f /vol1gb/file512MB ;rsync --progress /root/file512MB /vol1gb/
file512MB
   536870912 100%    3.45MB/s    0:02:28 (xfer#1, to-check=0/1)

sent 536936518 bytes  received 31 bytes  3591548.82 bytes/sec
total size is 536870912  speedup is 1.00
# rm -f /vol5gb/file512MB ;rsync --progress /root/file512MB /vol5gb/
file512MB
   536870912 100%   19.31MB/s    0:00:26 (xfer#1, to-check=0/1)

sent 536936518 bytes  received 31 bytes  19524965.42 bytes/sec
total size is 536870912  speedup is 1.00
# rm -f /vol_no_qos/file512MB;rsync --progress /root/file512MB /vol_no_qos/
file512MB
   536870912 100%   91.93MB/s    0:00:05 (xfer#1, to-check=0/1)

sent 536936518 bytes  received 31 bytes  97624827.09 bytes/sec
total size is 536870912  speedup is 1.00
*****************************************************************
Total IOPS results for the 1GB volume, the 5GB volume, and a volume with no QoS:
# rm -f /vol1gb/file512MB ;rsync --progress /root/file512MB /vol1gb/
file512MB
   536870912 100%    7.05MB/s    0:01:12 (xfer#1, to-check=0/1)

sent 536936518 bytes  received 31 bytes  7305259.17 bytes/sec
total size is 536870912  speedup is 1.00
# rm -f /vol5gb/file512MB ;rsync --progress /root/file512MB /vol5gb/
file512MB
   536870912 100%   36.41MB/s    0:00:14 (xfer#1, to-check=0/1)

sent 536936518 bytes  received 31 bytes  37030106.83 bytes/sec
total size is 536870912  speedup is 1.00
# rm -f /vol_no_qos/file512MB;rsync --progress /root/file512MB /vol_no_qos/
file512MB
   536870912 100%   87.31MB/s    0:00:05 (xfer#1, to-check=0/1)

sent 536936518 bytes  received 31 bytes  82605622.92 bytes/sec
total size is 536870912  speedup is 1.00
 
# rm -f /root/kuku/file512MB;rsync --progress /vol1gb/file512MB /root/kuku/
file512MB
   536870912 100%    1.25MB/s    0:06:50 (xfer#1, to-check=0/1)

sent 536936518 bytes  received 31 bytes  1308006.21 bytes/sec
total size is 536870912  speedup is 1.00
# rm -f /root/kuku/file512MB;rsync --progress /vol5gb/file512MB /root/kuku/
file512MB
   536870912 100%    6.26MB/s    0:01:21 (xfer#1, to-check=0/1)

sent 536936518 bytes  received 31 bytes  6508321.81 bytes/sec
total size is 536870912  speedup is 1.00
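The measurements above are consistent with per-GB scaling: a quick check of the reported rsync rates (a small Python sanity check using only the numbers quoted in this comment) shows the 5GB volume runs roughly 5x faster than the 1GB volume in each workload, while the volume without QoS is bounded only by the backend.

```python
# rsync rates in MB/s, copied from the transcripts above
rates_mb_s = {
    'read':  {'1gb': 1.25, '5gb': 6.26,  'no_qos': 96.27},
    'write': {'1gb': 3.45, '5gb': 19.31, 'no_qos': 91.93},
    'total': {'1gb': 7.05, '5gb': 36.41, 'no_qos': 87.31},
}
for workload, r in rates_mb_s.items():
    # with *_iops_sec_per_gb limits, the 5GB/1GB ratio should be ~5
    print(f"{workload}: 5GB/1GB ratio = {r['5gb'] / r['1gb']:.2f}")
```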

Comment 14 Mikey Ariel 2017-11-28 09:57:33 UTC
Hi Eric, I only edited the draft doc text that was already in this bug, so I apologize if I missed anything; I assumed the text was technically accurate and therefore only required proofreading.

If you could update the doc text field to reflect the issue and the fix, following the doc text format if possible, that would be a great help; I can then tweak it to align with Errata conventions.

Comment 18 errata-xmlrpc 2017-12-13 20:41:55 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:3462