Bug 1041653 - [RFE][cinder]: when deleting volume in lvm, dd disk i/o performance issue
Summary: [RFE][cinder]: when deleting volume in lvm, dd disk i/o performance issue
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-cinder
Version: unspecified
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: low
Target Milestone: ---
Target Release: 7.0 (Kilo)
Assignee: Eric Harney
QA Contact: nlevinki
URL: https://blueprints.launchpad.net/cind...
Whiteboard: upstream_milestone_next upstream_stat...
Depends On:
Blocks:
 
Reported: 2013-12-12 18:30 UTC by RHOS Integration
Modified: 2016-04-26 23:52 UTC (History)
4 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-03-21 20:23:11 UTC
Target Upstream Version:


Attachments (Terms of Use)


Links
System ID Private Priority Status Summary Last Updated
OpenStack gerrit 74810 0 None None None Never

Description RHOS Integration 2013-12-12 18:30:19 UTC
Cloned from launchpad blueprint https://blueprints.launchpad.net/cinder/+spec/when-deleting-volume-dd-performance.

Description:

I ran into trouble with Cinder volume deletion.
I am developing big-data storage support (e.g. Hadoop) on LVM.

Deleting a Cinder LVM volume consumes the full disk I/O bandwidth because of the dd zeroing pass.
The high disk I/O affects the other Hadoop instances on the same host.

So when deleting a volume, I added an I/O scheduling class via ionice.

I used it in the following way:

     def _copy_volume(self, srcstr, deststr, size_in_g):
-        self._execute('dd', 'if=%s' % srcstr, 'of=%s' % deststr,
+        self._execute('ionice', '-c3', 'dd', 'iflag=direct', 'oflag=direct',
+                      'if=%s' % srcstr, 'of=%s' % deststr,
                       'count=%d' % (size_in_g * 1024), 'bs=1M',
                       run_as_root=True)
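To illustrate the change above outside of cinder, here is a minimal standalone sketch (not the actual cinder code) of how the patched _copy_volume builds its command line. 'ionice -c3' puts the copy in the "idle" I/O scheduling class, so the wipe only uses disk bandwidth no other process wants, and iflag/oflag=direct bypass the page cache. The function name is illustrative only.

```python
def build_copy_volume_cmd(srcstr, deststr, size_in_g):
    # Prefix dd with ionice -c3 (idle class) so the zeroing pass
    # yields disk bandwidth to other workloads on the host.
    return ['ionice', '-c3',
            'dd', 'iflag=direct', 'oflag=direct',
            'if=%s' % srcstr, 'of=%s' % deststr,
            'count=%d' % (size_in_g * 1024), 'bs=1M']

# Example: copy 2 GiB in 1 MiB blocks (2048 blocks).
print(build_copy_volume_cmd('/dev/vg/vol-src', '/dev/vg/vol-dst', 2))
```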

Specification URL (additional information):

None

Comment 3 Sean Cohen 2015-01-26 03:14:28 UTC
Code was merged upstream
Tracking for Kilo feature freeze
Sean

Comment 4 Sean Cohen 2015-03-16 13:57:26 UTC
Did not make it into Kilo, pushing to rhos-8.0 review,
Sean

Comment 5 Eric Harney 2015-03-16 14:07:34 UTC
Was implemented in Kilo, upstream blueprint just isn't in sync yet.

Comment 6 Sean Cohen 2015-03-16 14:13:19 UTC
(In reply to Eric Harney from comment #5)
> Was implemented in Kilo, upstream blueprint just isn't in sync yet.

Although it was merged on 2014-04-28, it has not been flagged as implemented yet.
Might get pushed to rhos-8.0,
Sean

Comment 7 Eric Harney 2016-03-21 18:24:17 UTC
This has existed since Icehouse.  Set volume_clear_ionice to "-c3" or similar to test.
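For reference, a sketch of the cinder.conf setting Comment 7 refers to; the backend section name "lvm" and driver line are illustrative examples, not taken from this bug report:

```ini
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
# Run the volume-wipe dd under the "idle" I/O scheduling class (-c3),
# so deletions do not starve other workloads of disk bandwidth.
volume_clear_ionice = -c3
```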

