Bug 1041653

Summary: [RFE][cinder]: when deleting volume in lvm, dd disk i/o performance issue
Product: Red Hat OpenStack
Reporter: RHOS Integration <rhos-integ>
Component: openstack-cinder
Assignee: Eric Harney <eharney>
Status: CLOSED CURRENTRELEASE
QA Contact: nlevinki <nlevinki>
Severity: low
Priority: medium
Version: unspecified
CC: eharney, markmc, scohen, yeylon
Keywords: FutureFeature, Triaged, ZStream
Target Release: 7.0 (Kilo)
Hardware: Unspecified
OS: Unspecified
URL: https://blueprints.launchpad.net/cinder/+spec/when-deleting-volume-dd-performance
Whiteboard: upstream_milestone_next upstream_status_started upstream_definition_obsolete
Doc Type: Enhancement
Last Closed: 2016-03-21 20:23:11 UTC

Description RHOS Integration 2013-12-12 18:30:19 UTC
Cloned from launchpad blueprint https://blueprints.launchpad.net/cinder/+spec/when-deleting-volume-dd-performance.

Description:

I ran into trouble with Cinder volume deletion.
I am developing support for big-data storage such as Hadoop on LVM.

Deleting a Cinder LVM volume consumes the full disk I/O bandwidth, because the volume is zeroed with dd.
The high disk I/O affects the other Hadoop instances on the same host.

So when deleting a volume, I wrapped dd with the disk I/O scheduling utility ionice.

Used in the following way:

     def _copy_volume(self, srcstr, deststr, size_in_g):
-        self._execute('dd', 'if=%s' % srcstr, 'of=%s' % deststr,
+        self._execute('ionice', '-c3', 'dd', 'iflag=direct', 'oflag=direct',
+                      'if=%s' % srcstr, 'of=%s' % deststr,
                       'count=%d' % (size_in_g * 1024), 'bs=1M',
                       run_as_root=True)
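The patch above can be sketched as a standalone helper. `build_copy_cmd` and its `ionice` parameter are hypothetical names for illustration; the function only builds the argv list that the patched `_copy_volume` would hand to `_execute`:

```python
import shlex


def build_copy_cmd(srcstr, deststr, size_in_g, ionice=None):
    """Build the dd argv for copying/zeroing a volume.

    If `ionice` is given (e.g. "-c3" for the idle I/O class), the dd
    command is prefixed with ionice so the copy yields disk bandwidth
    to other workloads on the host.
    """
    cmd = ['dd', 'if=%s' % srcstr, 'of=%s' % deststr,
           'count=%d' % (size_in_g * 1024), 'bs=1M',
           'iflag=direct', 'oflag=direct']
    if ionice:
        cmd = ['ionice'] + shlex.split(ionice) + cmd
    return cmd
```

With `ionice='-c3'` this yields `['ionice', '-c3', 'dd', ...]`, matching the proposed change; with `ionice=None` it degrades to the original plain dd invocation.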

Specification URL (additional information):

None

Comment 3 Sean Cohen 2015-01-26 03:14:28 UTC
Code was merged upstream
Tracking for Kilo feature freeze
Sean

Comment 4 Sean Cohen 2015-03-16 13:57:26 UTC
Did not make it into Kilo, pushing to rhos-8.0 review,
Sean

Comment 5 Eric Harney 2015-03-16 14:07:34 UTC
Was implemented in Kilo, upstream blueprint just isn't in sync yet.

Comment 6 Sean Cohen 2015-03-16 14:13:19 UTC
(In reply to Eric Harney from comment #5)
> Was implemented in Kilo, upstream blueprint just isn't in sync yet.

Although it was merged on 2014-04-28, it was not flagged as implemented yet.
Might get pushed to rhos-8.0,
Sean

Comment 7 Eric Harney 2016-03-21 18:24:17 UTC
This has existed since Icehouse.  Set volume_clear_ionice to "-c3" or similar to test.
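Comment 7 refers to the `volume_clear_ionice` option in cinder.conf. A minimal sketch of enabling it (the backend section name is an assumption; place it in whichever backend section applies):

```ini
[lvm]
# Run the volume-clearing dd under ionice in the idle I/O class
volume_clear_ionice = -c3
```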