Bug 1337314 - Excessive cpu usage while deleting disks on block storage when "wipe after delete" selected
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: vdsm
Classification: oVirt
Component: Core
Version: 4.18.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ovirt-3.6.7
Target Release: 4.17.29
Assignee: Nir Soffer
QA Contact: Elad
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-05-18 19:26 UTC by Nir Soffer
Modified: 2016-07-04 12:33 UTC
CC List: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-07-04 12:33:48 UTC
oVirt Team: Storage
Embargoed:
rule-engine: ovirt-3.6.z+
ylavi: planning_ack+
tnisan: devel_ack+
acanan: testing_ack+




Links:
oVirt gerrit 57541 (master, MERGED): blockSD: Fix busy loop when zeroing image volumes (2016-05-18 19:41:37 UTC)
oVirt gerrit 57710 (ovirt-3.6, MERGED): blockSD: Fix busy loop when zeroing image volumes (2016-05-23 07:49:27 UTC)

Description Nir Soffer 2016-05-18 19:26:18 UTC
Description of problem:

When deleting disks on block storage with "Wipe After Delete" selected, we can
see vdsm consuming 100% CPU while wiping the disks.
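
The linked gerrit fix is titled "blockSD: Fix busy loop when zeroing image
volumes". As a minimal sketch of that pattern (illustrative only, not the
actual vdsm code; zero_fn stands in for the per-volume wipe), the waiter
polls a result queue with a non-blocking get instead of blocking on it:

    import queue
    import threading

    def wipe_volumes_busy(volumes, zero_fn):
        # Defective pattern: retry a non-blocking get immediately on
        # Empty, spinning one core at 100% until every thread is done.
        results = queue.Queue()
        threads = [threading.Thread(target=lambda v=v: results.put(zero_fn(v)))
                   for v in volumes]
        for t in threads:
            t.start()
        done = []
        while len(done) < len(threads):
            try:
                done.append(results.get(block=False))
            except queue.Empty:
                continue  # busy loop: no sleep, no blocking wait
        return done

    def wipe_volumes_fixed(volumes, zero_fn):
        # Fixed pattern: a blocking get sleeps until a zeroing thread
        # actually reports a result, so the waiter uses ~0% CPU.
        results = queue.Queue()
        threads = [threading.Thread(target=lambda v=v: results.put(zero_fn(v)))
                   for v in volumes]
        for t in threads:
            t.start()
        return [results.get() for _ in threads]

Blocking on the queue (or simply joining the threads) lets the waiting
thread sleep until there is something to collect.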
 
Version-Release number of selected component (if applicable):
Seen on master, but looking at the code, this issue has existed since 2012.

How reproducible:
Always

Steps to Reproduce:
1. Create 20-30g disk on block storage
2. Select "wipe after delete"
3. Delete the disk
4. Run top on the hypervisor

Actual results:
Vdsm consumes 100% CPU until the wipe is finished.

Expected results:
Normal CPU usage.

Additional info:

The defective code was added in this commit:

commit 98c660e91d181dbeda7d4e81cd390460f706044a
Author: Eduardo Warszawski <ewarszaw>
Date:   Thu Oct 4 10:55:51 2012 +0200

    BZ#836161 - Rewrite of deleteImage().
    
    Volume operations should be done at the SD level to avoid
    retrieving static data multiple times from disk.
    Added lvm.lvDmDev() returning the dm-X for active LVs.
    Use this to get active LV size without issue a lvm command.
    
    Change-Id: I304ff5cd70186ffc9789cd1ac9337efa6c5ff695
    Signed-off-by: Eduardo <ewarszaw>
    Reviewed-on: http://gerrit.ovirt.org/8506
    Reviewed-by: Dan Kenigsberg <danken>
    Reviewed-by: Ayal Baron <abaron>
    Tested-by: Haim Ateya <hateya>
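
For context on the lvm.lvDmDev() approach mentioned above: an active LV's
/dev/<vg>/<lv> path is a symlink to its device-mapper node, and
/sys/block/dm-X/size holds the device size in 512-byte sectors, so the size
can be read without forking an lvm command. A rough sketch (lv_dm_dev and
lv_size_bytes are illustrative names, not the vdsm API):

    import os

    def lv_dm_dev(vg_name, lv_name):
        # Resolve /dev/<vg>/<lv>, a symlink to the device-mapper node,
        # e.g. /dev/vg0/lv0 -> /dev/dm-3, and return its basename.
        path = os.path.realpath("/dev/%s/%s" % (vg_name, lv_name))
        return os.path.basename(path)  # e.g. "dm-3"

    def lv_size_bytes(vg_name, lv_name):
        # /sys/block/<dm-X>/size is the size in 512-byte sectors,
        # readable with a plain file read instead of an lvm command.
        dm = lv_dm_dev(vg_name, lv_name)
        with open("/sys/block/%s/size" % dm) as f:
            return int(f.read()) * 512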

Comment 1 Red Hat Bugzilla Rules Engine 2016-05-18 19:28:31 UTC
Bug tickets must have version flags set prior to targeting them to a release. Please ask the maintainer to set the correct version flags, and only then set the target milestone.

Comment 2 Red Hat Bugzilla Rules Engine 2016-05-18 19:30:35 UTC
Target release should be set once a package build is known to fix an issue. Since this bug is not modified, the target version has been reset. Please use the target milestone to plan a fix for an oVirt release.

Comment 3 Elad 2016-06-06 13:48:30 UTC
VDSM consumes 5-25% CPU while deleting a preallocated disk that resides on block storage and is set to wipe after delete.


engine.log:

2016-06-06 09:21:44,301 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand] (org.ovirt.thread.pool-6-thread-2) [2f8b05] START, DeleteImageGroupVDSCommand( DeleteImageGroupVDSCommandParameters:{runAsync='true', storagePoolId='00000001-0001-0001-0001-000000000281', ignoreFailoverLimit='false', storageDomainId='5e1e1a63-705b-4255-9b40-de564280919c', imageGroupId='16572464-9d4f-4b35-8f55-7739f242808b', postZeros='true', forceDelete='false'}), log id: 68d49c4c


vdsm.log

jsonrpc.Executor/6::INFO::2016-06-06 16:21:42,903::logUtils::48::dispatcher::(wrapper) Run and protect: deleteImage(sdUUID=u'5e1e1a63-705b-4255-9b40-de564280919c', spUUID=u'00000001-0001-0001-0001-000000000281', imgUUID=u'16572464-9d4f-4b35-8f55-7739f242808b', postZero=u'true', force=u'false')


top:


  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND                                                                                                                                                                   
 4413 vdsm       0 -20 2807208  75792  11964 S  12.6  0.9   3:48.93 vdsm 

Verified using:
vdsm-4.17.30-1.el7ev.noarch
rhevm-3.6.6.2-0.1.el6.noarch

iSCSI storage
XtremIO as storage backend

