Bug 1337314
| Summary: | Excessive CPU usage while deleting disks on block storage when "wipe after delete" selected | | |
|---|---|---|---|
| Product: | [oVirt] vdsm | Reporter: | Nir Soffer <nsoffer> |
| Component: | Core | Assignee: | Nir Soffer <nsoffer> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | Elad <ebenahar> |
| Severity: | high | Docs Contact: | |
| Priority: | high | | |
| Version: | 4.18.0 | CC: | amureini, bugs, sbonazzo, tnisan, ylavi |
| Target Milestone: | ovirt-3.6.7 | Flags: | rule-engine: ovirt-3.6.z+, ylavi: planning_ack+, tnisan: devel_ack+, acanan: testing_ack+ |
| Target Release: | 4.17.29 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2016-07-04 12:33:48 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | Storage | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Bug tickets must have version flags set prior to targeting them to a release. Please ask the maintainer to set the correct version flags and only then set the target milestone. The target release should be set once a package build is known to fix the issue. Since this bug is not in the MODIFIED state, the target version has been reset. Please use the target milestone to plan a fix for an oVirt release.

VDSM consumes 5-25% CPU while deleting a preallocated disk that resides on block storage and is set to wipe after delete.
engine.log:
2016-06-06 09:21:44,301 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand] (org.ovirt.thread.pool-6-thread-2) [2f8b05] START, DeleteImageGroupVDSCommand( DeleteImageGroupVDSCommandParameters:{runAsync='true', storagePoolId='00000001-0001-0001-0001-000000000281', ignoreFailoverLimit='false', storageDomainId='5e1e1a63-705b-4255-9b40-de564280919c', imageGroupId='16572464-9d4f-4b35-8f55-7739f242808b', postZeros='true', forceDelete='false'}), log id: 68d49c4c
vdsm.log:
jsonrpc.Executor/6::INFO::2016-06-06 16:21:42,903::logUtils::48::dispatcher::(wrapper) Run and protect: deleteImage(sdUUID=u'5e1e1a63-705b-4255-9b40-de564280919c', spUUID=u'00000001-0001-0001-0001-000000000281', imgUUID=u'16572464-9d4f-4b35-8f55-7739f242808b', postZero=u'true', force=u'false')
top:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
4413 vdsm 0 -20 2807208 75792 11964 S 12.6 0.9 3:48.93 vdsm
Verified using:
vdsm-4.17.30-1.el7ev.noarch
rhevm-3.6.6.2-0.1.el6.noarch
iSCSI storage
XtremIO as storage backend
Description of problem:
When deleting disks on block storage with "Wipe After Delete" selected, vdsm consumes 100% CPU while wiping the disks.

Version-Release number of selected component (if applicable):
Seen on master, but judging by the code, this issue has existed since 2012.

How reproducible:
Always

Steps to Reproduce:
1. Create a 20-30 GB disk on block storage
2. Select "wipe after delete"
3. Delete the disk
4. Run top on the hypervisor

Actual results:
Vdsm consumes 100% CPU until the wipe is finished.

Expected results:
Normal CPU usage.

Additional info:
The defective code was added in this commit:

commit 98c660e91d181dbeda7d4e81cd390460f706044a
Author: Eduardo Warszawski <ewarszaw>
Date: Thu Oct 4 10:55:51 2012 +0200

    BZ#836161 - Rewrite of deleteImage().

    Volume operations should be done at the SD level to avoid retrieving
    static data multiple times from disk. Added lvm.lvDmDev() returning
    the dm-X for active LVs. Use this to get active LV size without
    issuing an lvm command.

    Change-Id: I304ff5cd70186ffc9789cd1ac9337efa6c5ff695
    Signed-off-by: Eduardo <ewarszaw>
    Reviewed-on: http://gerrit.ovirt.org/8506
    Reviewed-by: Dan Kenigsberg <danken>
    Reviewed-by: Ayal Baron <abaron>
    Tested-by: Haim Ateya <hateya>
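For context, "wipe after delete" zeroes the logical volume before it is removed. The sketch below is not vdsm's actual wipe implementation; it is a minimal illustration of zero-filling a device in large chunks, where the write size (not the total data volume) dominates per-syscall CPU overhead. The function name `zero_volume` and the 1 MiB chunk size are assumptions chosen for illustration; a real block-device wipe would also use direct I/O (`os.O_DIRECT`).

```python
import os

CHUNK = 1024 * 1024  # 1 MiB; large writes keep per-syscall overhead low


def zero_volume(path, size):
    """Overwrite the first `size` bytes of `path` with zeros.

    Illustrative sketch only, not vdsm's code: it shows the chunked
    zero-fill pattern. On a real block device, open with os.O_DIRECT
    and use a properly aligned buffer.
    """
    buf = b"\0" * CHUNK
    fd = os.open(path, os.O_WRONLY)
    try:
        done = 0
        while done < size:
            # Trim the final write so we never run past `size`.
            done += os.write(fd, buf[: min(CHUNK, size - done)])
        os.fsync(fd)  # make sure the zeros reach stable storage
    finally:
        os.close(fd)
```

With a small chunk size (say, a few hundred bytes) the same loop issues orders of magnitude more syscalls for the same device, which is the kind of overhead that shows up as sustained CPU usage in top during a wipe.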