Bug 1612841 - [v2v] Aggressive Writes of Zeros by imageio
Summary: [v2v] Aggressive Writes of Zeros by imageio
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: ovirt-imageio
Classification: oVirt
Component: Daemon
Version: ---
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ovirt-4.2.6
Target Release: ---
Assignee: Nir Soffer
QA Contact: guy chen
URL:
Whiteboard:
Depends On: 1615144
Blocks:
 
Reported: 2018-08-06 11:57 UTC by guy chen
Modified: 2019-04-28 09:47 UTC
CC List: 10 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-09-03 15:11:09 UTC
oVirt Team: Scale
Embargoed:
rule-engine: ovirt-4.2+


Attachments

Description guy chen 2018-08-06 11:57:36 UTC
Description of problem:
VDSM writes zeros aggressively, so when many low-utilization VMs are migrated in parallel this can cause storage saturation.

Version-Release number of selected component (if applicable):
virt-v2v-1.36.10-6.16.rhvpreview.el7ev.x86_64
libguestfs-1.36.10-6.16.rhvpreview.el7ev.x86_64
nbdkit-1.2.4-3.el7.x86_64
ovirt-imageio-daemon-1.4.2-0.el7ev.noarch


How reproducible:
Always

Steps to Reproduce:
1. Create 10 VMs with 33% disk utilization.
2. Migrate the 10 VMs in parallel using virt-v2v.
3. Observe that zero writes on the storage are high.

Actual results:
Zero writes on the storage are high.

Expected results:
Zero writes on the storage should be optimized.


Additional info:
Logs will be attached.

Comment 1 Tal Nisan 2018-08-06 12:12:03 UTC
Can you please explain what aggressive zeroing is? And why does the zeroing occur, for that matter?

Comment 2 guy chen 2018-08-06 12:43:15 UTC
As synced with Nir, the issue is that imageio actually writes zeros to the storage, so it may currently saturate the network when large numbers of VMs are migrated in parallel.
As I understand it, fast zeroing like qemu does will land soon, which may solve this.
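
For illustration, here is a minimal Python sketch (not imageio's actual code) contrasting naive zero writes, where every zero byte is pushed through the I/O path, with offloaded zeroing via the Linux BLKZEROOUT ioctl on a block device. The function names are made up for this example.

    import fcntl
    import os
    import struct

    # _IO(0x12, 127) from <linux/fs.h>: zero a byte range on a block device.
    BLKZEROOUT = 0x127F

    def zero_naive(fd, offset, length, chunk=1024**2):
        """Write literal zero buffers: every byte travels to storage."""
        os.lseek(fd, offset, os.SEEK_SET)
        buf = b"\0" * chunk
        while length > 0:
            n = os.write(fd, buf[:min(chunk, length)])
            length -= n

    def zero_offload(fd, offset, length):
        """Ask the kernel/device to zero the range: no data payload is sent."""
        fcntl.ioctl(fd, BLKZEROOUT, struct.pack("QQ", offset, length))

With zero_offload, zeroing a range is a single ioctl regardless of its size, while zero_naive sends the full range as data.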

Comment 4 Tal Nisan 2018-08-06 14:45:56 UTC
(In reply to guy chen from comment #2)
> As synced with Nir, the issue is that imageio actually writes zeros to the
> storage, so it may currently saturate the network when large numbers of VMs
> are migrated in parallel.
> As I understand it, fast zeroing like qemu does will land soon, which may
> solve this.

OK, now I get it. Please state that it's v2v migration(!) next time; VM migration is a totally different flow and has nothing to do with writing zeros, hence my confusion.

Comment 6 Nir Soffer 2018-08-13 12:53:31 UTC
Guy, can you add data from virt-v2v runs supporting the claim that aggressive zero
writes cause any issue?

Otherwise we can close this bug. With imageio 1.4.3 we use proper APIs to write
zeros, so this should be resolved by bug 1615144.
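
For context, the imageio daemon exposes a zero operation over its HTTP API, so a client can ask the daemon to zero a range instead of uploading literal zero bytes. A rough sketch, where the host, ticket id, and exact request fields are illustrative assumptions:

    import json
    from http import client

    # Placeholders: a real transfer uses the host and ticket id from engine.
    conn = client.HTTPSConnection("imageio.example.com", 54322)
    body = json.dumps({"op": "zero", "offset": 0, "size": 64 * 1024**2})
    conn.request("PATCH", "/images/my-ticket-id", body=body,
                 headers={"Content-Type": "application/json"})
    res = conn.getresponse()
    print(res.status, res.read())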

Comment 7 Daniel Gur 2018-08-13 13:10:57 UTC
Nir, don't close it; move it to ON_QA,
depending on https://bugzilla.redhat.com/show_bug.cgi?id=1615144.

Comment 8 Nir Soffer 2018-08-13 13:13:25 UTC
Based on comment 7, moving to ON_QA.

Comment 9 guy chen 2018-08-28 05:47:56 UTC
In a load run on 19.8 with ovirt-imageio-daemon-1.4.3, v2v migration of 10 VMs with 100 GB disks to FC storage showed greatly improved times following the upgrade with the new zero code.
Case 8 (disk 33% full) was reduced from 50 minutes to 27 minutes, and case 8a (disk 66% full) from 75 minutes to 42 minutes.
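For reference, that is a speedup of roughly 50/27 ≈ 1.85x for case 8 and 75/42 ≈ 1.79x for case 8a.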

