Description of problem:
VDSM writes zeroes aggressively, so with low-utilized VMs, migrating many in parallel can cause storage saturation.

Version-Release number of selected component (if applicable):
virt-v2v-1.36.10-6.16.rhvpreview.el7ev.x86_64
libguestfs-1.36.10-6.16.rhvpreview.el7ev.x86_64
nbdkit-1.2.4-3.el7.x86_64
ovirt-imageio-daemon-1.4.2-0.el7ev.noarch

How reproducible:
Always

Steps to Reproduce:
1. Create 10 VMs with 33% disk utilization.
2. Migrate the 10 VMs in parallel.
3. Observe that the write rate on the storage is high (see the monitoring sketch below).

Actual results:
The write rate from zeroing on the storage is high.

Expected results:
Writes from zeroing on the storage should be optimized.

Additional info:
Logs will be attached.
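For step 3, one way to watch the write rate on the storage is to sample the "sectors written" counter in /proc/diskstats. This is a minimal sketch; the device name "sdb" is a placeholder for whatever LUN backs the storage domain:

import time

def sectors_written(device):
    # /proc/diskstats: field 3 is the device name, field 10 is
    # sectors written (sectors are 512 bytes).
    with open("/proc/diskstats") as f:
        for line in f:
            parts = line.split()
            if parts[2] == device:
                return int(parts[9])
    raise ValueError("device not found: %s" % device)

device = "sdb"  # placeholder for the device backing the storage domain
prev = sectors_written(device)
for _ in range(60):  # sample every 5 seconds for 5 minutes
    time.sleep(5)
    cur = sectors_written(device)
    print("%.1f MiB/s" % ((cur - prev) * 512 / 5.0 / 1024 / 1024))
    prev = cur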
Can you please explain what aggressive zeroing is? And why does the zeroing occur, for that matter?
As synced with Nir, the issue is that imageio actually writes zeroes to the storage, which currently may saturate the network when large numbers of VMs are migrated in parallel. As I understand it, fast zeroing, as qemu does it, will land soon, so this may solve it.
(In reply to guy chen from comment #3)
> As synced with Nir, the issue is that imageio actually writes zeroes to the
> storage, which currently may saturate the network when large numbers of VMs
> are migrated in parallel.
> As I understand it, fast zeroing, as qemu does it, will land soon, so this
> may solve it.

OK, now I get it. Please state that it's a v2v migration(!) next time; VM migration is a totally different flow and has nothing to do with writing zeros, hence my confusion.
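To make the discussion concrete, here is a minimal sketch of what "writes zeroes to the storage" means in practice: every unallocated range of the source disk is materialized as real zero-filled buffers and pushed through the network and the I/O path. This is an illustration only, not imageio's actual code:

import os

def naive_zero(fd, offset, length, chunk=8 * 1024 * 1024):
    # Zero a range by physically writing zero-filled buffers.
    # With 10 parallel conversions of mostly-empty 100G disks,
    # this adds up to hundreds of gigabytes of real I/O.
    buf = b"\0" * chunk
    os.lseek(fd, offset, os.SEEK_SET)
    while length > 0:
        n = os.write(fd, buf[:min(chunk, length)])
        length -= n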
Guy, can you add data from virt-v2v runs supporting the claim that aggressive zero writes cause any issue? Otherwise we can close this bug. With imageio 1.4.3 we use proper APIs to write zeros, so this should be resolved by bug 1615144.
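For contrast with the naive loop above, the "proper APIs" style of zeroing asks the kernel to zero the range instead of pushing buffers through the data path. Below is a hedged sketch using fallocate(2) with FALLOC_FL_ZERO_RANGE, not the actual imageio 1.4.3 implementation (that change is tracked in bug 1615144); block devices have an analogous BLKZEROOUT ioctl:

import ctypes
import ctypes.util
import os

FALLOC_FL_ZERO_RANGE = 0x10  # from <linux/falloc.h>

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
libc.fallocate.argtypes = [ctypes.c_int, ctypes.c_int,
                           ctypes.c_long, ctypes.c_long]

def fast_zero(fd, offset, length):
    # Ask the kernel to zero the byte range; no zero-filled
    # buffers are written through the data path.
    if libc.fallocate(fd, FALLOC_FL_ZERO_RANGE, offset, length) != 0:
        errno = ctypes.get_errno()
        # Real code would fall back to writing zeros on
        # EOPNOTSUPP (old kernel or unsupported filesystem).
        raise OSError(errno, os.strerror(errno))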
Nir, Don't close it, move it to on QA. depending https://bugzilla.redhat.com/show_bug.cgi?id=1615144
Based on comment 7, moving to ON_QA.
From a load run on 19.8 with the ovirt-imageio-daemon-1.4.3 version, with v2v migration of 10 VMs of 100GB each to FC, times were greatly improved following the upgrade with the zero code. Case 8 (disk 33% full) was reduced from 50 minutes to 27 minutes, and case 8a (disk 66% full) from 75 minutes to 42 minutes.
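A back-of-envelope check of those numbers, taking 10 x 100GB = 1000GB of logical disk per run:

for case, before, after in [("case 8 (33% full)", 50, 27),
                            ("case 8a (66% full)", 75, 42)]:
    print("%s: %.0f -> %.0f GB/min logical throughput"
          % (case, 1000.0 / before, 1000.0 / after))

# case 8 (33% full): 20 -> 37 GB/min logical throughput
# case 8a (66% full): 13 -> 24 GB/min logical throughput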