When virsh vol-wipe runs on a server hosting many VPS guests, performance is very poor because the wipe loop puts too much pressure on the disk scheduler. This patch adds a very short sleep to the wipe loop.

diff -r -c libvirt-0.9.6-a/src/storage/storage_driver.c libvirt-0.9.6/src/storage/storage_driver.c
*** libvirt-0.9.6-a/src/storage/storage_driver.c	2011-09-05 09:54:49.000000000 +0200
--- libvirt-0.9.6/src/storage/storage_driver.c	2011-10-06 12:03:03.692829671 +0200
***************
*** 1781,1786 ****
--- 1781,1788 ----
          *bytes_wiped += written;
          remaining -= written;
+
+         usleep(10);
      }

      if (fdatasync(fd) < 0) {
Stanislav, thanks for the patch. Would you mind submitting it to the upstream list for discussion? Thanks, Dave
Yes, I think it is a good idea. :) An additional comment: after a few seconds of running, the wipe loop has filled the device buffer, so there is no point in putting extra pressure on the disk scheduler. Pushing it harder does not speed up the wipe; it only degrades system responsiveness.
If the problem is that we're saturating the kernel's buffer cache, then we probably ought to make disk wiping use direct I/O, as we did recently for save/restore and coredumps.
See the BYPASS_CACHE flags for the dump and save APIs.
Thank you for reporting this issue to the libvirt project. Unfortunately we have been unable to resolve it due to insufficient maintainer capacity, and it will now be closed. This is not a reflection on the possible validity of the issue, merely the lack of resources to investigate and address it, for which we apologise. If you nonetheless feel the issue is still important, you may choose to report it again at the new project issue tracker: https://gitlab.com/libvirt/libvirt/-/issues The project also welcomes contributions from anyone who believes they can provide a solution.