Verified in vdsm-4.13.2-0.14.el6ev.x86_64 (is36).

Verification steps:
I used two local storages on the host: one "regular" under `/mnt/localstorage/` and a second "flakey" one under `/mnt/errstorage/`, simulating I/O errors with the `dmsetup` utility:
-~-
# dd if=/dev/zero of=/tmp/virtualblock.img bs=4096 count=1M
1048576+0 records in
1048576+0 records out
4294967296 bytes (4,3 GB) copied, 50,7873 s, 84,6 MB/s
# losetup /dev/loop7 /tmp/virtualblock.img
# mkfs.ext4 /dev/loop7
mke2fs 1.41.12 (17-May-2010)
Discarding device blocks: done
...
...
### the following command creates a flakey device with random I/O errors
# dmsetup create errdev0 --table "0 8388608 flakey /dev/loop7 0 9 1"
# mkdir /mnt/errstorage
# chown -R vdsm:kvm /mnt/errstorage
# mount /dev/mapper/errdev0 /mnt/errstorage/
-~-

In the RHEVM GUI, add both local storage domains (/mnt/localstorage as the master SD). Create a new VM with two disks: one "healthy" disk on the `localstorage` domain and a second "flakey" disk (1G) on the `errorstorage` domain.

In the RHEVM DB, update both disks to propagate errors to the guest (one possible invocation is sketched at the end of this comment):
psql: UPDATE base_disks SET propagate_errors = 'On';
Then restart the ovirt-engine service.

In the RHEVM GUI, install the guest OS *on the healthy disk* (I used Fedora 19). In the guest, mount the second flakey disk at `/mnt/errdisk/` and run some I/O operation on it. I used `dd`:
# dd if=/dev/zero of=/mnt/errdisk/test bs=1000 count=1M
and after a few seconds I got a burst of I/O errors: "Buffer I/O error on device vdb, logical block ...".

Results: The qemu process runs with the correct parameter 'werror=enospc'. After the I/O errors, the guest is still running. Both QEMU/VDSM and RHEVM also report the guest as running.
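
For reference, a minimal sketch of how the DB update and service restart could be applied on the RHEVM machine. It assumes the default engine database name ("engine") and local access as the postgres user; the UPDATE without a WHERE clause flips the flag for all disks, as was done here:
-~-
### sketch only: assumes the default "engine" database and local postgres access
# su - postgres -c "psql -d engine -c \"UPDATE base_disks SET propagate_errors = 'On';\""
# service ovirt-engine restart
-~-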
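
For completeness, the results above can be checked on the hypervisor with something like the following (a sketch; the grep pattern assumes a single qemu-kvm process for this VM):
-~-
### split the qemu command line on commas and confirm werror=enospc is present
# ps -ef | grep [q]emu-kvm | tr ',' '\n' | grep werror
### confirm libvirt still reports the guest as running after the I/O errors
# virsh -r list
-~-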
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2014-0548.html