Description of problem:

When writing the OVF disk to shared storage, vdsm uses:

    dd of=/path/to/image bs=1M

Since we don't use conv=fsync, the data is written to the host page cache. If the host crashes after we wrote the OVF, or storage becomes inaccessible, we may lose the OVF content, or end up with a partially written (corrupted) OVF.

When reading the OVF from disk, vdsm uses:

    dd if=/path/to/disk bs=1M count=1

This reads the data from the page cache. If the OVF was modified on another host, we may get stale data from the host page cache.

Version-Release number of selected component (if applicable):
Since OVF storage was added in 3.5.

How reproducible:
Probably very hard to reproduce.
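For illustration only, a minimal sketch of dd invocations that avoid the page cache on both paths. The /tmp/ovf.tar path is a placeholder, and the exact flags used by vdsm in the actual fix may differ:

    # Write the OVF with direct I/O and flush to storage before returning,
    # so a host crash cannot leave only cached (unwritten) data behind.
    dd if=/tmp/ovf.tar of=/path/to/image bs=1M oflag=direct conv=fsync

    # Read the OVF bypassing the host page cache, so we see what is actually
    # on shared storage even if another host modified it.
    dd if=/path/to/image of=/tmp/ovf.tar bs=1M count=1 iflag=direct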
Can I have steps/scenario for testing this, please?
There is no way to reproduce this at the system level; testing means checking that flows using OVF storage still work (e.g. detach domain, attach domain, import VMs).
As written in comment#2, verification of this issue means checking that flows using OVF storage still work (e.g. detach domain, attach domain, import VMs). All of these flows are covered by automation regression Tier1/2/3, which passed on ovirt-engine 4.3.5-1 / vdsm-4.30.20-1.el7ev.x86_64.
This bugzilla is included in the oVirt 4.3.5 release, published on July 30th 2019. Since the problem described in this bug report should be resolved in the oVirt 4.3.5 release, it has been closed with a resolution of CURRENT RELEASE. If the solution does not work for you, please open a new bug report.