Description of problem:

Via 'Custom Properties', a VM can be edited and the 'viodiskcache' property set. If this is set to either 'writeback' or 'writethrough', it only takes effect for file-based storage volumes. For block-based storage, if 'virtio' is used, the VM will fail to start, reporting "unsupported configuration: native I/O needs either no disk cache or directsync cache mode, QEMU will fallback to aio=threads". If 'virtio-scsi' is used, the VM will start, but the block-based disk will fall back to 'cache=none' and 'aio=native'. Any file-based volumes will in this case be set to 'cache=writeback' and 'aio=threads'. The reason is that 'aio=threads' must be used for block-based storage whenever a cache mode other than 'none' or 'directsync' is requested, and there is no way to set this in the RHEV Admin Portal.

Version-Release number of selected component (if applicable):
RHEV 3.x

How reproducible:
Every time.

Steps to Reproduce:
1. Create a VM with a block-based 'virtio' or 'virtio-scsi' disk.
2. Set 'viodiskcache' to 'writeback'.
3. Start the VM.

Actual results:

Expected results:

Additional info:
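For reference, the cache/I-O mode combination that libvirt rejects (or silently overrides) is expressed in the <driver> element of the disk definition in the domain XML. Below is a minimal, hand-written sketch, not taken from an actual RHEV-generated domain (the device path and target name are made up), of what a block-based virtio disk would need to look like for 'writeback' caching to be accepted, i.e. with 'io' explicitly set to 'threads' instead of the 'native' mode that RHEV applies to block storage:

    <disk type='block' device='disk'>
      <!-- cache='writeback' is only accepted together with io='threads';
           io='native' requires cache='none' or cache='directsync' -->
      <driver name='qemu' type='raw' cache='writeback' io='threads'/>
      <source dev='/dev/mapper/example-lv'/>
      <target dev='vda' bus='virtio'/>
    </disk>

Since the Admin Portal exposes 'viodiskcache' but no equivalent property for the I/O mode, there is currently no way to produce this combination for block-based disks.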
Can you please open a bug on RHEL, blocking this one, to add support for caching with native I/O?
This is targeted at 4.1.6, which was already released?
PM has confirmed that the content of the KCS referenced from the docs is accurate. The KCS is published and is referenced in the following places:

https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html-single/virtual_machine_management_guide/#Virtual_Machine_Custom_Properties_settings_explained
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html-single/virtual_machine_management_guide/#Virtual_Machine_Run_Once_settings_explained

I can't see any other pending updates, so if there are no further requirements for updating the documentation here, we will close this BZ.

Gordon, can you take a look and confirm that this satisfies your original request?
Lucy,

The information in the KCS has been confirmed as valid, and it is now referenced in the docs, so you can close this bug.

Thanks,
GFW.
Thanks for confirming! Moving this one to CLOSED.