Bug 1568285 - [RFE] Implement a "live-migration friendly" disk cache mode
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: qemu-kvm-rhev
Version: 7.6
Hardware: Unspecified
OS: Unspecified
Target Milestone: rc
Assignee: Stefan Hajnoczi
QA Contact: CongLi
Depends On:
Blocks: 1672519 1660575 1660576
Reported: 2018-04-17 07:17 UTC by Sergio Lopez
Modified: 2020-02-10 07:21 UTC (History)

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1653542 1660576 (view as bug list)
Last Closed: 2019-06-24 11:07:15 UTC
Target Upstream Version:


Description Sergio Lopez 2018-04-17 07:17:05 UTC
Description of problem:

Recent benchmarks on various environments have demonstrated that enabling the host-side disk cache can be beneficial for certain kinds of workloads, especially read-mostly operations on a small data set, typical of databases backing web services.

While QEMU already provides several host-side disk cache modes, such as writeback and writethrough, all of them are incompatible with live-migration.

Ideally, we should be able to make both writeback and writethrough compatible with live migration. Alternatively, a safe way to switch between cache modes would allow the upper layers to coordinate the operation like this:

 - (source) disable cache -> live-migrate -> (destination) enable cache

Comment 3 Stefan Hajnoczi 2018-04-18 10:11:27 UTC
This feature is tricky and gets discussed from time to time in the QEMU community.  I will raise it again upstream and see if there is willingness to accept it.  The solution will probably have limitations but I'd like to get something merged.

Internally, Linux only has a best-effort API, invalidate_mapping_pages(), which does not guarantee that cached pages are dropped.  There are exceptions, such as mmapped pages and in-progress Transparent Hugepages, which could lead to stale reads from the page cache on the destination host.

Comment 10 Stefan Hajnoczi 2018-12-20 09:39:38 UTC
I have backported the QEMU patches necessary for shared storage live migration without cache=none.

Steps for verification:
1. Launch the guest on the migration source:
  (src)# qemu-system-x86_64 ... -drive if=virtio,file=path/to/shared/test.img,format=raw,cache=writeback
2. Launch the guest on the migration destination:
  (dst)# qemu-system-x86_64 ... -drive if=virtio,file=path/to/shared/test.img,format=raw,cache=writeback,file.x-check-cache-dropped=on -incoming tcp::1234
3. Start the migration:
  (src)(qemu) migrate tcp:dst:1234

Expected results:
The live migration completes successfully.

The x-check-cache-dropped=on option validates that the file's pages are no longer resident in the page cache on the destination side after QEMU invalidates the cache.  If this check fails, you will see an error message.

Comment 19 Danilo Cesar Lemes de Paula 2019-02-05 17:28:38 UTC
This was moved from RHEL-7 to RHEL-8.
The patch that was sent was marked as rejected, so I am moving this back to ASSIGNED until it is resent.

Stefan, I believe you will agree with this, right?
