Bug 1568285

Summary: [RFE] Implement a "live-migration friendly" disk cache mode
Product: Red Hat Enterprise Linux 7
Component: qemu-kvm-rhev
Version: 7.6
Status: CLOSED WONTFIX
Severity: medium
Priority: high
Keywords: FutureFeature
Target Milestone: rc
Hardware: Unspecified
OS: Unspecified
Type: Bug
Reporter: Sergio Lopez <slopezpa>
Assignee: Stefan Hajnoczi <stefanha>
QA Contact: CongLi <coli>
CC: areis, chayang, coli, ddepaula, fjin, jinzhao, juzhang, knoel, kwolf, michal.skrivanek, mkalinin, mtessun, rbalakri, rbarry, sfroemer, stefanha, virt-maint
Clones: 1653542, 1660576
Bug Blocks: 1660575, 1660576, 1672519
Last Closed: 2019-06-24 11:07:15 UTC

Description Sergio Lopez 2018-04-17 07:17:05 UTC
Description of problem:

Recent benchmarks in various environments have demonstrated that enabling the host-side disk cache can be beneficial for certain kinds of workloads, especially read-mostly operations on a small data set, as is typical of databases backing web services.

While QEMU already provides various host-side disk cache modes, such as writeback and writethrough, all of them are incompatible with live migration.
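For reference, a minimal illustration of how the cache mode is selected per drive (test.img is an illustrative path); cache=none opens the image with O_DIRECT, bypassing the host page cache, and is the mode traditionally required for live migration on shared storage:

  (host)# qemu-system-x86_64 ... -drive if=virtio,file=test.img,format=raw,cache=none
  (host)# qemu-system-x86_64 ... -drive if=virtio,file=test.img,format=raw,cache=writethrough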

Ideally, we should be able to make both writeback and writethrough compatible with live migration. Alternatively, a safe way to switch between cache modes would let the upper layers coordinate the operation like this, as sketched below:

 - (source) disable cache -> live-migrate -> (destination) enable cache
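A rough sketch of what that coordination could look like from the QEMU monitor, assuming a hypothetical block_set_cache command (no such monitor command exists today; in practice the hand-off would go through a management layer such as libvirt):

  (src)(qemu) block_set_cache drive0 none        # hypothetical: drop to an uncached mode before migrating
  (src)(qemu) migrate tcp:dst:1234
  (dst)(qemu) block_set_cache drive0 writeback   # hypothetical: re-enable caching after the switch-over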

Comment 3 Stefan Hajnoczi 2018-04-18 10:11:27 UTC
This feature is tricky and gets discussed from time to time in the QEMU community.  I will raise it again upstream and see if there is willingness to accept it.  The solution will probably have limitations but I'd like to get something merged.

Internally, Linux has only a best-effort API, invalidate_mapping_pages(), which does not guarantee that cached pages are dropped.  There are exceptions, such as mmapped pages and in-progress Transparent Hugepages, which could lead to stale reads from the page cache on the destination host.
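This best-effort behaviour can be observed from userspace; a quick check, assuming the third-party vmtouch tool is installed and an illustrative image path:

  (dst)# vmtouch -v /path/to/shared/test.img   # report how many of the file's pages are resident
  (dst)# vmtouch -e /path/to/shared/test.img   # ask the kernel to evict them (best effort)
  (dst)# vmtouch -v /path/to/shared/test.img   # mmapped or THP-backed pages may still show as resident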

Comment 10 Stefan Hajnoczi 2018-12-20 09:39:38 UTC
I have backported the QEMU patches necessary for shared storage live migration without cache=none.

Steps for verification:
1. Launch the guest on the migration source:
  (src)# qemu-system-x86_64 ... -drive if=virtio,file=path/to/shared/test.img,format=raw,cache=writeback
2. Launch the guest on the migration destination:
  (dst)# qemu-system-x86_64 ... -drive if=virtio,file=path/to/shared/test.img,format=raw,cache=writeback,file.x-check-cache-dropped=on -incoming tcp::1234
3. Start the migration:
  (src)(qemu) migrate tcp:dst:1234
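
To confirm that the migration finished, the status can be polled from the source monitor; it should eventually report "completed":
  (src)(qemu) info migrate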

Expected results:
The live migration completes successfully.

The x-check-cache-dropped=on option validates that the file's pages are no longer resident in the destination host's page cache after QEMU invalidates the cache.  If this check fails, QEMU prints an error message.

Comment 19 Danilo de Paula 2019-02-05 17:28:38 UTC
This was moved from RHEL-7 to RHEL-8.
The patch that was sent is marked as rejected, so I am moving this back to ASSIGNED until it is resent.

Stefan, I believe you will agree with this, right?