Bug 2173775

Summary: [qemu-kvm] RFE: Make bdrv_inactivate_all() asynchronous
Product: Red Hat Enterprise Linux 9
Reporter: Juan Quintela <quintela>
Component: qemu-kvm
Assignee: Eric Blake <eblake>
qemu-kvm sub component: Storage
QA Contact: aihua liang <aliang>
Status: CLOSED MIGRATED
Docs Contact:
Severity: medium
Priority: medium
CC: coli, hreitz, jinzhao, juzhang, kwolf, vgoyal, virt-maint, xuwei
Version: unspecified
Keywords: FutureFeature, MigratedToJIRA, Triaged
Target Milestone: rc
Flags: vgoyal: needinfo? (kwolf)
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2023-09-22 16:19:57 UTC
Type: Story
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Juan Quintela 2023-02-27 23:35:08 UTC
Description of problem:

bdrv_inactivate_all() is a synchronous function.
When we call it during migration, we potentially have to wait a long time for it to finish, depending on how much dirty state the block devices have accumulated.
But at that point in the migration we still have to send the last pages of RAM within the downtime budget (300 ms by default, though some setups allow a downtime of 1 second).  What we actually do is (see migration/savevm.c:qemu_savevm_state_complete_precopy()):

stop_vm()
for each dirty page
   send_dirty_page()
bdrv_inactivate_all()
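
To make the cost concrete: because the two steps run serialized while the guest is stopped, both contribute in full to the guest-visible downtime.  Here is a minimal, self-contained C sketch of that accounting; it is not QEMU code, and send_dirty_pages() plus the bdrv_inactivate_all() stub are stand-ins:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

static int64_t now_ms(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1000LL + ts.tv_nsec / 1000000;
}

/* Stand-ins for the real operations. */
static void send_dirty_pages(void) { /* last RAM pages over the wire */ }
static int bdrv_inactivate_all(void) { /* flush block dirty state */ return 0; }

int main(void)
{
    int64_t start = now_ms();

    /* stop_vm(): the guest is paused from here on. */
    send_dirty_pages();     /* budgeted: ~300 ms by default          */
    bdrv_inactivate_all();  /* unbudgeted: adds on top, serialized   */

    printf("guest downtime: %" PRId64 " ms\n", now_ms() - start);
    return 0;
}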

And what we would like to do is something like:

stop_vm()
bdrv_inactivate_all_start_async()
for each dirty page
   send_dirty_page()
bdrv_inactivate_all_wait_for_completion()
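
A minimal sketch of how that split could look, using plain pthreads.  The bdrv_inactivate_all_start_async()/bdrv_inactivate_all_wait_for_completion() names are the ones proposed above; the worker thread, the globals, and the bdrv_inactivate_all() stub are hypothetical stand-ins, not QEMU code:

#include <pthread.h>
#include <stdio.h>

static pthread_t inactivate_thread;
static int inactivate_ret;

/* Stand-in for the real, potentially slow bdrv_inactivate_all(). */
static int bdrv_inactivate_all(void)
{
    /* ...flush dirty block state, mark images inactive, etc... */
    return 0;
}

static void *inactivate_worker(void *opaque)
{
    inactivate_ret = bdrv_inactivate_all();
    return NULL;
}

/* Kick off the inactivation without blocking the migration thread. */
static void bdrv_inactivate_all_start_async(void)
{
    pthread_create(&inactivate_thread, NULL, inactivate_worker, NULL);
}

/* Block until the inactivation has finished; return its result. */
static int bdrv_inactivate_all_wait_for_completion(void)
{
    pthread_join(inactivate_thread, NULL);
    return inactivate_ret;
}

int main(void)
{
    /* stop_vm(); */
    bdrv_inactivate_all_start_async();
    /* ...send the remaining dirty RAM pages in parallel... */
    int ret = bdrv_inactivate_all_wait_for_completion();
    printf("inactivation finished: %d\n", ret);
    return 0;
}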

We can discuss whether we want to add a timeout parameter to this last function, or the ability to "undo" the bdrv_inactivate_all_start_async().

Why?

Because we know that we have a "downtime" limit for the completion stage.  The current code just finishes the migration, no matter how long it takes.  But what we really want is to detect when it is taking too long and, in that case, return to the iterative stage of migration.
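
One possible shape for that, assuming the split API above: give the wait function a deadline and let the caller fall back to the iterative stage on timeout.  Everything below (the timed wait, the demo worker, the undo comment) is a hypothetical pthread sketch, not QEMU code:

#include <errno.h>
#include <pthread.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t done_cond = PTHREAD_COND_INITIALIZER;
static bool inactivate_done;
static int inactivate_ret;

/* The worker calls this once bdrv_inactivate_all() finishes. */
static void inactivate_complete(int ret)
{
    pthread_mutex_lock(&lock);
    inactivate_ret = ret;
    inactivate_done = true;
    pthread_cond_signal(&done_cond);
    pthread_mutex_unlock(&lock);
}

static void *inactivate_worker(void *opaque)
{
    sleep(2);                 /* stand-in for a slow bdrv_inactivate_all() */
    inactivate_complete(0);
    return NULL;
}

/* Wait up to timeout_ms; -ETIMEDOUT means "too slow, keep iterating". */
static int bdrv_inactivate_all_wait_timeout(int64_t timeout_ms)
{
    struct timespec deadline;

    clock_gettime(CLOCK_REALTIME, &deadline);
    deadline.tv_sec += timeout_ms / 1000;
    deadline.tv_nsec += (timeout_ms % 1000) * 1000000;
    if (deadline.tv_nsec >= 1000000000L) {
        deadline.tv_sec++;
        deadline.tv_nsec -= 1000000000L;
    }

    pthread_mutex_lock(&lock);
    while (!inactivate_done) {
        if (pthread_cond_timedwait(&done_cond, &lock, &deadline) == ETIMEDOUT) {
            pthread_mutex_unlock(&lock);
            return -ETIMEDOUT;
        }
    }
    pthread_mutex_unlock(&lock);
    return inactivate_ret;
}

int main(void)
{
    pthread_t t;

    pthread_create(&t, NULL, inactivate_worker, NULL);
    if (bdrv_inactivate_all_wait_timeout(300) == -ETIMEDOUT) {
        /* Downtime budget blown: undo the inactivation (e.g. via a
         * bdrv_activate_all()-style call, hypothetical here) and
         * resume the iterative stage. */
        printf("timed out, returning to iterative migration\n");
    }
    pthread_join(t, NULL);
    return 0;
}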

Comment 2 Vivek Goyal 2023-03-08 21:52:14 UTC
Kevin, Hanna, WDYT about this issue?

Comment 4 RHEL Program Management 2023-09-22 16:19:02 UTC
Issue migration from Bugzilla to Jira is in process at this time. This will be the last message in Jira copied from the Bugzilla bug.

Comment 5 RHEL Program Management 2023-09-22 16:19:57 UTC
This BZ has been automatically migrated to the issues.redhat.com Red Hat Issue Tracker. All future work related to this report will be managed there.

Due to differences in account names between systems, some fields were not replicated.  Be sure to add yourself to Jira issue's "Watchers" field to continue receiving updates and add others to the "Need Info From" field to continue requesting information.

To find the migrated issue, look in the "Links" section for a direct link to the new issue location. The issue key will have an icon of 2 footprints next to it, and begin with "RHEL-" followed by an integer.  You can also find this issue by visiting https://issues.redhat.com/issues/?jql= and searching the "Bugzilla Bug" field for this BZ's number, e.g. a search like:

"Bugzilla Bug" = 1234567

In the event you have trouble locating or viewing this issue, you can file an issue by sending mail to rh-issues. You can also visit https://access.redhat.com/articles/7032570 for general account information.