Bug 1955195

Summary: RFE: Allow killing stuck migration connection
Product: Red Hat Enterprise Linux 9
Reporter: Michal Privoznik <mprivozn>
Component: libvirt
Assignee: Virtualization Maintenance <virt-maint>
libvirt sub component: General
QA Contact: Fangge Jin <fjin>
Status: CLOSED MIGRATED
Docs Contact:
Severity: medium
Priority: medium
CC: ailan, fdeutsch, jdenemar, jsuchane, lmen, pkrempa, virt-maint, vromanso, xuzhang, yalzhang, ycui
Version: 9.0
Keywords: FutureFeature, MigratedToJIRA, Triaged
Target Milestone: rc   
Target Release: ---   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
: 1956897
Environment:
Last Closed: 2023-09-22 12:20:12 UTC
Type: Story
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1956897    
Bug Blocks:    

Description Michal Privoznik 2021-04-29 16:42:52 UTC
Description of problem:

If a migration is terminated because of a broken TCP connection, it may take a long time before QEMU notices that the connection has gone away. In the meantime QEMU may be stuck trying to send data to the other side. Upstream QEMU has a solution for this: the "yank" command:

https://gitlab.com/qemu-project/qemu/-/blob/master/qapi/yank.json

We should adopt it in libvirt.
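
For reference, a rough sketch of what using the command looks like at the QMP level, driven through 'virsh qemu-monitor-command' (the domain name 'guest1' is just a placeholder):

  # list the instances that can currently be yanked
  virsh qemu-monitor-command guest1 '{"execute": "query-yank"}'

  # forcefully shut down the (stuck) migration connection
  virsh qemu-monitor-command guest1 \
    '{"execute": "yank", "arguments": {"instances": [{"type": "migration"}]}}'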

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

See the discussion in bug 1945532.

Comment 1 Peter Krempa 2021-04-30 07:56:22 UTC
I'm not sure that implementing support for 'yank' is justified based on bug 1945532.

Firstly, this bug doesn't mention it, but the connection gets stuck when 'blockdev-del'-ing the NBD connections used for non-shared-storage migration. This also needs to be investigated by the qemu team, since the closing handshake of NBD isn't very useful and creates a pointless failure scenario.

Secondly, it's important to mention that the bug occurs with a non-standard topology, where the NBD and migration connections are proxied via a third-party tool which can influence the behaviour.

Thirdly, libvirt added support for migration over UNIX sockets specifically so that it can be proxied, e.g. for use in Kubevirt, so Kubevirt should start using it before we spend time on something which might not be needed later on.

As such, I don't think we should invest in implementing this until:

1) qemu decides that the NBD connection behaviour is okay from their point of view
2) Kubevirt verifies that their proxy isn't the main problem here and tries the migration with UNIX sockets, which may have better error handling (a rough sketch of such an invocation follows below).
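
For context, driving a migration fully over UNIX sockets (point 2 above) looks roughly like this - a sketch only, assuming libvirt >= 6.8.0; the socket paths and domain name are placeholders, and the proxy forwarding the sockets between the hosts is assumed to already exist:

  # non-shared-storage migration where both the migration stream and the NBD
  # (storage) stream go through local UNIX sockets handled by the proxy
  virsh migrate --live --p2p --copy-storage-all guest1 \
    'qemu+unix:///system?socket=/run/proxy/libvirtd.sock' \
    --migrateuri unix:///run/proxy/migration.sock \
    --disks-uri unix:///run/proxy/nbd.sock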

Comment 2 Michal Privoznik 2021-05-01 09:04:47 UTC
(In reply to Peter Krempa from comment #1)
> I'm not sure that based on bug 1945532 implementing support for 'yank' is
> justified.
> 
> Firstly this bug doesn't mention it but the connection get's stuck when
> 'blockdev-del'-ing the NBD connections used for non-shared-storage
> migration. This also needs to be investigated by the qemu team, since the
> closing handshake of NBD isn't very useful and creates a pointless failure
> scenario.

Can you chime in on that bug's conversation and mention it, please?

> 
> Secondly it's important to mention that the bug occurs with a non-standard
> topology, where the NBD and migration connections are proxied via a
> third-party tool which can have influence on the behaviour.

Agreed, but regardless - a connection disruption can happen for whatever reason and it may take quite a long time until TCP realizes that. Wouldn't 'yank' help in such a situation?

> 
> Thirdly libvirt added the support for migration using UNIX sockets so that
> it can be proxied specifically to use in Kubevirt, thus kubevirt should
> start using it before we spend time on something which might not be needed
> later on.

IIUC their proxy consists of two parts: one that's redirecting TCP onto a UNIX socket and vice versa, and another that's passing data between these UNIX sockets. My hunch is that it's the latter that's disrupting the migration (because otherwise QEMU would see a TCP FIN packet), and given that, while taking away another moving part is generally good, it may not help. Anyway - this is a discussion we should be having on that bug.

> 
> As of such I don't think we should invest into implementing this until:
> 
> 1) qemu decides that the NBD connection behaviour is okay from their point
> of view
> 2) Kubevirt investigates that their proxy isn't the main problem here and
> try the migration with unix sockets which may have better error handling.

I still think it would be beneficial for libvirt to implement this command even if it doesn't fix the Kubevirt bug.

Comment 3 Jiri Denemark 2021-05-03 13:25:50 UTC
I think there are two parts to this:

1) yank could be used unconditionally to abort migration block jobs whenever
we're aborting the whole migration, as it makes little sense to try to finish
them nicely. This should make the probability of getting stuck in QMP a bit
lower.

2) virDomainAbortJob could get a new flag to forcefully abort the migration
using yank instead of migrate_cancel.

Both parts need the yank command to be supported by QEMU, of course. And in
addition to that, at least the second part would need support for out-of-band
(OOB) QMP commands in libvirt. The first part might be possible even without
OOB, but I don't know for sure. In other words, while 1) might be quite
simple, 2) will most likely not be simple at all.
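
To illustrate part 1), the QMP-level equivalent would look roughly like this - a sketch only; the node name 'migration-nbd' is a made-up placeholder for whichever node backs the migration block job:

  # instead of waiting for the NBD closing handshake, forcefully shut down
  # the connection backing the migration block job...
  virsh qemu-monitor-command guest1 \
    '{"execute": "yank", "arguments": {"instances": [{"type": "block-node", "node-name": "migration-nbd"}]}}'

  # ...and then cancel the migration itself as before
  virsh qemu-monitor-command guest1 '{"execute": "migrate_cancel"}'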

Comment 4 Dr. David Alan Gilbert 2021-05-04 08:27:57 UTC
I suspect getting the use of 'yank' right needs some care, to make sure we don't kill any connections we want to keep around,
and to make sure the QMP monitor is always free to issue a yank.
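
For reference, those two concerns map to 'query-yank' and to out-of-band execution. A rough sketch of the raw QMP exchange (the OOB capability has to be negotiated when the monitor is opened, which per comment 3 libvirt would still need to implement; the node name and returned values are illustrative):

  -> {"execute": "qmp_capabilities", "arguments": {"enable": ["oob"]}}
  <- {"return": {}}

  # check what can be yanked so we only kill the connection we mean to kill
  -> {"execute": "query-yank"}
  <- {"return": [{"type": "migration"}, {"type": "block-node", "node-name": "migration-nbd"}]}

  # "exec-oob" runs the command even while the monitor is busy with another one
  -> {"exec-oob": "yank", "arguments": {"instances": [{"type": "migration"}]}}
  <- {"return": {}}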

Comment 8 Fabian Deutsch 2021-06-07 14:17:22 UTC
@vromanso how important is this for us? Please adjust the severity.

Comment 10 John Ferlan 2021-09-08 13:30:52 UTC
Bulk update: moving RHEL-AV bugs to RHEL 9. If this also needs to be resolved in RHEL 8, clone it to the current RHEL 8 release.

Comment 15 RHEL Program Management 2023-09-22 12:19:40 UTC
Issue migration from Bugzilla to Jira is in progress at this time. This will be the last message copied from the Bugzilla bug to Jira.

Comment 16 RHEL Program Management 2023-09-22 12:20:12 UTC
This BZ has been automatically migrated to the issues.redhat.com Red Hat Issue Tracker. All future work related to this report will be managed there.

Due to differences in account names between systems, some fields were not replicated. Be sure to add yourself to the Jira issue's "Watchers" field to continue receiving updates, and add others to the "Need Info From" field to continue requesting information.

To find the migrated issue, look in the "Links" section for a direct link to the new issue location. The issue key will have an icon of 2 footprints next to it, and begin with "RHEL-" followed by an integer.  You can also find this issue by visiting https://issues.redhat.com/issues/?jql= and searching the "Bugzilla Bug" field for this BZ's number, e.g. a search like:

"Bugzilla Bug" = 1234567

In the event you have trouble locating or viewing this issue, you can file an issue by sending mail to rh-issues. You can also visit https://access.redhat.com/articles/7032570 for general account information.