This bug has been migrated to another issue-tracking site. It has been closed here and may no longer be monitored.

If you would like to get updates for this issue, or to participate in it, you may do so at the Red Hat Issue Tracker.
RHEL Engineering is moving the tracking of its product development work on RHEL 6 through RHEL 9 to Red Hat Jira (issues.redhat.com). If you're a Red Hat customer, please continue to file support cases via the Red Hat customer portal. If you're not, please head to the "RHEL project" in Red Hat Jira and file new tickets there.

Individual Bugzilla bugs in the statuses "NEW", "ASSIGNED", and "POST" are being migrated throughout September 2023. Bugs of Red Hat partners with an assigned Engineering Partner Manager (EPM) are migrated in late September, as per pre-agreed dates. Bugs against the components "kernel", "kernel-rt", and "kpatch" are only migrated if still in "NEW" or "ASSIGNED".

If you cannot log in to RH Jira, please consult article #7032570. Failing that, please send an e-mail to the RH Jira admins at rh-issues@redhat.com to troubleshoot your issue as a user-management inquiry; the e-mail creates a ServiceNow ticket with Red Hat.

Individual Bugzilla bugs that are migrated will be moved to status "CLOSED", resolution "MIGRATED", and set with "MigratedToJIRA" in "Keywords". The link to the successor Jira issue will be found under "Links", will have a little "two-footprint" icon next to it, and will direct you to the "RHEL project" in Red Hat Jira (issue links are of the form "https://issues.redhat.com/browse/RHEL-XXXX", where "X" is a digit). The same link will be available in a blue banner at the top of the page informing you that the bug has been migrated.
Bug 2024300 - [RFE] expose public interface to trigger qemu announce-self
Summary: [RFE] expose public interface to trigger qemu announce-self
Keywords:
Status: CLOSED MIGRATED
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: libvirt
Version: 9.0
Hardware: All
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Laine Stump
QA Contact: yalzhang@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-11-17 18:36 UTC by smooney
Modified: 2024-01-21 04:25 UTC (History)
CC List: 10 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-09-22 12:19:56 UTC
Type: Story
Target Upstream Version:
Embargoed:


Attachments: none


Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 1903653 1 high CLOSED Instance live-migration observes ping lost OVN 2022-08-04 13:41:40 UTC
Red Hat Issue Tracker   RHEL-7047 0 None Migrated None 2023-09-22 12:19:46 UTC
Red Hat Issue Tracker RHELPLAN-103077 0 None None None 2021-11-17 18:37:18 UTC

Internal Links: 1903653 2012179

Description smooney 2021-11-17 18:36:40 UTC
Description of problem:

Currently, when libvirt/QEMU are used with OVN and OpenStack, there is excessive packet loss after a live migration.
https://bugzilla.redhat.com/show_bug.cgi?id=1903653

This is caused by the fact that OpenStack programs the requested chassis in OVN, which limits OVN to installing OpenFlow rules only on the host listed as the requested chassis. Without the requested chassis set, the OVN southbound daemon on both the source and destination hosts will fight over which chassis the port is currently on (it is actually on both during the migration); that puts load on the OVN database and causes flows to be installed and reinstalled every time it changes.

As a result, we cannot remove the use of requested chassis to allow the flow rules to be installed on the destination host, and we can't update the requested chassis before we start the migration, as that would remove the flow rules and disconnect the VM.

Consequently, the RARP packets sent by QEMU when libvirt unpauses the VM on the destination host are currently lost, and the VM's MAC is not updated on top-of-rack switches until it sends a packet.

One mitigation for this, until OVN can be enhanced to support live migration
(https://bugzilla.redhat.com/show_bug.cgi?id=2012179),
is to use the announce-self QEMU monitor command to trigger the sending of RARP packets after OVN has installed the flows; however, that will taint the VM.

To that end, we would like this to be exposed via a public libvirt API instead of relying on the internal QEMU monitor command interface.
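For reference, the interim workaround can already be driven through libvirt's QMP passthrough (which, as noted, taints the domain). A minimal sketch; the domain name `guest1` is hypothetical, and the JSON is the stock QMP `announce-self` command:

```python
import json

# QMP command asking QEMU to re-send its self-announce (RARP) packets.
announce_self = {"execute": "announce-self"}

# Equivalent libvirt passthrough invocation (marks the domain as tainted);
# the domain name "guest1" is hypothetical.
virsh_cmd = "virsh qemu-monitor-command guest1 '%s'" % json.dumps(announce_self)
print(virsh_cmd)
```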


Version-Release number of selected component (if applicable):


How reproducible:

This is caused by a complex race between Nova notifying OVN via Neutron that the VM is now running on the destination host, and QEMU sending the 3 RARP packets on VM unpause at the completion of the live migration.

Nova only starts this chain of notifications once it has received the migration_complete or post_copy_pause event from libvirt.

The former always happens after the VM has started running on the destination, and the latter event only gives Nova a slight advantage in winning the race, so typically OpenStack will lose the race to install the flows before the final RARP packet is sent unless the system is effectively entirely idle.

This means the race will not always cause packet loss in development/test environments, but it happens much more often in production deployments.


Steps to Reproduce:

1. Boot a VM with OVS networking and OpenFlow rules programmed by OVN.
2. Start a tcpdump on the destination host to track the sending of the RARP packets.
3. Start a live migration in libvirt and wait for it to complete.
4. Update the requested chassis with the destination host name.

^ Realistically this is not relevant to this RFE request, but that is how you would simulate the race without actually deploying OpenStack, OVN, etc.

Actual results:

The RARP packets are lost because the flow rules are not installed at the time they are sent.

Expected results:

The RARP packets are lost because the flow rules are not installed at the time they are sent.

We expect this to fail because we have not told OVN to install the flows until after the VM is running, so we need a way to resend them once the flows are installed.

Additional info:

This is clearly an RFE request to work around the lack of a feature in OVN.
We have customers that will be impacted by this on OSP 13 and 16, which are based on RHEL 7 and 8.2/8.4 respectively.

While it would be ideal to backport this public API to RHEL 7, we would like to use the QEMU monitor command directly on older versions of libvirt instead, to avoid the need for a libvirt backport; we can adapt to the new public interface when that is available.

Comment 6 Jaroslav Suchanek 2021-11-18 13:19:55 UTC
Laine, would you please triage this? Thanks.

Comment 7 Laine Stump 2021-11-19 01:27:11 UTC
My understanding is that the root cause of this problem is that the RARP packets (that are always sent by qemu when the guest is unpaused on the destination) are dropped/lost, and so incoming traffic goes to the wrong place until a packet is sent from the newly unpaused guest?

The first thing that needs saying is that, as Sean says, adding a command to trigger qemu's self-announce would require a new API in libvirt, and new APIs cannot be backported - they require a rebase of the package. So if this new feature is only needed as a temporary mitigation until OVN is properly fixed, and if the mitigation is needed in a downstream libvirt that is not going to be rebased anytime soon, then the avenue of adding a libvirt API is a non-starter.

Possibly some creativity could result in a workable solution, e.g. if libvirt were to just always do a periodic self-announce for several seconds after a migrated guest was unpaused on the destination. (I mention this because it's pretty much the only thing I can think of that could be done without adding a new API.) I had thought that QEMU already retransmitted the RARPs, though - Dave, can you confirm or refute this? Maybe it just needs to continue a bit longer?

There was an RFE filed against libvirt years ago (before qemu exposed the self-announce thing) asking for something like this, but it has been lost to the sands of the "stale bug" policy.

Comment 8 Kashyap Chamarthy 2021-11-22 15:54:22 UTC
(In reply to Laine Stump from comment #7)
> My understanding is that the root cause of this problem is that the RARP
> packets (that are always sent by qemu when the guest is unpaused on the
> destination) are dropped/lost, and so incoming traffic goes to the wrong
> place until a packet is sent from the newly unpaused guest?

That's my understanding too.  (Aside on terminology: "Neutron" is the
networking component of OpenStack)

> The first thing that needs saying is that, as Sean says, adding a command to
> trigger qemu's self-announce would require a new API in libvirt, and
> new APIs cannot be backported - they require a rebase of the package. So if
> this new feature is only needed for a temporary mitigation until OVN is
> properly fixed, and if the mitigation is needed in a downstream libvirt that
> is not going to be rebased anytime soon, then the avenue of adding a libvirt
> API is a non-starter.

I agree, it's fine if libvirt does not backport the API (and I don't
think it should, either).  This is more for the long-term.

Temporarily using the QMP passthrough for 'announce-self' is fine — 
because "this doesn't affect QEMU state / lifecycle in any way that 
impacts libvirt", as DanPB noted on IRC.

[...]

Comment 9 Dr. David Alan Gilbert 2021-11-24 12:32:56 UTC
(In reply to Laine Stump from comment #7)
> My understanding is that the root cause of this problem is that the RARP
> packets (that are always sent by qemu when the guest is unpaused on the
> destination) are dropped/lost, and so incoming traffic goes to the wrong
> place until a packet is sent from the newly unpaused guest?
> 
> The first thing that needs saying is that, as Sean says, adding a command to
> trigger qemu's self-announce would require a new API in libvirt, and
> new APIs cannot be backported - they require a rebase of the package. So if
> this new feature is only needed for a temporary mitigation until OVN is
> properly fixed, and if the mitigation is needed in a downstream libvirt that
> is not going to be rebased anytime soon, then the avenue of adding a libvirt
> API is a non-starter.
> 
> Possibly some creativity could result in a workable solution, e.g. if
> libvirt were to just always do a periodic self-announce for several seconds
> after a migrated guest was unplugged on the destination. (I mention this
> because it's pretty much the only thing I can think of that could be done
> without adding a new API). I had thought that QEMU already retransmitted the
> RARPS though - Dave, can you confirm or refute this? Maybe it just needs to
> continue a bit longer?

Right, it does send them a few times; you can set the migration parameters:
   announce-initial  announce-max      announce-rounds   announce-step     

to increase the number and change the timing of the announcements.
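These four knobs map directly onto a QMP `migrate-set-parameters` call, so they can also be driven through the passthrough interface. A sketch of the payload; the values shown are what I believe to be QEMU's defaults (times in milliseconds), so treat them as illustrative rather than authoritative:

```python
import json

# migrate-set-parameters payload covering the four announce knobs.
# Values are believed to be the QEMU defaults (times in milliseconds).
params = {
    "execute": "migrate-set-parameters",
    "arguments": {
        "announce-initial": 50,   # delay before the first announce
        "announce-max": 550,      # upper bound on the inter-announce delay
        "announce-rounds": 5,     # number of announce packets to send
        "announce-step": 100,     # delay increase between rounds
    },
}
print(json.dumps(params, indent=2))
```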

> There was an RFE filed against libvirt years ago (before qemu exposed the
> self-announce thing) asking for something like this, but it has been lost to
> the sands of the "stale bug" policy.

It would be good to get it exposed - that's why it was added!

Comment 10 smooney 2021-11-24 13:21:42 UTC
(In reply to Dr. David Alan Gilbert from comment #9)
> (In reply to Laine Stump from comment #7)
> > My understanding is that the root cause of this problem is that the RARP
> > packets (that are always sent by qemu when the guest is unpaused on the
> > destination) are dropped/lost, and so incoming traffic goes to the wrong
> > place until a packet is sent from the newly unpaused guest?
> > 
> > The first thing that needs saying is that, as Sean says, adding a command to
> > trigger qemu's self-announce would require a new API in libvirt, and
> > new APIs cannot be backported - they require a rebase of the package. So if
> > this new feature is only needed for a temporary mitigation until OVN is
> > properly fixed, and if the mitigation is needed in a downstream libvirt that
> > is not going to be rebased anytime soon, then the avenue of adding a libvirt
> > API is a non-starter.

I do not expect this to actually get fixed in OVN for a long time; perhaps that team
will find time to work on it, but I expect this won't happen for a year or more.

I have tried to see if we can get an ETA: https://bugzilla.redhat.com/show_bug.cgi?id=2012179#c1
But I suspect this will be used even after OVN has support for live migration, both as a safety measure
and for other networking solutions.

> > 
> > Possibly some creativity could result in a workable solution, e.g. if
> > libvirt were to just always do a periodic self-announce for several seconds
> > after a migrated guest was unplugged on the destination. (I mention this
> > because it's pretty much the only thing I can think of that could be done
> > without adding a new API). I had thought that QEMU already retransmitted the
> > RARPS though - Dave, can you confirm or refute this? Maybe it just needs to
> > continue a bit longer?
> 
> Right it does send them a few times;
I believe it sends them 3 times.

>  You can set the migration parameters:
>    announce-initial  announce-max      announce-rounds   announce-step 

Interesting; is this something we can set today via libvirt?

We could perhaps have it announce once a second for 60 seconds by default and make this configurable in Nova.
That might be sufficient to mitigate the issue.
So announce-initial=3 announce-max=63 announce-rounds=60 announce-step=1, something like that?
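As a sanity check on values like these: per my reading of QEMU's announce timer, the delay before round k is initial + k*step, capped at max, and all three timing parameters are in milliseconds, not seconds. A hypothetical sketch of the resulting schedules:

```python
def announce_delays(initial, step, rounds, maximum):
    """Per-round delays in ms, per my reading of QEMU's announce timer:
    the delay grows by `step` each round and is capped at `maximum`."""
    return [min(initial + k * step, maximum) for k in range(rounds)]

# The values proposed above, taken literally (all times in ms, so this
# whole schedule would finish in under two seconds):
print(sum(announce_delays(3, 1, 60, 63)))        # → 1950 (ms)

# Hypothetical values closer to "once a second for 60 seconds":
print(sum(announce_delays(1000, 0, 60, 1000)))   # → 60000 (ms)
```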

> 
> to increase the number and change the timing of the announcements.
> 
> > There was an RFE filed against libvirt years ago (before qemu exposed the
> > self-announce thing) asking for something like this, but it has been lost to
> > the sands of the "stale bug" policy.
> 
> It would be good to get it exposed - that's why it was added!

Ah, that answers my first question: yes, it would be good to expose it, and we would probably use that as our primary API,
but it would also be good to have a way to manually trigger the announce. I know that QEMU announces automatically when we migrate,
but I'm not sure it does so when we hot-attach a network interface, and it likely should. Granted, for hot attach the assumption is
that the guest will likely do a DHCP request after it brings up the NIC, or similar, but it might make sense to trigger an announce
on device attachment to further reduce the latency of MAC propagation in this case.

Comment 11 Dr. David Alan Gilbert 2021-11-24 14:13:16 UTC
(In reply to smooney from comment #10)
> (In reply to Dr. David Alan Gilbert from comment #9)
> > (In reply to Laine Stump from comment #7)
> > > My understanding is that the root cause of this problem is that the RARP
> > > packets (that are always sent by qemu when the guest is unpaused on the
> > > destination) are dropped/lost, and so incoming traffic goes to the wrong
> > > place until a packet is sent from the newly unpaused guest?
> > > 
> > > The first thing that needs saying is that, as Sean says, adding a command to
> > > trigger qemu's self-announce would require a new API in libvirt, and
> > > new APIs cannot be backported - they require a rebase of the package. So if
> > > this new feature is only needed for a temporary mitigation until OVN is
> > > properly fixed, and if the mitigation is needed in a downstream libvirt that
> > > is not going to be rebased anytime soon, then the avenue of adding a libvirt
> > > API is a non-starter.
> 
> I do not expect this to actually get fixed in OVN for a long time; perhaps
> that team will find time to work on it, but I expect this won't happen for
> a year or more.
> 
> I have tried to see if we can get an ETA:
> https://bugzilla.redhat.com/show_bug.cgi?id=2012179#c1
> But I suspect this will be used even after OVN has support for live
> migration, both as a safety measure and for other networking solutions.
> 
> > > 
> > > Possibly some creativity could result in a workable solution, e.g. if
> > > libvirt were to just always do a periodic self-announce for several seconds
> > > after a migrated guest was unplugged on the destination. (I mention this
> > > because it's pretty much the only thing I can think of that could be done
> > > without adding a new API). I had thought that QEMU already retransmitted the
> > > RARPS though - Dave, can you confirm or refute this? Maybe it just needs to
> > > continue a bit longer?
> > 
> > Right, it does send them a few times;
> I believe it sends them 3 times.
> 
> >  You can set the migration parameters:
> >    announce-initial  announce-max      announce-rounds   announce-step 
> 
> Interesting; is this something we can set today via libvirt?

Not that I know of (ask a libvirt person though)

> We could perhaps have it announce once a second for 60 seconds by default
> and make this configurable in Nova. That might be sufficient to mitigate
> the issue. So announce-initial=3 announce-max=63 announce-rounds=60
> announce-step=1, something like that?
> 
> > 
> > to increase the number and change the timing of the announcements.
> > 
> > > There was an RFE filed against libvirt years ago (before qemu exposed the
> > > self-announce thing) asking for something like this, but it has been lost to
> > > the sands of the "stale bug" policy.
> > 
> > It would be good to get it exposed - that's why it was added!
> 
> Ah, that answers my first question: yes, it would be good to expose it, and
> we would probably use that as our primary API, but it would also be good to
> have a way to manually trigger the announce. I know that QEMU announces
> automatically when we migrate, but I'm not sure it does so when we
> hot-attach a network interface, and it likely should. Granted, for hot
> attach the assumption is that the guest will likely do a DHCP request after
> it brings up the NIC, or similar, but it might make sense to trigger an
> announce on device attachment to further reduce the latency of MAC
> propagation in this case.

I think in the case of virtio-net it triggers some re-announcements from the guest in some cases
(including the migration case above); I'm not sure what else does.

Note: Be careful when doing all this to make sure that when you fail/cancel a migration you
do an announce again on the source, to make sure you haven't partially swung it over to the
destination.

Comment 12 smooney 2021-11-24 14:59:15 UTC
> Note: Be careful when doing all this to make sure when you fail/cancel a
> migration you do an announce again on the source to make sure you haven't
> partially swung it over to the destination.


That is a very good point: the current work-in-progress patch that was proposed only calls self-announce in post_live_migrate, from which we cannot cancel or revert.
If we were to revert, however, we might want to announce on the source again, although I don't think libvirt/QEMU will currently announce until it is about to unpause
the guest on the destination, which will never result in us reverting.

We do need to be careful, but I think doing this only in post_live_migrate will be sufficient for now.

Comment 19 RHEL Program Management 2023-09-22 12:19:35 UTC
Issue migration from Bugzilla to Jira is in process at this time. This will be the last message in Jira copied from the Bugzilla bug.

Comment 20 RHEL Program Management 2023-09-22 12:19:56 UTC
This BZ has been automatically migrated to the issues.redhat.com Red Hat Issue Tracker. All future work related to this report will be managed there.

Due to differences in account names between systems, some fields were not replicated.  Be sure to add yourself to Jira issue's "Watchers" field to continue receiving updates and add others to the "Need Info From" field to continue requesting information.

To find the migrated issue, look in the "Links" section for a direct link to the new issue location. The issue key will have an icon of 2 footprints next to it, and begin with "RHEL-" followed by an integer.  You can also find this issue by visiting https://issues.redhat.com/issues/?jql= and searching the "Bugzilla Bug" field for this BZ's number, e.g. a search like:

"Bugzilla Bug" = 1234567

In the event you have trouble locating or viewing this issue, you can file an issue by sending mail to rh-issues. You can also visit https://access.redhat.com/articles/7032570 for general account information.

Comment 21 Red Hat Bugzilla 2024-01-21 04:25:13 UTC
The needinfo request[s] on this closed bug have been removed, as they have been unresolved for 120 days.

