Bug 2020313 - If an upper case mac address is used in a heat template, live migration won't work in nova
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-nova
Version: 16.1 (Train)
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: z8
Target Release: 16.1 (Train on RHEL 8.2)
Assignee: Alex Stupnikov
QA Contact: OSP DFG:Compute
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2021-11-04 15:16 UTC by David Hill
Modified: 2023-03-21 19:48 UTC
CC List: 10 users

Fixed In Version: openstack-nova-20.4.1-1.20220112153422.1ee93b9.el8ost
Doc Type: No Doc Update
Doc Text:
Clone Of:
Cloned to: 2036690
Environment:
Last Closed: 2022-03-24 11:04:43 UTC
Target Upstream Version:
Embargoed:




Links
- Launchpad bug 1945646 (last updated 2021-11-08 00:17:01 UTC)
- OpenStack gerrit change 811947, MERGED: "Ensure MAC addresses characters are in the same case" (last updated 2021-11-08 00:17:32 UTC)
- Red Hat Issue Tracker OSP-10670 (last updated 2021-11-15 12:29:49 UTC)
- Red Hat Product Errata RHSA-2022:0983 (last updated 2022-03-24 11:04:58 UTC)

Description David Hill 2021-11-04 15:16:18 UTC
Description of problem:
If an upper case MAC address is used in a heat template, live migration won't work in nova; it fails with an error that reports only a MAC address:

2021-10-21 15:58:26.940 7 DEBUG nova.compute.manager [-] live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo,LibvirtLiveMigrateBDMInfo,LibvirtLiveMigrateBDMInfo,LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=18070528,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=False,filename='tmplkhegqcj',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=10.10.10.10,image_type='rbd',instance_relative_path='ad284904-ce29-4d1f-934f-03e5965f204f',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=Migration(d7eb618b-db4e-4ece-9db4-503edcfcea75),old_vol_attachment_ids={12b00e34-e66a-4844-9fd6-4245d782eb95='993e1340-5ac0-49cd-888e-8c45da904760',8b78fffa-2c93-4a7b-8010-6328b8934005='c25bd420-cc32-42a6-819e-c28cb975d33f',bce78f78-84eb-4281-8ca4-a19410872935='4f1abaa9-d3d3-472d-8ebf-6a11b6a9195c',be025b5e-59a1-4502-8aba-99ff7de3414d='27b27148-4353-4a0f-9adf-d1873461a458'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr='compute00-animal-dal10.ctlplane.ole.redhat.com',vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.6/site-packages/nova/compute/manager.py:7244
2021-10-21 15:58:26.943 7 DEBUG nova.virt.libvirt.driver [-] [instance: ad284904-ce29-4d1f-934f-03e5965f204f] Starting monitoring of live migration _live_migration /usr/lib/python3.6/site-packages/nova/virt/libvirt/driver.py:9189
2021-10-21 15:58:26.946 7 DEBUG nova.virt.libvirt.driver [-] [instance: ad284904-ce29-4d1f-934f-03e5965f204f] Operation thread is still running _live_migration_monitor /usr/lib/python3.6/site-packages/nova/virt/libvirt/driver.py:8990
2021-10-21 15:58:26.946 7 DEBUG nova.virt.libvirt.driver [-] [instance: ad284904-ce29-4d1f-934f-03e5965f204f] Migration not running yet _live_migration_monitor /usr/lib/python3.6/site-packages/nova/virt/libvirt/driver.py:8999
2021-10-21 15:58:26.961 7 DEBUG nova.virt.libvirt.migration [-] Find same serial number: pos=0, serial=12b00e34-e66a-4844-9fd6-4245d782eb95 _update_volume_xml /usr/lib/python3.6/site-packages/nova/virt/libvirt/migration.py:232
2021-10-21 15:58:26.964 7 DEBUG nova.virt.libvirt.migration [-] Find same serial number: pos=1, serial=8b78fffa-2c93-4a7b-8010-6328b8934005 _update_volume_xml /usr/lib/python3.6/site-packages/nova/virt/libvirt/migration.py:232
2021-10-21 15:58:26.966 7 DEBUG nova.virt.libvirt.migration [-] Find same serial number: pos=2, serial=be025b5e-59a1-4502-8aba-99ff7de3414d _update_volume_xml /usr/lib/python3.6/site-packages/nova/virt/libvirt/migration.py:232
2021-10-21 15:58:26.969 7 DEBUG nova.virt.libvirt.migration [-] Find same serial number: pos=3, serial=bce78f78-84eb-4281-8ca4-a19410872935 _update_volume_xml /usr/lib/python3.6/site-packages/nova/virt/libvirt/migration.py:232
2021-10-21 15:58:26.970 7 ERROR nova.virt.libvirt.driver [-] [instance: ad284904-ce29-4d1f-934f-03e5965f204f] Live Migration failure: '00:00:00:00:fa:0a'
2021-10-21 15:58:26.970 7 DEBUG nova.virt.libvirt.driver [-] [instance: ad284904-ce29-4d1f-934f-03e5965f204f] Migration operation thread notification thread_finished /usr/lib/python3.6/site-packages/nova/virt/libvirt/driver.py:9180
2021-10-21 15:58:27.448 7 DEBUG nova.virt.libvirt.migration [-] [instance: ad284904-ce29-4d1f-934f-03e5965f204f] VM running on src, migration failed _log /usr/lib/python3.6/site-packages/nova/virt/libvirt/migration.py:419
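The "Live Migration failure" line above is a KeyError that surfaces with only the missing key, the lower-cased MAC. Below is a minimal sketch of the mechanism, not nova's actual code (the function names and the plain list of VIF dicts are illustrative): nova keys its migrate-VIF data by the MAC stored in its database, which preserves the case used in the heat template, and then resolves each interface from the libvirt domain XML, where MACs are reported in lower case. The merged upstream change (gerrit 811947, "Ensure MAC addresses characters are in the same case") normalizes the case before the lookup.

def pair_vifs_with_xml(xml_macs, migrate_vifs):
    # Keys keep the database case, e.g. "00:00:00:00:FA:0A" from the template.
    vif_by_mac = {vif["address"]: vif for vif in migrate_vifs}
    # The libvirt XML reports "00:00:00:00:fa:0a"; the case mismatch raises
    # KeyError('00:00:00:00:fa:0a'), rendered above as
    # "Live Migration failure: '00:00:00:00:fa:0a'".
    return [vif_by_mac[mac] for mac in xml_macs]

# With the fix, both sides are lower-cased before the lookup:
def pair_vifs_with_xml_fixed(xml_macs, migrate_vifs):
    vif_by_mac = {vif["address"].lower(): vif for vif in migrate_vifs}
    return [vif_by_mac[mac.lower()] for mac in xml_macs]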




Version-Release number of selected component (if applicable):
All

How reproducible:
Always

Steps to Reproduce:
1. Create a heat template that defines ports with upper case mac_address values
2. Deploy the stack
3. Try to live migrate an instance from the stack

Actual results:
Live migration fails

Expected results:
Live migration succeeds regardless of the MAC address case in the template

Additional info:
If we go into the database directly, convert the MACs to lowercase, and then stop/start the VMs, live migration succeeds. I've seen some instance network cache info debug logs containing the MAC in uppercase, so perhaps the issue is just with the instance network info cache?
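For illustration, a minimal sketch of that manual clean-up, assuming the cached blob is a JSON list of VIF dicts with an "address" field holding the MAC; the helper name and the schema details are assumptions, so verify them against your deployment (and back up the database) before changing anything:

import json

def lowercase_cached_macs(network_info_json):
    """Lower-case every VIF MAC in an instance network info cache blob.

    Assumes the blob is a JSON list of VIF dicts with an "address" key,
    per the standard nova schema; verify before applying to a real DB.
    """
    vifs = json.loads(network_info_json)
    for vif in vifs:
        if "address" in vif:
            vif["address"] = vif["address"].lower()
    return json.dumps(vifs)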

Comment 17 errata-xmlrpc 2022-03-24 11:04:43 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat OpenStack Platform 16.1 (openstack-nova) security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:0983

