Bug 1900273 - VM MAC Address changes every time VM is restarted
Summary: VM MAC Address changes every time VM is restarted
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Container Native Virtualization (CNV)
Classification: Red Hat
Component: Networking
Version: 2.5.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: 4.8.0
Assignee: Petr Horáček
QA Contact: Meni Yakove
URL:
Whiteboard:
Depends On:
Blocks: 1902217
 
Reported: 2020-11-21 21:08 UTC by Clark Hale
Modified: 2021-07-27 14:22 UTC (History)
7 users

Fixed In Version: kubemacpool-container-v4.8.0-9
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1902217 (view as bug list)
Environment:
Last Closed: 2021-07-27 14:21:17 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
vma.yaml (1.41 KB, text/plain)
2021-03-25 15:20 UTC, Yossi Segev


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2021:2920 0 None None None 2021-07-27 14:22:27 UTC

Description Clark Hale 2020-11-21 21:08:40 UTC
Description of problem:
This is unfriendly and surprising behavior that I think may cause issues for customers.

This can be solved by applying this:
https://docs.openshift.com/container-platform/4.6/virt/virtual_machines/vm_networking/virt-using-mac-address-pool-for-vms.html

However, this is unintuitive, and the docs don't make it clear (at least on my reading) that this is required. Maybe this should be the default, or at least better highlighted in the documentation or UI.
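For reference, the opt-in described in the linked docs works per namespace: labeling a namespace tells KubeMacPool to allocate MACs for VMs created in it. A minimal sketch (the namespace name is hypothetical; the label key/value are the ones used by the upstream kubemacpool project and should be confirmed against the linked documentation):

```yaml
# Sketch: opt a namespace into KubeMacPool MAC allocation (CNV 2.5-era opt-in).
apiVersion: v1
kind: Namespace
metadata:
  name: my-vm-namespace   # hypothetical namespace name
  labels:
    mutatevirtualmachines.kubemacpool.io: allocate
```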

Version-Release number of selected component (if applicable):
2.5.0

How reproducible:
Always

Steps to Reproduce:
1. Create VM
2. Restart VM on another node

Actual results:
VM Gets new MAC Address

Expected results:
VM MAC address stays stable

Additional info:

Comment 1 Fabian Deutsch 2020-11-26 09:55:08 UTC
kubemacpool should prevent this.

@phoracek do you have an idea what might be happening here?

Comment 2 Petr Horáček 2020-11-26 11:44:08 UTC
If I follow the bug description correctly, kubemacpool was not used in this case. The described issue is that it is not enabled by default (nor made more visible in our documentation).

The reason it is still opt-in is that we want kubemacpool to be adopted incrementally. It sits in a sensitive place in the system and we did not want to rush it too hard.

We plan to eventually enable it by default. Until then, we should probably find a better place for it in the documentation. Thanks a lot for raising this; it is really valuable feedback.

Clark, until it becomes the default, would be a note in https://docs.openshift.com/container-platform/4.6/virt/virtual_machines/vm_networking/virt-using-the-default-pod-network-with-virt.html and https://docs.openshift.com/container-platform/4.6/virt/virtual_machines/vm_networking/virt-attaching-vm-multiple-networks.html good enough you think?

Comment 6 Clark Hale 2020-11-30 15:09:16 UTC
"Clark, until it becomes the default, would be a note in https://docs.openshift.com/container-platform/4.6/virt/virtual_machines/vm_networking/virt-using-the-default-pod-network-with-virt.html and https://docs.openshift.com/container-platform/4.6/virt/virtual_machines/vm_networking/virt-attaching-vm-multiple-networks.html good enough you think?"

@phoracek I think that would be good enough, but more optimal, though more work, would be a "Next Steps" section in the OpenShift Virtualization installation section that links out to various "configuration activities" a user may wish to perform AFTER installing the operator, like enabling this MAC address pool.

Comment 7 Petr Horáček 2020-12-02 15:32:30 UTC
I like the idea with next steps. Let me quote you in the Documentation BZ sibling of this ticket: https://bugzilla.redhat.com/show_bug.cgi?id=1902217

Comment 8 Petr Horáček 2021-02-11 13:53:57 UTC
We have now enabled KMP (kubemacpool) by default, so VMs should persist their MACs.

Comment 9 Petr Horáček 2021-03-18 09:19:26 UTC
The MAC pool should now be enabled by default. Moving to QE for confirmation.

Comment 10 Yossi Segev 2021-03-25 15:20:12 UTC
Created attachment 1766346 [details]
vma.yaml

Comment 11 Yossi Segev 2021-03-25 15:21:38 UTC
Verified on:                                                                                                                                                                                  
OCP version: 4.8.0-0.nightly-2021-03-25-063034
CNV version: 4.8.0
kubemacpool-container version: v4.8.0-10 (sha256:7615016aeaab1fe33cd8def110cec307d65a3c1663e55f5f1052c57bb8eb66a7)


Verified with the following scenario:
1. Create a new namespace:
$ oc create ns yoss-ns
namespace/yoss-ns created
$ oc project yoss-ns
Now using project "yoss-ns" on server "https://api.net-yoss-48.cnv-qe.rhcloud.com:6443".

2. Create a VM using the attached vma.yaml.
Note that this VM manifest contains:
 a. nodeSelector to schedule the VM on a specific node.
 b. Explicit definition of the default masquerade pod interface, in order for the KubeMacpool to be applied on it.
$ oc apply -f vma.yaml
virtualmachine.kubevirt.io/vma created
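The attachment itself is not inlined here; a minimal sketch of the relevant parts of such a manifest (the node name matches the transcript below; field values other than the interface/network layout are illustrative, not the actual attachment):

```yaml
# Sketch of the relevant vma.yaml sections: nodeSelector plus an explicit
# masquerade pod interface, so KubeMacPool allocates a MAC for it.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vma
spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/hostname: net-yoss-48-79kq6-worker-0-8wpcp  # pin to a node
      domain:
        devices:
          interfaces:
          - name: default
            masquerade: {}    # explicit default interface definition
      networks:
      - name: default
        pod: {}               # default pod network
```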

3. Check the KubeMacPool range: 
$ oc get configmap -n openshift-cnv kubemacpool-mac-range-config -ojsonpath={.data};echo
{"RANGE_END":"02:1d:14:ff:ff:ff","RANGE_START":"02:1d:14:00:00:00"}

4. Start the VM, and wait for it to run:
[cnv-qe-jenkins@net-yoss-48-79kq6-executor yossi]$ virtctl start vma
VM vma was scheduled to start  
[cnv-qe-jenkins@net-yoss-48-79kq6-executor yossi]$ oc get vmi -w
NAME   AGE   PHASE        IP    NODENAME
vma    6s    Scheduling         
vma    36s   Scheduled          net-yoss-48-79kq6-worker-0-8wpcp
vma    38s   Scheduled          net-yoss-48-79kq6-worker-0-8wpcp
vma    38s   Running      10.129.2.98   net-yoss-48-79kq6-worker-0-8wpcp
vma    38s   Running      10.129.2.98   net-yoss-48-79kq6-worker-0-8wpcp
vma    38s   Running      10.129.2.98   net-yoss-48-79kq6-worker-0-8wpcp

5. Connect to the VM:                                               
$ virtctl console vma
Successfully connected to vma console. The escape sequence is ^]
                                        
vma login: fedora
Password:
Last login: Tue Nov 17 09:47:49 on ttyS0 
[fedora@vma ~]$ 

6. Check the MAC address of the primary interface of the running VM:
[fedora@vma ~]$ ip link show dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1350 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 02:1d:14:00:00:00 brd ff:ff:ff:ff:ff:ff
    altname enp1s0
Make sure that the MAC address is within the range you found in step #3.
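The range check in steps #3 and #6 can be automated by comparing the MAC addresses numerically; a minimal sketch using the range boundaries reported by the ConfigMap above:

```python
def mac_to_int(mac: str) -> int:
    """Convert a colon-separated MAC address to an integer for comparison."""
    return int(mac.replace(":", ""), 16)

def mac_in_range(mac: str, range_start: str, range_end: str) -> bool:
    """Return True if `mac` falls within [range_start, range_end], inclusive."""
    return mac_to_int(range_start) <= mac_to_int(mac) <= mac_to_int(range_end)

# Values from the kubemacpool-mac-range-config ConfigMap in step #3
print(mac_in_range("02:1d:14:00:00:00", "02:1d:14:00:00:00", "02:1d:14:ff:ff:ff"))  # True
print(mac_in_range("52:54:00:12:34:56", "02:1d:14:00:00:00", "02:1d:14:ff:ff:ff"))  # False
```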

7. Stop the VM:
$ virtctl stop vma
VM vma was scheduled to stop

8. Edit the VM manifest, and change the nodeSelector value to the name of a different worker node in the cluster:
$ oc edit vm vma
virtualmachine.kubevirt.io/vma edited

9. Start the VM again, and wait for it to run:
$ virtctl start vma
VM vma was scheduled to start
[cnv-qe-jenkins@net-yoss-48-79kq6-executor yossi]$ oc get vmi vma
NAME   AGE   PHASE        IP    NODENAME
vma    2s    Scheduling
vma    57s   Scheduled         net-yoss-48-79kq6-worker-0-v2jts
vma    59s   Scheduled         net-yoss-48-79kq6-worker-0-v2jts
vma    59s   Running     10.128.2.87   net-yoss-48-79kq6-worker-0-v2jts
vma    59s   Running     10.128.2.87   net-yoss-48-79kq6-worker-0-v2jts
vma    59s   Running     10.128.2.87   net-yoss-48-79kq6-worker-0-v2jts

10. Connect to the VM again:
$ virtctl console vma
Successfully connected to vma console. The escape sequence is ^]

vma login: fedora
Password:
Last login: Tue Nov 17 09:47:49 on ttyS0
[fedora@vma ~]$

11. Check the MAC address of the primary interface of the running VM again:
[fedora@vma ~]$ ip link show dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1350 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 02:1d:14:00:00:00 brd ff:ff:ff:ff:ff:ff
    altname enp1s0

The MAC address remained the same as the one found in step #6.

Comment 14 errata-xmlrpc 2021-07-27 14:21:17 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Virtualization 4.8.0 Images), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:2920

