Description of problem:
This is unfriendly and surprising behavior that I think may cause issues at customers. It can be solved by applying this: https://docs.openshift.com/container-platform/4.6/virt/virtual_machines/vm_networking/virt-using-mac-address-pool-for-vms.html
However, this is unintuitive, and the docs don't make it clear (at least on my reading) that this is required. Maybe this should be the default, or at least be better highlighted in the documentation or UI.

Version-Release number of selected component (if applicable):
2.5.0

How reproducible:
Always

Steps to Reproduce:
1. Create a VM
2. Restart the VM on another node

Actual results:
The VM gets a new MAC address

Expected results:
The VM MAC address stays stable

Additional info:
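For reference, the opt-in described in the linked document is a namespace label; something along these lines should enable the pool for VMs in a given namespace (the namespace name below is just a placeholder):

$ oc label namespace <namespace> mutatevirtualmachines.kubemacpool.io=allocate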
kubemacpool should prevent this. @phoracek do you have an idea what might be happening here?
If I follow the bug description correctly, kubemacpool was not used in this case. The described issue is that it is not enabled by default (nor visible enough in our documentation). The reason it is still opt-in is that we want kubemacpool to be adopted incrementally: it sits in a sensitive place of the system and we did not want to rush it too hard. We plan to eventually enable it by default. Until then, we should probably find a better place for it in the documentation. Thanks a lot for raising this, it is really valuable feedback. Clark, until it becomes the default, would a note in https://docs.openshift.com/container-platform/4.6/virt/virtual_machines/vm_networking/virt-using-the-default-pod-network-with-virt.html and https://docs.openshift.com/container-platform/4.6/virt/virtual_machines/vm_networking/virt-attaching-vm-multiple-networks.html be good enough, do you think?
"Clark, until it becomes the default, would be a note in https://docs.openshift.com/container-platform/4.6/virt/virtual_machines/vm_networking/virt-using-the-default-pod-network-with-virt.html and https://docs.openshift.com/container-platform/4.6/virt/virtual_machines/vm_networking/virt-attaching-vm-multiple-networks.html good enough you think?" @phoracek I think that would be good enough, but more optimal, but more work, might be to have a "Next Steps" section in the OpenShift Virtualization Installation section part that links out to various "configuration activities" AFTER installing the operator that a user may wish to do, like enabling this MAC Address pool.
I like the "Next Steps" idea. Let me quote you in the documentation BZ sibling of this ticket: https://bugzilla.redhat.com/show_bug.cgi?id=1902217
We have now enabled KMP (kubemacpool) by default, so VMs should persist their MAC addresses.
MAC pool should now be enabled by default. Moving to QE for confirmation.
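For completeness: once KubeMacPool is on by default, the documented way to exclude a namespace from it is also a namespace label; a sketch (namespace name is a placeholder, and the exact label value should be checked against the current docs):

$ oc label namespace <namespace> mutatevirtualmachines.kubemacpool.io=ignore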
Created attachment 1766346 [details] vma.yaml
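The attached vma.yaml is not inlined in this comment; based on the description in the verification steps below (a nodeSelector plus an explicit masquerade pod interface), a minimal manifest of that shape might look roughly like the following. The node name, containerdisk image, and memory size are placeholders, not the actual attachment contents:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vma
spec:
  running: false
  template:
    spec:
      # Pin the VM to a specific worker node; moving the VM later is
      # done by editing this selector to point at a different node.
      nodeSelector:
        kubernetes.io/hostname: <worker-node-name>
      domain:
        devices:
          disks:
          - name: containerdisk
            disk:
              bus: virtio
          # Explicit default masquerade interface, so KubeMacPool
          # allocates its MAC address from the pool.
          interfaces:
          - name: default
            masquerade: {}
        resources:
          requests:
            memory: 1Gi
      networks:
      - name: default
        pod: {}
      volumes:
      - name: containerdisk
        containerDisk:
          image: <fedora-containerdisk-image>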
Verified on:
OCP version: 4.8.0-0.nightly-2021-03-25-063034
CNV version: 4.8.0
kubemacpool-container version: v4.8.0-10 (sha256:7615016aeaab1fe33cd8def110cec307d65a3c1663e55f5f1052c57bb8eb66a7)

Verified with the following scenario:

1. Create a new namespace:
$ oc create ns yoss-ns
namespace/yoss-ns created
$ oc project yoss-ns
Now using project "yoss-ns" on server "https://api.net-yoss-48.cnv-qe.rhcloud.com:6443".

2. Create a VM using the attached vma.yaml. Note that this VM manifest contains:
a. A nodeSelector, to schedule the VM on a specific node.
b. An explicit definition of the default masquerade pod interface, so that KubeMacPool is applied to it.
$ oc apply -f vma.yaml
virtualmachine.kubevirt.io/vma created

3. Check the KubeMacPool range:
$ oc get configmap -n openshift-cnv kubemacpool-mac-range-config -ojsonpath={.data};echo
{"RANGE_END":"02:1d:14:ff:ff:ff","RANGE_START":"02:1d:14:00:00:00"}

4. Start the VM, and wait for it to run:
[cnv-qe-jenkins@net-yoss-48-79kq6-executor yossi]$ virtctl start vma
VM vma was scheduled to start
[cnv-qe-jenkins@net-yoss-48-79kq6-executor yossi]$ oc get vmi -w
NAME   AGE   PHASE        IP            NODENAME
vma    6s    Scheduling
vma    36s   Scheduled                  net-yoss-48-79kq6-worker-0-8wpcp
vma    38s   Scheduled                  net-yoss-48-79kq6-worker-0-8wpcp
vma    38s   Running      10.129.2.98   net-yoss-48-79kq6-worker-0-8wpcp
vma    38s   Running      10.129.2.98   net-yoss-48-79kq6-worker-0-8wpcp
vma    38s   Running      10.129.2.98   net-yoss-48-79kq6-worker-0-8wpcp

5. Connect to the VM:
$ virtctl console vma
Successfully connected to vma console. The escape sequence is ^]
vma login: fedora
Password:
Last login: Tue Nov 17 09:47:49 on ttyS0
[fedora@vma ~]$

6. Check the MAC address of the primary interface of the running VM:
[fedora@vma ~]$ ip link show dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1350 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 02:1d:14:00:00:00 brd ff:ff:ff:ff:ff:ff
    altname enp1s0
Make sure that the MAC address is within the range found in step #3.

7. Stop the VM:
$ virtctl stop vma
VM vma was scheduled to stop

8. Edit the VM manifest, and change the nodeSelector value to the name of a different worker node in the cluster:
$ oc edit vm vma
virtualmachine.kubevirt.io/vma edited

9. Start the VM again, and wait for it to run:
$ virtctl start vma
VM vma was scheduled to start
[cnv-qe-jenkins@net-yoss-48-79kq6-executor yossi]$ oc get vmi vma
NAME   AGE   PHASE        IP            NODENAME
vma    2s    Scheduling
vma    57s   Scheduled                  net-yoss-48-79kq6-worker-0-v2jts
vma    59s   Scheduled                  net-yoss-48-79kq6-worker-0-v2jts
vma    59s   Running      10.128.2.87   net-yoss-48-79kq6-worker-0-v2jts
vma    59s   Running      10.128.2.87   net-yoss-48-79kq6-worker-0-v2jts
vma    59s   Running      10.128.2.87   net-yoss-48-79kq6-worker-0-v2jts

10. Connect to the VM again:
$ virtctl console vma
Successfully connected to vma console. The escape sequence is ^]
vma login: fedora
Password:
Last login: Tue Nov 17 09:47:49 on ttyS0
[fedora@vma ~]$

11. Check the MAC address of the primary interface of the running VM again:
[fedora@vma ~]$ ip link show dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1350 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 02:1d:14:00:00:00 brd ff:ff:ff:ff:ff:ff
    altname enp1s0
The MAC address remained the same as the one found in step #6.
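As a side note, the allocated MAC address can also be read from the VMI status without logging into the guest, assuming the interface is reported there; something like:

$ oc get vmi vma -o jsonpath='{.status.interfaces[0].mac}'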
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Virtualization 4.8.0 Images), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2021:2920