Description of problem:
After a successful migration, the new pod has a new IP address, but "oc get vmi" and "oc describe vmi" still show the old one:
# oc get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
172.16.0.17 cnv-executor-dshchedr-master1.example.com <none>
virt-launcher-vm-cirros-pvc-cdi-kr6nh 0/1 Completed 0 44m 10.129.0.21 cnv-executor-dshchedr-node1.example.com <none>
virt-launcher-vm-cirros-pvc-cdi-rl6ww 1/1 Running 0 37m 10.130.0.23 cnv-executor-dshchedr-node2.example.com <none>
# oc get vmi
NAME AGE PHASE IP NODENAME
vm-cirros-pvc-cdi 52m Running 10.129.0.21 cnv-executor-dshchedr-node2.example.com
# oc describe vmi | grep Ip
Ip Address: 10.129.0.21
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. Create a VM
2. Run a migration
3. Check the IP address (see the example commands below)
Actual results:
"oc get vmi" shows the obsolete IP address
Expected results:
The IP address should be updated
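For step 3, a quick way to compare the VMI-reported address with the address of the running target pod (a sketch; the VMI name comes from the output above, and the jsonpath path assumes the interfaces list in the KubeVirt VMI status API):
$ oc get vmi vm-cirros-pvc-cdi -o jsonpath='{.status.interfaces[0].ipAddress}{"\n"}'
$ oc get pod -l kubevirt.io=virt-launcher -o wide    # the IP column of the Running launcher pod is the current address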
It's only definitely a bug if the IP of a NIC on the default (pod) network wasn't updated.
IOW, only the IP of a NIC attached to the pod network must return the new IP (of the target pod) after migration.
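To see which reported address belongs to which NIC, the per-interface entries in the VMI status can be listed; a hedged example (the VMI name is illustrative, and the interface names depend on the spec):
$ oc get vmi vm-fedora-cloudinit -o jsonpath='{range .status.interfaces[*]}{.name}{"\t"}{.ipAddress}{"\n"}{end}'
Interfaces whose name maps to a pod network entry under spec.networks should report the target pod's IP after migration; interfaces mapped to secondary (Multus) networks keep their own addressing.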
I actually wonder if this bug is a dupe of bug 1693532
https://github.com/kubevirt/kubevirt/pull/2405 is related to this BZ (it prevents the scenario that allowed this to happen).
To verify: this scenario is no longer possible because migration via a bridge interface is disabled.
Verified on hco-bundle-registry:v2.1.0-62:
Migration with Bridge - blocked
Migration with Masquerade - allowed, but "oc get vmi" still shows the obsolete IP address:
$ oc get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
virt-launcher-vm-fedora-cloudinit-g7dz9 2/2 Running 0 46s 10.130.0.65 host-172-16-0-34 <none> <none>
virt-launcher-vm-fedora-cloudinit-x8lfb 0/2 Completed 0 11m 10.131.0.56 host-172-16-0-18 <none> <none>
$ oc get vmi
NAME AGE PHASE IP NODENAME
vm-fedora-cloudinit 11m Running 10.131.0.56 host-172-16-0-34
The VM is accessible through the new IP address, but "oc get vmi" and "oc describe vmi" show the old one. The only way to find the new IP address is to check the active pod's IP. Is it possible to update the IP? We are already updating the NODENAME field; can we do the same with the IP?
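For reference, a hedged workaround to fetch the current pod IP for a given VM until the status field is updated (this assumes the vm.kubevirt.io/name label is set on the virt-launcher pods):
$ oc get pod -l vm.kubevirt.io/name=vm-fedora-cloudinit --field-selector=status.phase=Running -o jsonpath='{.items[0].status.podIP}{"\n"}'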
Good catch! As you've observed, we actually don't make any effort to track/change the IP field. Sorry for the confusion, but https://github.com/kubevirt/kubevirt/pull/2405 doesn't address this scenario.
Audrey, can we add a known issue for this bug?
The question was raised as to whether a service connected to the VM's pod would still be able to route to/reach the new pod post-migration. Yes, it will. Services generally use selectors to define a logical set of pods. Since the relevant metadata on the virt-launcher pod won't change, the service will still effectively point to the correct pod.
Thus in that sense, this bug can be considered visual only. For this IP to matter, somebody would need to be trying to connect directly to the (old) pod IP from within the cluster.
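For illustration, a minimal Service sketch that reaches the VM by label selector rather than by pod IP; the special: key label is hypothetical and would have to be set on the VM template so the virt-launcher pod carries it (virtctl expose can create a similar object):
$ oc apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: vm-fedora-ssh
spec:
  selector:
    special: key          # hypothetical label shared with the virt-launcher pod
  ports:
  - port: 22
    targetPort: 22
EOF
Because the selector matches labels rather than an IP, the Service endpoints follow the target pod after migration.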
Fabian, if this is no longer something for the release notes, can you please remove it from the PR? https://github.com/openshift/openshift-docs/pull/16756
@Audrey please keep it in the release notes for now, as we are not sure it will get fixed in the 2.1 timeframe.
This is a release note for 2.1 as this bug will not be fixed in 2.1.
This falls into the category of bugs which require us to update the VMI after live migration (and maybe after other events in the future, like suspend/resume).
When a masquerade binding is used (which is the default in 2.2), this bug should be gone.
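For context, a minimal sketch of a VMI that uses the masquerade binding on the pod network (the name and container disk image are illustrative, not from this bug):
$ oc apply -f - <<EOF
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachineInstance
metadata:
  name: vmi-masquerade-example
spec:
  domain:
    devices:
      disks:
      - name: containerdisk
        disk:
          bus: virtio
      interfaces:
      - name: default
        masquerade: {}        # guest traffic is NATed to/from the pod IP
    resources:
      requests:
        memory: 64Mi
  networks:
  - name: default
    pod: {}
  volumes:
  - name: containerdisk
    containerDisk:
      image: kubevirt/cirros-container-disk-demo
EOF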
Moving back to NEW. This issue in particular has not yet been addressed. As Denys explained in Comment #8, this bug is specifically about the apparent pod IP as seen from the rest of the cluster. We need to update this field in the VMI's status.
Should be fixed by https://github.com/kubevirt/kubevirt/pull/2963
What's the status of that PR?
While fixing this, we found a small corner-case race condition in updating the VMI status when masquerade and the guest agent were both present.
So this PR is somewhat blocked by https://github.com/kubevirt/kubevirt/pull/3063
Both seem close to finished to me.
Latest PR fixing this issue: https://github.com/kubevirt/kubevirt/pull/3642
This got merged upstream https://github.com/kubevirt/kubevirt/pull/3642