Bug 1909493 - The VM eth0 binding on pod networking could not get DHCP ip address after a live migration completed
Summary: The VM eth0 binding on pod networking could not get DHCP ip address after a live migration completed
Keywords:
Status: CLOSED DUPLICATE of bug 1907988
Alias: None
Product: Container Native Virtualization (CNV)
Classification: Red Hat
Component: Virtualization
Version: 2.6.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: sgott
QA Contact: Israel Pinto
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-12-20 12:49 UTC by Feng Wang
Modified: 2021-01-03 07:37 UTC (History)
4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-01-03 07:37:10 UTC
Target Upstream Version:
Embargoed:



Description Feng Wang 2020-12-20 12:49:26 UTC
Description of problem:

After a VM live migration completed, the virt-launcher pod got a new IP address; however, the VM's eth0 interface, bound to the pod network, could not get a DHCP IP address. Before the live migration, eth0 obtained a DHCP IP address normally.

Version-Release number of selected component (if applicable):

OCP 4.6.7

How reproducible:

Always. Install a three-node OCP 4.6.7 cluster and the CNV 2.5.2 operator, then create a RHEL 7.8 VM and perform a live migration; the issue will occur.


Steps to Reproduce:
1. Install a three-node OCP 4.6.7 cluster
2. Install the OpenShift Virtualization operator
3. Create a VM and install RHEL 7.8 in it; add two network interfaces to the VM: nic-0, bound to the pod network, and nic-1, bound to a Multus cnv-bridge network
4. Perform a live migration; the pod IP address will change, but the VM's eth0 cannot get a DHCP IP address
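The dual-NIC layout in step 3 can be sketched as the interface section of a KubeVirt VirtualMachineInstance spec. This is a minimal sketch, not taken from the reporter's actual manifest; the binding types and the NetworkAttachmentDefinition name `cnv-bridge-net` are assumptions.

```yaml
# Sketch of the relevant VMI spec fragment; names other than nic-0/nic-1
# are hypothetical.
spec:
  domain:
    devices:
      interfaces:
        - name: nic-0
          bridge: {}        # assumed binding on the default pod network
        - name: nic-1
          bridge: {}        # binding on the Multus cnv-bridge network
  networks:
    - name: nic-0
      pod: {}               # default pod network
    - name: nic-1
      multus:
        networkName: cnv-bridge-net   # hypothetical NetworkAttachmentDefinition
```

Note that a bridge binding on the pod network ties the guest's DHCP lease to the launcher pod's IP, which changes across a live migration.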

Actual results:

The VM's eth0 interface cannot get a DHCP IP address.

Expected results:

The VM's eth0 interface gets a DHCP IP address normally.

Additional info:

Comment 1 Yaacov Zamir 2020-12-21 08:23:30 UTC
This sounds like a virtualization issue, moving to virtualization.

Comment 3 Alona Kaplan 2020-12-31 08:28:39 UTC
May be a duplicate of https://bugzilla.redhat.com/1907988

Feng, can you please add the `ip a` output of the VM before and after the migration, and the log of the VM's virt-launcher compute container?

Comment 4 Feng Wang 2021-01-03 02:26:15 UTC
Before:

# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc mq state UP group default qlen 1000
    link/ether 02:00:00:59:99:21 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.2/24 brd 10.0.2.255 scope global noprefixroute dynamic eth0
       valid_lft 86313588sec preferred_lft 86313588sec
    inet6 fe80::40e7:84f9:8898:c51c/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether a2:75:1f:09:86:32 brd ff:ff:ff:ff:ff:ff
    inet 192.168.190.190/24 brd 192.168.190.255 scope global noprefixroute eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::7d18:f8d6:2d75:3b4e/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever

After migration, before restarting the VM:

# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc mq state UP group default qlen 1000
    link/ether 02:00:00:59:99:21 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.2/24 brd 10.0.2.255 scope global noprefixroute dynamic eth0
       valid_lft 86313527sec preferred_lft 86313527sec
    inet6 fe80::40e7:84f9:8898:c51c/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether a2:75:1f:09:86:32 brd ff:ff:ff:ff:ff:ff
    inet 192.168.190.190/24 brd 192.168.190.255 scope global noprefixroute eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::7d18:f8d6:2d75:3b4e/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever

Then reboot the VM from console:

# reboot

# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc mq state UP group default qlen 1000
    link/ether 02:00:00:59:99:21 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::40e7:84f9:8898:c51c/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether a2:75:1f:09:86:32 brd ff:ff:ff:ff:ff:ff
    inet 192.168.190.190/24 brd 192.168.190.255 scope global noprefixroute eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::7d18:f8d6:2d75:3b4e/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever

The VM eth0 MAC address is unchanged, but after the reboot eth0 cannot get a DHCP address.
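For anyone reproducing this, a guest-side DHCP retry can confirm whether the lease request itself is failing (a diagnostic sketch for a RHEL 7 guest, run from the VM console; this is not a fix for the underlying bug):

```shell
# Run inside the guest via the VM console (diagnostic only).
dhclient -r eth0           # release any stale lease state
dhclient -v eth0           # request a new lease, with verbose output
ip -4 addr show dev eth0   # check whether an IPv4 address was obtained
```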

Comment 5 Feng Wang 2021-01-03 02:39:09 UTC
There are two virt-launcher pods: virt-launcher-mysql-wh2sd in Completed status, and virt-launcher-mysql-v6vrj in Running status. Please get the logs from Google Drive:

virt-launcher-mysql-wh2sd https://drive.google.com/file/d/1yEI5qB3z8NdpqbLz1JIe96vixkCoxdTp/view?usp=sharing
virt-launcher-mysql-v6vrj https://drive.google.com/file/d/1fOGN0wnw0QqLU4ZlRF1wYM6W0JFlO4S7/view?usp=sharing

Comment 6 Alona Kaplan 2021-01-03 07:37:10 UTC
Feng, thanks for the info.

The fact that the VM loses its internal IP just after a restart from the console was missing from the original description.
Closing as a duplicate of bug 1907988.

*** This bug has been marked as a duplicate of bug 1907988 ***

