| Field | Value | Field | Value |
|---|---|---|---|
| Summary: | 3.1 - VDSM is not able to recover from changing the RHEVM network from STATIC IP to DHCP | | |
| Product: | Red Hat Enterprise Linux 6 | Reporter: | zvi <zfridler> |
| Component: | vdsm | Assignee: | Igor Lvovsky <ilvovsky> |
| Status: | CLOSED WORKSFORME | QA Contact: | Martin Pavlik <mpavlik> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 6.4 | CC: | abaron, bazulay, gklein, iheim, lpeer, mpavlik, syeghiay, yeylon, ykaul |
| Target Milestone: | rc | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | network | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2012-07-10 06:29:56 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Attachments: | see comments below | | |
Created attachment 550285 [details]
vdsm log
Created attachment 550286 [details]
Routing table before restarting network service
Created attachment 550382 [details]
VDSM log after changing interface for management network ("rhevm")
Under the same settings, the host will lose connectivity to RHEVM if the user moves the rhevm (management) network from eth0 to eth1. Again, restarting the "network" service solves the problem.
Created attachment 550386 [details]
same problem when bonding rhevm
Under the same settings, the host will lose connectivity to RHEVM if the user attempts to change the rhevm (management) network from a single interface (eth0) to a bond of eth0 + eth1. Again, restarting the "network" service solves the problem.
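The loss of connectivity reported above is exactly what RHEV-M's "Check connectivity" option is supposed to catch: after pushing a network change, probe the engine and roll back if it is unreachable. A minimal sketch of such a reachability probe (the engine address and the rollback hook are hypothetical, not taken from this report):

```python
import socket

def can_reach(host, port, timeout=3.0):
    """Return True when a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers refused connections, unreachable networks, and timeouts.
        return False

# Hypothetical usage after applying a new network configuration:
# if not can_reach("rhevm.example.com", 443):
#     rollback_network_config()   # hypothetical rollback hook
```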
Since the RHEL 6.3 External Beta has begun and this bug remains unresolved, it has been rejected because it was not proposed as an exception or blocker. Red Hat invites you to ask your support representative to propose this request, if appropriate and relevant, for the next release of Red Hat Enterprise Linux.

Avi, could you reproduce this with the current code base?

(In reply to comment #8)
> Avi, could you reproduce this with the current code base?

I tried to reproduce the issue. In a slightly modified version, the bug is reproducible.

Steps to Reproduce:
1. Define two more logical networks beside RHEVM. In this case the following logical networks are defined on the host:
   - A: RHEVM on em1, IP/subnet 10.34.66.61/24
   - B: VLAN 4001 on p1p1, IP: none
   - C: DisplayNetwork on em2, IP: 10.34.67.2/27
2. GUI: Set the host to Maintenance -> Network Interfaces -> Setup Host Networks -> click the pencil icon on the rhevm NIC -> switch from static to DHCP.
3. Press OK, press OK.

The host is lost: no communication with RHEVM. Only restarting the network service solves the problem.

Enterprise Virtualization Engine Manager Version: 3.1.0_0001-9.el6ev
gpxe-roms-qemu-0.9.7-6.9.el6.noarch
libvirt-0.9.10-18.el6.x86_64
libvirt-client-0.9.10-18.el6.x86_64
libvirt-debuginfo-0.9.10-18.el6.x86_64
libvirt-devel-0.9.10-18.el6.x86_64
libvirt-python-0.9.10-18.el6.x86_64
qemu-img-rhev-0.12.1.2-2.292.el6.x86_64
qemu-kvm-rhev-0.12.1.2-2.292.el6.x86_64
qemu-kvm-rhev-debuginfo-0.12.1.2-2.292.el6.x86_64
qemu-kvm-rhev-tools-0.12.1.2-2.292.el6.x86_64
vdsm-4.9.6-10.el6.x86_64
vdsm-cli-4.9.6-10.el6.noarch
vdsm-python-4.9.6-10.el6.noarch

Created attachment 584329 [details]
vdsm.log + engine.log

Hi Martin,

In your test there could be a number of reasons for the described error:

1. Are you working in secure mode? If so, did you use the host IP or the host name when adding the host to your setup? I am asking because if you used the IP and the IP has since changed, communication with the host is lost because the certificates no longer match.
2. Was a DHCP service up and running during the test? Could it be that you did not get an IP from the DHCP server, and that this is the reason for losing the connection?

Hi Livnat, it seems that this issue is not reproducible any more.

Closing per comment #12.
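The certificate mismatch raised as reason 1 above comes down to a simple identity check: if the host was registered by its static IP and DHCP later hands out a different one, the name the engine verifies against no longer matches the certificate. A deliberately naive sketch of that comparison (the addresses are made up; real TLS verification is much stricter):

```python
def cert_matches(cert_subject: str, connect_addr: str) -> bool:
    """Naive identity check: the certificate subject must equal the
    address the engine uses to reach the host."""
    return cert_subject == connect_addr

# Host added by its static IP, certificate issued for that IP:
# cert_matches("10.35.102.24", "10.35.102.24")  -> match, connection works.
# After DHCP assigns a new address, verification fails:
# cert_matches("10.35.102.24", "10.35.102.99")  -> mismatch, host "lost".
```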
Description of problem:

VDSM is not able to recover from changing the RHEVM network from STATIC IP to DHCP (the DHCP server provides the same IP/subnet mask as the initial static configuration). This is true even when "Check connectivity" is set to ON.

This is the host's routing table before the change took place:

```
[root@orchid-vds1 ~]# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
10.35.102.0     *               255.255.255.0   U     0      0        0 rhevm
10.35.102.0     *               255.255.255.0   U     0      0        0 vlan163
10.35.102.0     *               255.255.255.0   U     0      0        0 zvitest
link-local      *               255.255.0.0     U     1004   0        0 eth2
link-local      *               255.255.0.0     U     1040   0        0 rhevm
link-local      *               255.255.0.0     U     1041   0        0 zvitest
link-local      *               255.255.0.0     U     1043   0        0 vlan163
default         10.35.102.254   0.0.0.0         UG    0      0        0 rhevm
[root@orchid-vds1 ~]#
```

Version-Release number of selected component (if applicable): 3.0.112.2

How reproducible: always

Steps to Reproduce:
1. Define two more logical networks beside RHEVM. In this bug the following logical networks are defined on the host:
   - A: RHEVM on eth0, IP/subnet 10.35.102.24/24
   - B: vlan163 on eth2, IP: 10.35.102.27/24
   - C: zvitest on eth3, IP: 10.35.102.26
2. GUI: Set the host to Maintenance, click "Edit Management Network", select DHCP, and make sure "Check connectivity" is checked.
3. Press OK.

Actual results:
The host is lost: no communication with RHEVM. Only restarting the network service solves the problem.

Expected results:
1. The RHEVM network should be set to DHCP IP settings.
2. Connectivity should be restored (old IP settings) if the new DHCP address prevents communication with RHEVM.

Additional info:

```
...ainProcess|Thread-2164::INFO::2012-01-02 16:23:35,675::configNetwork::417::root::(addNetwork) Adding bridge rhevm with vlan=None, bonding=None, nics=['eth0']. bondingOptions=None, options={'connectivityCheck': 'true', 'STP': 'no', 'connectivityTimeout': '120'}
```
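When connectivity is lost after the switch to DHCP, the first thing worth checking against the routing table above is whether the default route via the management bridge survived. A small sketch that extracts the default route's interface from `route`-style output (parsing assumptions: whitespace-separated columns, a first column of `default` or `0.0.0.0` marks the default route, and the interface is the last column):

```python
def default_route_iface(route_output: str):
    """Return the interface of the default route, or None if there is none."""
    for line in route_output.splitlines():
        cols = line.split()
        if cols and cols[0] in ("default", "0.0.0.0"):
            return cols[-1]  # Iface is the last column in route(8) output
    return None

# In the table above the default route goes via the 'rhevm' bridge; if this
# returns None after the change, the host has lost its way back to the engine.
```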