Description of problem:
After adding a logical network to a RHEV-H host in RHEV-M, if the logical network is on the same subnet as the rhevm management network, the default route moves to the logical network device, which causes RHEV-H to lose its connection to RHEV-M.

[root@localhost admin]# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
10.66.8.0       *               255.255.252.0   U     0      0        0 rhevm
10.66.8.0       *               255.255.252.0   U     0      0        0 tt
link-local      *               255.255.0.0     U     1016   0        0 rhevm
link-local      *               255.255.0.0     U     1017   0        0 tt
default         10.66.11.254    0.0.0.0         UG    0      0        0 tt

Version-Release number of selected component (if applicable):
6.2-20120117.0

How reproducible:
100%

Steps to Reproduce:
1. Register RHEV-H to RHEV-M and approve it in RHEV-M.
2. Add a logical network named "tt".
3. In the host's network interface panel, select a NIC that is not used by the rhevm network.
4. Add/edit the network for that NIC, attaching the logical network "tt".
5. After the logical network is up, wait a few minutes.

Actual results:
The RHEV-H status changes to "Connecting" and finally becomes "Non Responsive".

Expected results:
RHEV-H should keep its connection to RHEV-M after the logical network is added.

Additional info:
If the added logical network device is not on the same subnet as rhevm, there is no issue.
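A quick way to confirm from the host which device currently owns the default route (a minimal sketch; the here-doc reproduces the routing table above, and on a live host you would pipe `route -n` or `ip route` in instead):

```shell
#!/bin/sh
# Sketch: report which device owns the default route, to confirm whether
# it moved from "rhevm" to the new logical network device "tt".
default_route_iface() {
  awk '$1 == "default" || $1 == "0.0.0.0" { print $NF }'
}

# Sample input copied from the routing table in this report; on a live
# host, use:  route -n | default_route_iface
default_route_iface <<'EOF'
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
10.66.8.0       *               255.255.252.0   U     0      0        0 rhevm
10.66.8.0       *               255.255.252.0   U     0      0        0 tt
default         10.66.11.254    0.0.0.0         UG    0      0        0 tt
EOF
# prints: tt
```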
Once RHEV-H is registered to RHEV-M, all network configuration is owned by RHEV-M. This needs to be looked at by the vdsm and/or RHEV-M teams to see how, or whether, it can be fixed.
This also happens on a RHEL 6.3 host, tested with vdsm-4.9.6-7 and RHEV-M SI2.1.

Steps:
1. The RHEL 6.3 host has three NICs, all on the same network and able to obtain IPs in the same subnet.
2. Add the host to RHEV-M; the default management network "rhevm" is on eth0.
3. Create a logical network on the other NICs (a bond of eth1 and eth2).
4. Bring the logical network up.
5. RHEV-M loses its connection to the RHEL 6.3 host because the default route moves from the "rhevm" network to the logical network device.

This makes me wonder: is the logical network allowed to be on the same subnet as rhevm at all?
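For reference, classic RHEL initscripts can pin the default route to a specific device via GATEWAYDEV in /etc/sysconfig/network. Whether vdsm preserves such a setting across its addNetwork calls is an open question, so this is only a sketch of a possible workaround, not a verified fix (the hostname value is a placeholder):

```
# /etc/sysconfig/network -- sketch of a possible workaround, NOT verified
# to survive vdsm's ownership of the network configuration
NETWORKING=yes
HOSTNAME=rhevh.example.com
GATEWAYDEV=rhevm   # keep the default route on the management bridge
```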
Since RHEL 6.3 External Beta has begun, and this bug remains unresolved, it has been rejected as it is not proposed as exception or blocker. Red Hat invites you to ask your support representative to propose this request, if appropriate and relevant, in the next release of Red Hat Enterprise Linux.
Could you supply the output of getVdsCaps and ifconfig from the host console (after networking is lost)? What happens if you `service network restart`? Would you provide vdsm.log with the relevant addNetwork calls?

Having two networks on the same subnet is bound to end in tears. Since this does not seem to be a regression from 6.2, we will have to tackle it at some other time.
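Since the report hinges on tt and rhevm sharing 10.66.8.0/22, here is a small pure-shell check (a sketch; the sample addresses are taken from the routing table in the description) for whether two addresses fall on the same subnet:

```shell
#!/bin/sh
# Sketch: detect whether two interface addresses land on the same subnet
# (the "bound to end in tears" configuration).

# Convert a dotted-quad address to a 32-bit integer.
ip2int() {
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

# Usage: same_subnet IP1 IP2 NETMASK
same_subnet() {
  ip1=$(ip2int "$1"); ip2=$(ip2int "$2"); mask=$(ip2int "$3")
  [ $(( ip1 & mask )) -eq $(( ip2 & mask )) ]
}

# Two hosts inside 10.66.8.0/255.255.252.0, as in this report:
if same_subnet 10.66.8.15 10.66.9.201 255.255.252.0; then
  echo "same subnet"
else
  echo "different subnets"
fi
# prints: same subnet
```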
Created attachment 584581 [details]
vdsm.log, getVdsCaps output, and ifconfig

(In reply to comment #5)
> could you supply the output of getVdsCaps and ifconfig from the host console
> (after networking is lost)? What happens if you `service network restart`?
> Would you provide vdsm.log with the relevant addNetwork calls?

Attached the vdsm.log, getVdsCaps output, and ifconfig.

> Having two network with the same subnet is bound to end up in tears. Since this
> does not seem to be a regression from 6.2, we would have to tackle it in some
> other time.
Sorry for the long delay in my response, but the ifconfig output was not actually attached. I am wondering whether eth1, which was attached to the tt network, was actually up and running and accepting communication from the selected subnet. If not, the outcome is quite expected, and this is not a bug.

Please re-open if you can reproduce this with a recent vdsm build (> 4.9.6-23), with tt on a different subnet than rhevm's and a working NIC.
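To check the "up and running" question above, a quick test against classic ifconfig output is to look for both the UP and RUNNING flags (a sketch; the here-doc mimics RHEL 6 ifconfig formatting, and on the host you would pipe `ifconfig eth1` in instead):

```shell
#!/bin/sh
# Sketch: check that a NIC is administratively UP and has link (RUNNING)
# in classic ifconfig output.
nic_up_and_running() {
  grep -q 'UP.*RUNNING'
}

# Sample output in RHEL 6 ifconfig style (illustrative, not captured
# from the affected host):
if nic_up_and_running <<'EOF'
eth1      Link encap:Ethernet  HWaddr 00:1A:2B:3C:4D:5E
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
EOF
then
  echo "eth1 is up and running"
else
  echo "eth1 is down or has no link"
fi
# prints: eth1 is up and running
```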