Description of problem:
If an interface (other than rhevm) which is assigned to a logical network goes down, the host does not become non-operational (waited an hour or so, tested more than once). In webadmin the affected interface is shown as down.

Version-Release number of selected component (if applicable):
oVirt Enterprise Virtualization Engine Manager Version: 3.1.0_0001-9.el6ev

How reproducible:
100%

Steps to Reproduce:
1. Create a new logical network
2. Assign the network to the host NIC
3. Activate the host
4. ssh/console to the host and run ifdown on the logical network bridge and its physical interface (e.g. ifdown NET1 && ifdown em2)

Actual results:
Host remains operational when it has an interface from a cluster network down

Expected results:
Host becomes non-operational when it has an interface from a cluster network down

Additional info:
I also tried unplugging the network cable from the switch to simulate this problem; the result was the same.
Created attachment 585195 [details] vdsm.log + engine.log
Created attachment 585196 [details] configuration of tested LN
Created attachment 585197 [details] host is up / NIC is seen as down
Update to comment 1: Expected result: the host becomes non-operational when an interface from a cluster network goes down.
Interface operational status after the ifdowns:

cat /sys/class/net/em2/operstate
down

The network was added as "required" (see http://lists.ovirt.org/wiki/Features/Design/Network/Required_Networks).
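The same operstate check can be run across every interface on the host at once; a minimal sketch, assuming the standard /sys/class/net layout shown above (the loop itself is an illustration, not part of the report's reproduction steps):

```shell
# Print the kernel-reported operational state of each NIC; a value of
# "down" for the bridge or its physical interface matches the state
# described in this report.
for nic in /sys/class/net/*; do
    printf '%s: %s\n' "${nic##*/}" "$(cat "$nic/operstate")"
done
```

On the affected host this should list both the logical network bridge and its physical NIC (e.g. NET1 and em2) as down after step 4 of the reproduction.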
Hi Martin, is the network a required network? By default it is non-required, and thus the host should not be moved to non-operational. Please note that an audit log message should be issued.
Hi Livnat, yes, the network was added as required (see comment 5). Has the default behavior for required networks changed recently? In my version (oVirt Enterprise Virtualization Engine Manager Version: 3.1.0_0001-11.el6ev), Clusters -> your cluster -> Logical Networks -> Add Network adds the new network as required (verified in the DB). To make sure, I reproduced the scenario again and attached the logs.
Created attachment 589068 [details] vdsm.log + engine.log second attempt
*** Bug 840438 has been marked as a duplicate of this bug. ***
BZ#840438 reproduces this issue with rhevm 3.1 and cluster 3.0.
A suggested patch: http://gerrit.ovirt.org/#/c/6557
commit-id 8be0ab769a6b9066e0201c3ed678873be25b6b98