Description of problem:
When moving a VDS (host) from one DC to another and changing the ovirtmgmt network type from VM to non-VM, the engine throws the error: "(1/1): Failed to apply changes for network(s) ovirtmgmt on host". After some time the host becomes Up.

Version-Release number of selected component (if applicable):
Red Hat Virtualization Manager Version: 4.1.0.2-0.1.el7

How reproducible:
100%

Steps to Reproduce:
1. Create a new DC and cluster.
2. Set the VDS on the existing DC to maintenance mode.
3. Move the VDS to the new DC and cluster.
4. Try to edit ovirtmgmt on the new DC: unselect the "VM network" checkbox.
5. Wait for the engine response.

Actual results:
Error: "(1/1): Failed to apply changes for network(s) ovirtmgmt on host host_mixed_2. (User: admin@internal-authz)". After some time: "Status of host host_mixed_2 was set to Up."

Expected results:
The engine and VDSM should apply the changes without errors.

Additional info:
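For clarity, the reproduction steps above can be sketched as the sequence of oVirt v4 REST API calls a client would issue. This is a minimal, hypothetical sketch: it only composes the request descriptions without sending anything, and the engine URL, the IDs, and the helper name are illustrative assumptions, not taken from this report.

```python
# Illustrative sketch of the reproduction flow against the oVirt v4 REST API.
# Nothing here contacts a real engine; it only builds request descriptions.
# The base URL, all IDs, and the helper itself are hypothetical.

BASE = "https://engine.example.com/ovirt-engine/api"  # assumed engine URL

def repro_requests(host_id, cluster_id, network_id):
    """Return the ordered REST calls for the reproduction steps."""
    return [
        # Step 2: put the host into maintenance before moving it.
        ("POST", f"{BASE}/hosts/{host_id}/deactivate", "<action/>"),
        # Step 3 (per the correction in comment #1, the host is added to the
        # new DC/cluster; shown here as a cluster update for brevity).
        ("PUT", f"{BASE}/hosts/{host_id}",
         f"<host><cluster id='{cluster_id}'/></host>"),
        # Step 4: drop the 'vm' usage so ovirtmgmt becomes a non-VM network.
        ("PUT", f"{BASE}/networks/{network_id}",
         "<network><usages/></network>"),
    ]

if __name__ == "__main__":
    for method, url, body in repro_requests("h1", "c2", "n3"):
        print(method, url, body)
```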
Created attachment 1243646 [details] engine and vdsm logs
(In reply to Mor from comment #1)
> Created attachment 1243646 [details]
> engine and vdsm logs

Correction to step 3 --> "Add a new VDS to the DC and cluster."
In addition, this is relevant to both VM and non-VM networks.
This bug report has Keywords: Regression or TestBlocker. Since no regressions or test blockers are allowed between releases, it is also being identified as a blocker for this release. Please resolve ASAP.
From the logs it looks like an RPC communication problem between the engine and VDSM. It seems the engine could not recover from the lost communication with the host (it took ~20 sec for DHCP to re-assign an address to the host), although application pings appear to have passed successfully (as seen in vdsm.log). Several RPC issues have been resolved recently (in the last two weeks or so) by the Infra team. Please check whether this is reproducible on the latest version.
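The ~20 sec DHCP re-assignment window described above is essentially a "wait until the host answers again" problem with a bounded deadline. Below is a minimal sketch of such a bounded retry loop; the probe callback and the timing parameters are assumptions for illustration, not engine code.

```python
import time

def wait_until_up(probe, timeout=30.0, interval=2.0,
                  clock=time.monotonic, sleep=time.sleep):
    """Poll probe() until it returns True or `timeout` seconds elapse.

    `probe` stands in for an application-level ping to the host; the
    engine-side recovery logic is far more involved, this only
    illustrates the bounded-wait pattern.
    """
    deadline = clock() + timeout
    while clock() < deadline:
        if probe():
            return True  # host answered within the deadline
        sleep(interval)
    return False  # deadline passed; caller should report the failure
```

The `clock` and `sleep` parameters are injected so the loop can be exercised deterministically in tests instead of sleeping for real.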
rhevm-4.1.0.4-0.1.el7.noarch seems to solve this issue. Please move the bug to ON_QA and I will verify it.
vdsm-jsonrpc-java-1.3.8-1.el7ev.noarch