Bug 1415748

Summary: VDSM fails to apply network changes when the management network is non-VM / VM

Product: [oVirt] ovirt-engine
Component: BLL.Network
Version: 4.1.0.2
Status: CLOSED CURRENTRELEASE
Severity: high
Priority: high
Hardware: Unspecified
OS: Unspecified
Reporter: Mor <mkalfon>
Assignee: Edward Haas <edwardh>
QA Contact: Meni Yakove <myakove>
CC: bugs, mburman, mkalfon, pkliczew, ylavi
Keywords: Automation, Regression
Flags: rule-engine: ovirt-4.1+, rule-engine: blocker+
Target Milestone: ovirt-4.1.1
Target Release: ---
oVirt Team: Network
Type: Bug
Doc Type: If docs needed, set a value
Last Closed: 2017-04-21 09:40:45 UTC

Attachments:
engine and vdsm logs

Description Mor 2017-01-23 16:08:42 UTC
Description of problem:
When moving a VDS from one DC to another and changing the ovirtmgmt network type from VM to non-VM, the engine throws the error: "(1/1): Failed to apply changes for network(s) ovirtmgmt on host". After some time the host becomes Up.

Version-Release number of selected component (if applicable):
Red Hat Virtualization Manager Version: 4.1.0.2-0.1.el7

How reproducible:
100%

Steps to Reproduce:
1. Create a new DC and cluster.
2. Set the VDS on the existing DC to maintenance mode.
3. Move the VDS to the new DC and cluster.
4. Try to edit the ovirtmgmt network on the new DC: clear the VM network checkbox.
5. Wait for the engine response.
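For reference, the checkbox change in step 4 corresponds to the network's "usages" collection in the oVirt REST API. A minimal sketch of the update payload, assuming a placeholder network ID, and assuming (per the API model, where a network carrying the "vm" usage is a VM network) that an empty usages element is accepted to clear the flag:

```
PUT /ovirt-engine/api/networks/{network_id}

<network>
  <!-- an empty usages element removes the "vm" usage, i.e. makes this a non-VM network -->
  <usages/>
</network>
```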

Actual results:
Error: "(1/1): Failed to apply changes for network(s) ovirtmgmt on host host_mixed_2. (User: admin@internal-authz)". After some time: "Status of host host_mixed_2 was set to Up."

Expected results:
The engine and VDSM should apply the changes without errors.

Additional info:

Comment 1 Mor 2017-01-23 16:11:13 UTC
Created attachment 1243646 [details]
engine and vdsm logs

Comment 2 Mor 2017-01-24 12:03:46 UTC
(In reply to Mor from comment #1)
> Created attachment 1243646 [details]
> engine and vdsm logs

Correction to step 3 --> "Add new VDS on DC and cluster."

In addition, this is also relevant for VM or non-VM networks (i.e. the change in either direction).

Comment 3 Red Hat Bugzilla Rules Engine 2017-01-24 13:12:27 UTC
This bug report has Keywords: Regression or TestBlocker.
Since no regressions or test blockers are allowed between releases, it is also being identified as a blocker for this release. Please resolve ASAP.

Comment 4 Edward Haas 2017-02-05 09:15:46 UTC
From the logs it looks like an RPC communication problem between the engine and VDSM.
It seems that the engine could not recover from the lost communication with the host (it took ~20 sec for DHCP to re-assign an address to the host), although application pings seem to have passed successfully (seen in vdsm.log).
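For context, the application pings mentioned above are VDSM JSON-RPC requests on the host channel. A hedged sketch of such a request, assuming Host.ping is the verb used for the connectivity check and with an arbitrary id value (a successful response echoes the same id):

```
{"jsonrpc": "2.0", "method": "Host.ping", "params": {}, "id": "1"}
```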

Several RPC issues have been solved recently (last two weeks or so) by Infra.
Please check whether it is reproducible on the latest version.

Comment 5 Meni Yakove 2017-02-05 09:46:18 UTC
rhevm-4.1.0.4-0.1.el7.noarch seems to solve this issue.
Move the bug to ON_QA and I will verify it.

Comment 6 Meni Yakove 2017-02-06 09:28:34 UTC
vdsm-jsonrpc-java-1.3.8-1.el7ev.noarch