Bug 1415748 - Vdsm fails to apply network changes when management network is non-VM / VM
Summary: Vdsm fails to apply network changes when management network is non-VM / VM
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: BLL.Network
Version: 4.1.0.2
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ovirt-4.1.1
Target Release: ---
Assignee: Edward Haas
QA Contact: Meni Yakove
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-01-23 16:08 UTC by Mor
Modified: 2017-04-21 09:40 UTC (History)
5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-04-21 09:40:45 UTC
oVirt Team: Network
rule-engine: ovirt-4.1+
rule-engine: blocker+


Attachments
engine and vdsm logs (259.29 KB, application/octet-stream)
2017-01-23 16:11 UTC, Mor

Description Mor 2017-01-23 16:08:42 UTC
Description of problem:
When moving a VDS from one DC to another and changing the ovirtmgmt network type from VM to non-VM, the engine throws the error: "(1/1): Failed to apply changes for network(s) ovirtmgmt on host". After some time, the host becomes Up.

Version-Release number of selected component (if applicable):
Red Hat Virtualization Manager Version: 4.1.0.2-0.1.el7

How reproducible:
100%

Steps to Reproduce:
1. Create new DC and cluster.
2. Set the VDS on the existing DC to maintenance mode.
3. Move the VDS to the new DC and cluster.
4. Edit the ovirtmgmt network on the new DC: unselect the "VM network" checkbox.
5. Wait for engine response.
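The network edit in step 4 can also be driven through the oVirt REST API. As a hedged sketch (the `<usages>`/`<usage>` payload shape follows the oVirt API's network resource; the target network ID is elided and the helper name is hypothetical), this builds the PUT body that toggles the "VM network" flag:

```python
import xml.etree.ElementTree as ET

def build_network_update(vm_network: bool) -> bytes:
    """Build the PUT body for /ovirt-engine/api/networks/<network_id>."""
    network = ET.Element("network")
    usages = ET.SubElement(network, "usages")
    if vm_network:
        # Listing the 'vm' usage marks the network as a VM network;
        # an empty <usages/> requests a non-VM network.
        ET.SubElement(usages, "usage").text = "vm"
    return ET.tostring(network)

if __name__ == "__main__":
    # Body for the VM -> non-VM change in step 4.
    print(build_network_update(vm_network=False).decode())
```

PUT-ing this body against the network resource (with proper authentication) is equivalent to unchecking the checkbox in the UI.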

Actual results:
Error: "(1/1): Failed to apply changes for network(s) ovirtmgmt on host host_mixed_2. (User: admin@internal-authz)". After some time: "Status of host host_mixed_2 was set to Up."

Expected results:
The Engine and VDSM should apply the changes without errors.

Additional info:

Comment 1 Mor 2017-01-23 16:11:13 UTC
Created attachment 1243646 [details]
engine and vdsm logs

Comment 2 Mor 2017-01-24 12:03:46 UTC
(In reply to Mor from comment #1)
> Created attachment 1243646 [details]
> engine and vdsm logs

Correction in step 3 --> "Add new VDS on DC and cluster."

In addition, this also applies in the other direction (non-VM to VM network).

Comment 3 Red Hat Bugzilla Rules Engine 2017-01-24 13:12:27 UTC
This bug report has Keywords: Regression or TestBlocker.
Since no regressions or test blockers are allowed between releases, it is also being identified as a blocker for this release. Please resolve ASAP.

Comment 4 Edward Haas 2017-02-05 09:15:46 UTC
From the logs it looks like an RPC communication problem between the Engine and VDSM.
It seems that the Engine could not recover from the lost communication with the host (DHCP took ~20 sec to re-assign an address for the host), although application pings appear to have succeeded (seen in vdsm.log).

Several RPC issues have been fixed recently (in the last two weeks or so) by the Infra team.
Please check whether this is still reproducible on the latest version.
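The recovery window described here (the host unreachable for ~20 sec while DHCP re-assigns its address) can be illustrated with a generic retry-until-deadline loop: a single short probe fails, while repeated probes under a deadline longer than the gap succeed. This is an illustrative sketch, not Engine code; all names are hypothetical:

```python
import time

def wait_until_up(probe, deadline_s: float, interval_s: float = 1.0) -> bool:
    """Retry `probe` until it returns True or `deadline_s` elapses."""
    start = time.monotonic()
    while time.monotonic() - start < deadline_s:
        if probe():
            return True
        time.sleep(interval_s)
    return False

class FakeHost:
    """Simulates a host that only answers after a few probes."""
    def __init__(self, up_after: int):
        self.calls = 0
        self.up_after = up_after
    def probe(self) -> bool:
        self.calls += 1
        return self.calls >= self.up_after
```

With a deadline shorter than the outage the caller gives up and reports failure, which matches the "Failed to apply changes" error followed later by "host was set to Up".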

Comment 5 Meni Yakove 2017-02-05 09:46:18 UTC
rhevm-4.1.0.4-0.1.el7.noarch seems to solve this issue.
Please move the bug to ON_QA and I will verify it.

Comment 6 Meni Yakove 2017-02-06 09:28:34 UTC
vdsm-jsonrpc-java-1.3.8-1.el7ev.noarch

