Bug 1194553
| Summary: | VDSM script resets network configuration on every reboot when based on predefined bond | | |
| --- | --- | --- | --- |
| Product: | Red Hat Enterprise Virtualization Manager | Reporter: | Pavel Zhukov <pzhukov> |
| Component: | vdsm | Assignee: | Petr Horáček <phoracek> |
| Status: | CLOSED ERRATA | QA Contact: | Michael Burman <mburman> |
| Severity: | urgent | Docs Contact: | |
| Priority: | urgent | | |
| Version: | 3.5.0 | CC: | aleksandr.bembel, asegurap, audgiri, bazulay, bugs, danken, eedri, fdeutsch, gwatson, iheim, ldelouw, lpeer, lsurette, mburman, meverett, mgoldboi, mkalinin, myakove, phoracek, pmukhedk, pzhukov, rbalakri, rhodain, rmcswain, troels, ycui, yeylon, ykaul, ylavi |
| Target Milestone: | ovirt-3.6.0-rc | Keywords: | Reopened, ZStream |
| Target Release: | 3.6.0 | Flags: | ylavi: Triaged+ |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | v4.16.12.1 | Doc Type: | Bug Fix |
| Doc Text: | (see below) | Story Points: | --- |
| Clone Of: | 1154399 | | |
| : | 1205711 (view as bug list) | Environment: | |
| Last Closed: | 2016-03-09 19:31:42 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | Network | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1154399, 1213842 | | |
| Bug Blocks: | 1205711 | | |

Doc Text:

ifcfg-bond* devices defined outside of VDSM (manually or via the RHEV-H TUI) are removed during the upgrade of VDSM to 3.5.0. There is currently no complete fix; the simplest workaround is to re-define the bond devices via VDSM. For example:

```shell
vdsClient -s 0 setupNetworks bondings='{bond11:{nics:p1p3,p1p4}}'
vdsClient -s 0 setSafeNetworkConfig
```

Alternatively, the bond devices can be re-defined via the engine. After the upgrade, the created bond11 is persisted in VDSM's /var/lib/vdsm/persistence/netconf/bonds/bond11 and is available on reboot.
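The Doc Text workaround works because VDSM only restores networks whose definitions exist in its persistence directory. A minimal sketch of checking for the persisted bond file; the demo directory is a hypothetical stand-in for /var/lib/vdsm/persistence/netconf/bonds so the snippet can run anywhere, and the JSON payload is illustrative, not VDSM's exact on-disk schema:

```shell
# Hypothetical stand-in for /var/lib/vdsm/persistence/netconf/bonds.
persist_dir="${TMPDIR:-/tmp}/vdsm-netconf-demo/bonds"
mkdir -p "$persist_dir"

# After `vdsClient ... setupNetworks` plus `setSafeNetworkConfig`, a file
# named after the bond appears in the persistence directory. The content
# below is an illustrative guess at the format, not VDSM's exact schema.
printf '{"nics": ["p1p3", "p1p4"]}\n' > "$persist_dir/bond11"

# A bond survives reboot only if its definition file is present here.
if [ -f "$persist_dir/bond11" ]; then
  bond_state="persisted"
else
  bond_state="missing"
fi
echo "bond11: $bond_state"
```

On a real 3.5.x host the equivalent check is simply listing /var/lib/vdsm/persistence/netconf/bonds/ after running the workaround.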
Description (Pavel Zhukov, 2015-02-20 07:37:36 UTC)
*** Bug 1194267 has been marked as a duplicate of this bug. ***

Pavel, what's `chkconfig | grep network` on the affected hosts? I heard a report that `chkconfig network on` makes the problem go away. Could you verify that?

(In reply to Dan Kenigsberg from comment #2)
> Pavel, what's `chkconfig | grep network` on the affected hosts?
> I heard a report that `chkconfig network on` makes the problem go away.
> Could you verify that?

It seems to be active already:

network 0:off 1:off 2:on 3:on 4:on 5:on 6:off

For RHEL hosts we can work around this by using ifcfg as the persistence store. In that case VDSM does not complain about manually created bond devices.

@Pavel, could you grant me access to your machine? I would like to check logs, versions, etc. It would then be great if we could try to set the machine up to its pre-upgrade state and upgrade it again with some changes or extra logging.

Sorry, Roman - I did not refresh my browser and did not see your recent update when I moved this bug to MODIFIED. I do not understand your report. This bug is about ifcfg-bond* files not existing upon upgrade. Is this the case with your reproduction? Can you attach the post-boot supervdsm.log? There may well be more issues regarding network upgrade on the node; I am not sure that what you describe is the problem I am trying to solve. I suspect that the other problems we see are related to the fact that ovirt-node restarts networking while VDSM is starting up and performing the network configuration upgrade. Hence this bug can go back to ON_QA.

Hi Yaniv, please help sort out the Fixed In Version and Target Release for this BZ, because they are definitely in a mess here. Thank you.

Please provide the info needed in comment #23.

The backported patch has not passed QA; see bug #1205711: https://bugzilla.redhat.com/show_bug.cgi?id=1205711#c10

Any updates on this and the QA process since last week?

Yes, please see the clone of this bug for the updates.

Petr, can this bug get moved to MODIFIED as well?
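The `chkconfig` exchange above boils down to a simple check: the legacy network service must be "on" at the multi-user runlevels for ifcfg-* files to be applied at boot. A minimal runnable sketch, assuming the RHEL 6 SysV output format quoted in the reply (on a real host you would feed it the actual `chkconfig --list network` line instead of the hard-coded sample):

```shell
# Sample line, copied from the reply above (RHEL 6 SysV runlevel format).
line='network         0:off 1:off 2:on 3:on 4:on 5:on 6:off'

# The service must be "on" at the multi-user runlevels (3 and 5); if it is
# not, `chkconfig network on` is the suggested remedy.
case "$line" in
  *'3:on'*'5:on'*) net_state="enabled" ;;
  *)               net_state="disabled" ;;
esac
echo "network service: $net_state"
```

As the reply shows, the service was already enabled on the affected host, so the problem was not an unenabled network service.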
I hope I was requested for info by mistake... The relevant patches are merged.

How is this BZ ON_QA? Do we have a RHEV-H 3.6.0 build?

(In reply to Michael Burman from comment #32)
> How this BZ is ON_QA?
>
> Do we have a rhev-h 3.6.0 ?

This affects RHEL as well; the RHEV-H build should arrive soon.

Can we verify this on RHEL, or do we need to wait for the 3.6 RHEV-H? If we need to wait for RHEV-H, please remove the ON_QA from the bug.

(In reply to Meni Yakove from comment #34)
> Can we verify this on RHEL or we need to wait for 3.6 RHEV-H?
> If we need to wait for RHEV-H please remove the ON_QA from the bug.

You need to VERIFY on both.

Verified on 3.6.0.3-0.1.el6 and:

- Red Hat Enterprise Virtualization Hypervisor release 7.2 (20151104.0.el7ev)
- vdsm-4.17.10.1-0.el7ev.noarch
- ovirt-node-3.6.0-0.20.20151103git3d3779a.el7ev.noarch

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0362.html