Bug 875487 - 3.2 Failed to break BOND and attach custom MTU networks while VM is running
Status: CLOSED ERRATA
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: vdsm
Version: 3.2.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 3.2.0
Assigned To: lpeer
QA Contact: Meni Yakove
Whiteboard: network
Depends On:
Blocks: 862559 888315 915537
 
Reported: 2012-11-11 10:43 EST by Meni Yakove
Modified: 2016-02-10 14:49 EST
CC List: 10 users

See Also:
Fixed In Version: vdsm-4.10.2-2.0
Doc Type: Bug Fix
Doc Text:
Previously, it was not possible to break a bond and attach custom MTU networks to virtual machines while those virtual machines were running. With vdsm-4.10.2-2.0, it is possible to break a bond and attach custom MTU networks to virtual machines while they are running.
Story Points: ---
Clone Of:
Clones: 888315
Environment:
Last Closed: 2013-06-10 16:34:20 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Network


Attachments
vdsm.log (9.71 MB, text/x-log)
2012-11-11 10:44 EST, Meni Yakove

Description Meni Yakove 2012-11-11 10:43:38 EST
Description of problem:
Network vm_mtu_9000 is attached to bond0, and VM1 is running with this network attached to it.
When I try to break bond0 and attach vm_mtu_9000 to an interface, I get this error:

MainProcess|Thread-205192::ERROR::2012-11-11 13:45:57,309::configNetwork::1373::setupNetworks::(setupNetworks) (23, "delNetwork: ['eth2', 'eth3', 'vnet1'] are not all nics enslaved to bond0")
Traceback (most recent call last):
  File "/usr/share/vdsm/configNetwork.py", line 1315, in setupNetworks
    implicitBonding=False)
  File "/usr/share/vdsm/configNetwork.py", line 1068, in delNetwork
    (nics, bonding))
ConfigNetworkError: (23, "delNetwork: ['eth2', 'eth3', 'vnet1'] are not all nics enslaved to bond0")
MainProcess|Thread-205192::ERROR::2012-11-11 13:45:57,318::supervdsmServer::68::SuperVdsm.ServerCallback::(wrapper) Error in setupNetworks
Traceback (most recent call last):
  File "/usr/share/vdsm/supervdsmServer.py", line 66, in wrapper
    return func(*args, **kwargs)
  File "/usr/share/vdsm/supervdsmServer.py", line 118, in setupNetworks
    return configNetwork.setupNetworks(networks, bondings, **options)
  File "/usr/share/vdsm/configNetwork.py", line 1315, in setupNetworks
    implicitBonding=False)
  File "/usr/share/vdsm/configNetwork.py", line 1068, in delNetwork
    (nics, bonding))
ConfigNetworkError: (23, "delNetwork: ['eth2', 'eth3', 'vnet1'] are not all nics enslaved to bond0")
Thread-205192::ERROR::2012-11-11 13:45:57,319::API::1138::vds::(setupNetworks) delNetwork: ['eth2', 'eth3', 'vnet1'] are not all nics enslaved to bond0
Traceback (most recent call last):
  File "/usr/share/vdsm/API.py", line 1136, in setupNetworks
    supervdsm.getProxy().setupNetworks(networks, bondings, options)
  File "/usr/share/vdsm/supervdsm.py", line 69, in __call__
    return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 60, in <lambda>
    **kwargs)
  File "<string>", line 2, in setupNetworks
  File "/usr/lib64/python2.6/multiprocessing/managers.py", line 740, in _callmethod
    raise convert_to_error(kind, result)
ConfigNetworkError: (23, "delNetwork: ['eth2', 'eth3', 'vnet1'] are not all nics enslaved to bond0")
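
For reference, the failure comes from a sanity check in delNetwork() that compares the NIC list it receives against the bond's actual slaves; because the VM's tap device (vnet1) ends up in that list, the check trips. A minimal sketch of that kind of check follows; it is illustrative only, not the actual vdsm configNetwork.py code, and the helper names are assumptions:

    class ConfigNetworkError(Exception):
        pass

    def bond_slaves(bond):
        # Slaves as the kernel reports them, e.g. ['eth2', 'eth3'] for bond0.
        with open('/sys/class/net/%s/bonding/slaves' % bond) as f:
            return f.read().split()

    def check_del_network(nics, bonding):
        # In this bug, nics is ['eth2', 'eth3', 'vnet1'], but 'vnet1' is the
        # running VM's tap device, not a slave of bond0, so the check fails.
        if not set(nics).issubset(set(bond_slaves(bonding))):
            raise ConfigNetworkError(
                23, "delNetwork: %s are not all nics enslaved to %s"
                % (nics, bonding))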



Version-Release number of selected component (if applicable):
vdsm-4.9.6-41.0.el6_3.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Create a bond and attach a network with a custom MTU (vm_mtu_9000).
2. Create a VM with network vm_mtu_9000.
3. Open Setup Networks and try to break the bond.
  
Actual results:
Fails with the exception above.
Comment 1 Meni Yakove 2012-11-11 10:44:50 EST
Created attachment 642902 [details]
vdsm.log
Comment 2 Meni Yakove 2012-11-11 10:46:38 EST
Steps to Reproduce:
1. Create a bond and attach a network with a custom MTU (vm_mtu_9000).
2. Create a VM with network vm_mtu_9000 and start the VM.
3. Open Setup Networks and try to break the bond while the VM is running (the resulting setupNetworks call is sketched below).
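
Breaking the bond in the Setup Networks dialog boils down to a single setupNetworks call to vdsm. A rough sketch of what that request looks like; the key names are illustrative assumptions, not copied from the engine:

    # Roughly what the engine sends when the bond is broken and the
    # network is reattached to a single NIC (key names are assumptions).
    networks = {'vm_mtu_9000': {'nic': 'eth2',       # move the network to one NIC
                                'bridged': True,
                                'mtu': 9000}}        # the network's custom MTU
    bondings = {'bond0': {'remove': True}}           # break the bond
    options = {'connectivityCheck': True}

    # supervdsm then runs, approximately:
    #     configNetwork.setupNetworks(networks, bondings, **options)
    # which deletes the old definition through delNetwork(), where the
    # traceback in the description is raised.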
Comment 4 Igor Lvovsky 2012-11-21 07:01:33 EST
http://gerrit.ovirt.org/#/c/9384
Comment 5 Igor Lvovsky 2012-11-22 08:20:15 EST
The current solution still does not allow breaking the bond, but it raises a proper error and leaves the state as is.
In the future we might consider allowing the bond to be broken, which would disconnect the VM from the external world.
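
In other words, the fix rejects the operation up front with a clear message instead of failing halfway through. A sketch of that kind of guard, based on the error text seen during verification in comment 9; the names and data structure are illustrative, not the actual engine/vdsm code:

    def verify_network_not_in_use(network, running_vm_networks):
        # running_vm_networks: mapping of running VM name -> set of logical
        # networks its vNICs use (illustrative data structure).
        users = sorted(vm for vm, nets in running_vm_networks.items()
                       if network in nets)
        if users:
            raise RuntimeError(
                "Cannot setup Networks. The following VMs are actively using "
                "the Logical Network: %s. Please stop the VMs and try again."
                % ', '.join(users))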
Comment 9 Meni Yakove 2013-01-07 04:12:16 EST
Verified on:
vdsm-4.10.2-3.0.el6ev.x86_64
rhevm-3.2.0-4.el6ev.noarch


Error:

navy-vds3:

    Cannot setup Networks. The following VMs are actively using the Logical Network: vm32. Please stop the VMs and try again.

The proper error is returned.
Comment 12 Cheryn Tan 2013-04-03 03:02:25 EDT
This bug is currently attached to errata RHBA-2012:14332. If this change is not to be documented in the text for this errata, please either remove it from the errata, set the requires_doc_text flag to minus (-), or leave a "Doc Text" value of "--no tech note required" if you do not have permission to alter the flag.

Otherwise to aid in the development of relevant and accurate release documentation, please fill out the "Doc Text" field above with these four (4) pieces of information:

* Cause: What actions or circumstances cause this bug to present.

* Consequence: What happens when the bug presents.

* Fix: What was done to fix the bug.

* Result: What now happens when the actions or circumstances above occur. (NB: this is not the same as 'the bug doesn't present anymore')

Once filled out, please set the "Doc Type" field to the appropriate value for the type of change made and submit your edits to the bug.

For further details on the Cause, Consequence, Fix, Result format please refer to:

https://bugzilla.redhat.com/page.cgi?id=fields.html#cf_release_notes

Thanks in advance.
Comment 14 errata-xmlrpc 2013-06-10 16:34:20 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHSA-2013-0886.html
