Bug 1224778 - Default gateway not restored after management bridge creation
Summary: Default gateway not restored after management bridge creation
Keywords:
Status: CLOSED DUPLICATE of bug 1231799
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-hosted-engine-setup
Version: 3.5.1
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ovirt-3.6.1
Target Release: 3.6.1
Assignee: Simone Tiraboschi
QA Contact: Meni Yakove
URL:
Whiteboard: integration
Depends On:
Blocks:
 
Reported: 2015-05-25 17:14 UTC by Amador Pahim
Modified: 2016-01-21 14:50 UTC
CC List: 10 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-11-17 12:35:13 UTC
oVirt Team: ---
Target Upstream Version:
Embargoed:


Attachments
ovirt-hosted-engine-setup.log (249.12 KB, text/plain)
2015-05-25 17:17 UTC, Amador Pahim

Description Amador Pahim 2015-05-25 17:14:43 UTC
Description of problem:

When deploying the first hosted-engine host, the default gateway is not restored after the management bridge is created. As a consequence, the deployment fails due to the missing route to the NFS storage, which is on a different network.
This issue was observed when using a management bridge name other than "rhevm", set through the option "OVEHOSTED_NETWORK/bridgeName=str:br0" in the answer file (a minimal fragment is shown below).
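
For reference, a minimal answer-file fragment with this option might look like the following. The bridgeName line is taken from this report; the [environment:default] section header is just the usual otopi answer-file layout and is included here as an assumption:

[environment:default]
# use a custom management bridge name instead of the default "rhevm"
OVEHOSTED_NETWORK/bridgeName=str:br0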


Pre-deploy:


[root@host1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE="eth0"
BOOTPROTO="static"
DNS1=192.168.122.1
GATEWAY=192.168.122.1
IPADDR=192.168.122.207
IPV6INIT="no"
MTU="1500"
NETMASK="255.255.255.0"
NM_CONTROLLED="no"
ONBOOT="yes"
TYPE="Ethernet"


[root@host1 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 eth0
0.0.0.0         192.168.122.1   0.0.0.0         UG    0      0        0 eth0

[root@host1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-br0
cat: /etc/sysconfig/network-scripts/ifcfg-br0: No such file or directory

[root@host1 ~]# brctl show
bridge name     bridge id               STP enabled     interfaces
[root@host1 ~]#


Deploy:

[root@host1 ~]# hosted-engine --deploy --config-append=/tmp/answers.conf 
...
          --== CONFIGURATION PREVIEW ==--

          Bridge interface                   : eth0
          Engine FQDN                        : hostedengine.pahim.org
          Bridge name                        : br0
          SSH daemon port                    : 22
          Firewall manager                   : iptables
          Gateway address                    : 192.168.122.1
          Host name for web application      : host1.pahim.org
          Host ID                            : 1
          Image alias                        : hosted_engine
          Image size GB                      : 25
          Storage connection                 : 192.168.25.118:/nfs/hostedengine
          Console type                       : vnc
          Memory size MB                     : 2048
          MAC address                        : 00:16:3e:78:23:42
          Boot type                          : cdrom
          Number of CPUs                     : 2
          ISO image (for cdrom boot)         : /tmp/RHEL6.4-20130130.0-Server-x86_64-DVD1.iso
          CPU Type                           : model_SandyBridge
[ INFO  ] Stage: Transaction setup
[ INFO  ] Stage: Misc configuration
[ INFO  ] Stage: Package installation
[ INFO  ] Stage: Misc configuration
[ INFO  ] Configuring libvirt
[ INFO  ] Configuring VDSM
[ INFO  ] Starting vdsmd
[ INFO  ] Waiting for VDSM hardware info
[ INFO  ] Configuring the management bridge
[ ERROR ] Failed to execute stage 'Misc configuration': Connection to storage server failed
[ INFO  ] Stage: Clean up
[ INFO  ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20150525135306.conf'
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
...


vdsm.log:

Thread-15::DEBUG::2015-05-25 13:53:00,567::BindingXMLRPC::1133::vds::(wrapper) client [127.0.0.1]::call setupNetworks with ({'br0': {'nic': 'eth0', 'netmask': '255.255.255.0', 'bootproto': 'static', 'ipaddr': '192.168.122.207', 'gateway': '192.168.122.1'}}, {}, {'connectivityCheck': False}) {}
Thread-15::DEBUG::2015-05-25 13:53:05,435::BindingXMLRPC::1140::vds::(wrapper) return setupNetworks with {'status': {'message': 'Done', 'code': 0}}

[root@host1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:bd:cd:e7 brd ff:ff:ff:ff:ff:ff
5: ;vdsmdummy;: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN 
    link/ether ee:33:dc:74:04:52 brd ff:ff:ff:ff:ff:ff
6: bond0: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop state DOWN 
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
8: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN 
    link/ether 52:54:00:bd:cd:e7 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.207/24 brd 192.168.122.255 scope global br0
    inet6 fe80::5054:ff:febd:cde7/64 scope link 
       valid_lft forever preferred_lft forever


[root@host1 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 br0
169.254.0.0     0.0.0.0         255.255.0.0     U     1008   0        0 br0

[root@host1 ~]# cat /var/lib/vdsm/persistence/netconf/nets/br0 
{"nic": "eth0", "netmask": "255.255.255.0", "bootproto": "static", "ipaddr": "192.168.122.207", "gateway": "192.168.122.1"}

[root@host1 ~]# cat /var/run/vdsm/netconf/nets/br0 
{"nic": "eth0", "netmask": "255.255.255.0", "bootproto": "static", "ipaddr": "192.168.122.207", "gateway": "192.168.122.1"}

[root@host1 ~]# brctl show
bridge name     bridge id               STP enabled     interfaces
;vdsmdummy;             8000.000000000000       no
br0             8000.525400bdcde7       no              eth0
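
Until the setup tool restores it, the missing default route can be re-added by hand as a temporary workaround. This is only a sketch using the gateway address from the persisted config above, not part of any official fix:

[root@host1 ~]# ip route add default via 192.168.122.1 dev br0

Running route -n again should then show the 0.0.0.0 / UG entry on br0.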


Version-Release number of selected component (if applicable):

ovirt-hosted-engine-setup-1.2.2-3.el6ev.noarch
ovirt-hosted-engine-ha-1.2.5-1.el6ev.noarch
vdsm-4.16.13.1-1.el6ev.x86_64


Additional info:

Attaching setup log file.

Comment 1 Amador Pahim 2015-05-25 17:17:33 UTC
Created attachment 1029545 [details]
ovirt-hosted-engine-setup.log

Comment 3 Yaniv Lavi 2015-06-03 10:58:16 UTC
Adding a custom-named management network is not supported in 3.5. It will be added in 3.6.0. Moving to VERIFY on 3.6.

Comment 8 Simone Tiraboschi 2015-09-29 15:29:55 UTC
Current status on 3.6 rc1 upstream:
The user can run hosted-engine-setup passing an answer file that sets a custom name for the management bridge.
hosted-engine-setup will create the bridge with that name.
The default gateway is correctly set regardless of the management bridge name.

The oVirt engine on the engine VM tries by default to use a logical network called ovirtmgmt, so it then tries to bind it to a bridge with that name.
Because it cannot find such a bridge, the host fails to come up.

hosted-engine-setup then detects that the host is in a non-operational state and advises the user to fix it manually in the engine before retrying.

 [ INFO  ] Waiting for the host to become operational in the engine. This may take several minutes...
 [ INFO  ] Still waiting for VDSM host to become operational...
           The host hosted_engine_1 is in non-operational state.
           Please try to activate it via the engine webadmin UI.
           Retry checking host status or ignore this and continue (Retry, Ignore)[Retry]? retry

ovirt-hosted-engine-setup doesn't provide any helpful hint to the user on this point.

If the user manually:
- connects to the engine webadmin UI
- renames the management network for the selected datacenter
- activates the host
then the host comes up in the engine and ovirt-hosted-engine-setup can continue.

Comment 9 Simone Tiraboschi 2015-09-29 15:37:12 UTC
Danken, if we know on the hosted-engine side that the user wants to use a custom name for the management network on a specific cluster, is there any API to programmatically tell the engine to rename that management network to the desired value?

Comment 10 Yaniv Lavi 2015-10-20 11:53:43 UTC
Can you please check the status of this ticket? The customer ticket doesn't seem to be related to 3.6.

Comment 11 Sandro Bonazzola 2015-10-29 10:37:12 UTC
Removing the customer case since it's unrelated to 3.6

Comment 13 Yevgeny Zaspitsky 2015-11-03 13:26:50 UTC
Setting the management network is possible at the cluster level, either during cluster creation or later on by updating the cluster, provided the cluster is empty (no hosts attached to it).

When creating/updating a cluster through the REST API, a "management_network" element can be supplied in order to specify the desired management network. Please note that the desired management network should be created in the DC prior to creating/updating the cluster.

Here are the relevant URLs:
Create network: POST ovirt-engine/api/datacenters/{datacenter:id}/networks
Create cluster: POST ovirt-engine/api/datacenters/{datacenter:id}/clusters
Update cluster: PUT ovirt-engine/api/datacenters/{datacenter:id}/clusters/{cluster:id}

For the network creation URL a "network" element should be supplied, and for the cluster-related URLs a "cluster" element.

You can obtain the schema that defines the XML structure to be supplied with these URLs by accessing ovirt-engine/api?schema; the RSDL is available at ovirt-engine/api?rsdl.
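
As an illustration only, a cluster-creation request carrying the "management_network" element might look like the curl call below. The engine host name, credentials, cluster name and the exact XML fields are assumptions for this sketch, and {datacenter:id} must be replaced with the actual datacenter UUID; verify the real structure against ovirt-engine/api?schema:

curl -k -u 'admin@internal:password' \
     -H 'Content-Type: application/xml' \
     -d '<cluster><name>MyCluster</name><management_network><name>br0</name></management_network></cluster>' \
     'https://engine.example.com/ovirt-engine/api/datacenters/{datacenter:id}/clusters'

The same pattern applies to the network creation URL, where a "network" element is posted instead.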

Comment 14 Yaniv Lavi 2015-11-17 12:35:13 UTC

*** This bug has been marked as a duplicate of bug 1231799 ***

