Bug 1396316 - Cannot find network 'ovirtmgmt'
Summary: Cannot find network 'ovirtmgmt'
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: Host-Deploy
Version: 4.0.5.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assignee: Sandro Bonazzola
QA Contact: Pavel Stehlik
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-11-18 00:28 UTC by marcus young
Modified: 2016-11-21 13:49 UTC
CC List: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-11-21 12:10:43 UTC
oVirt Team: Network


Attachments
ss of console (729.96 KB, image/png), 2016-11-18 00:28 UTC, marcus young

Description marcus young 2016-11-18 00:28:26 UTC
Created attachment 1221683 [details]
ss of console

Description of problem:

After losing my VM host, I got a new server.
With oVirt 3 and my original install of oVirt 4.0.2 I did not have any issues.

I've tried 4.0.2, 4.0.4, and the most recent 4.0.5, and I cannot get networking up at all.

The install goes without a hitch. Everything comes up fine, but when I go to install the Host, it complains that it cannot find the network 'ovirtmgmt'.

The bridge is there and appears to work, so I'm unsure what steps to take next.

Version-Release number of selected component (if applicable):
ovirt-engine-4.0.5.5-1.el7.centos (also reproduced with 4.0.2 and 4.0.4)

How reproducible:
Multiple installs, both from the live CD and from the yum repository on CentOS 7

Steps to Reproduce:
1. Install via yum (a sketch of this install path follows below)
2. Go to Hosts -> install host -> fqdn of host
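
For reference, a minimal sketch of the step-1 install path, assuming the standard oVirt 4.0 release package (ovirt-release40, which does show up in the rpm list below); the repository URL is quoted from memory of the oVirt 4.0 install docs and may differ:

  # enable the oVirt 4.0 repository, then install and configure the engine
  yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release40.rpm
  yum install -y ovirt-engine
  engine-setup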

Actual results:

Failed to find network 'ovirtmgmt'

Expected results:

Host installation succeeds and bridged networking is available

Additional info:

As much relevant info as I can think of:

[myoung@server ~]$ cat /etc/sysconfig/network-scripts/ifcfg-em1
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=em1
UUID=a6beda1e-adf1-4c82-8c34-6d6d67ad5c16
DEVICE=em1
ONBOOT=yes
BRIDGE=ovirtmgmt
NM_CONTROLLED=no

[myoung@server ~]$ cat /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt
TYPE=Bridge
BOOTPROTO=static
IPADDR=192.168.2.125
GATEWAY=192.168.2.1
NETMASK=255.255.255.0
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
NAME=ovirtmgmt
DEVICE=ovirtmgmt
ONBOOT=yes
NM_CONTROLLED=no

[myoung@server ~]$ cat /etc/networks
default 0.0.0.0
loopback 127.0.0.0
link-local 169.254.0.0

[myoung@server ~]$ ip route
default via 192.168.2.1 dev ovirtmgmt
169.254.0.0/16 dev ovirtmgmt  scope link  metric 1004
192.168.2.0/24 dev ovirtmgmt  proto kernel  scope link  src 192.168.2.125

[myoung@server ~]$ ifconfig
em1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::5e26:aff:fe01:d547  prefixlen 64  scopeid 0x20<link>
        ether 5c:26:0a:01:d5:47  txqueuelen 1000  (Ethernet)
        RX packets 583962  bytes 764098535 (728.7 MiB)
        RX errors 0  dropped 217  overruns 0  frame 0
        TX packets 298384  bytes 44329990 (42.2 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device interrupt 20  memory 0xe9600000-e9620000

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 398126  bytes 151285014 (144.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 398126  bytes 151285014 (144.2 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ovirtmgmt: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.2.125  netmask 255.255.255.0  broadcast 192.168.2.255
        inet6 2601:482:4300:5d:5e26:aff:fe01:d547  prefixlen 64
scopeid 0x0<global>
        inet6 fe80::5e26:aff:fe01:d547  prefixlen 64  scopeid 0x20<link>
        ether 5c:26:0a:01:d5:47  txqueuelen 0  (Ethernet)
        RX packets 535807  bytes 705309023 (672.6 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 267939  bytes 40855622 (38.9 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0


[myoung@server ~]$ route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         router.asus.com 0.0.0.0         UG    0      0        0 ovirtmgmt
link-local      0.0.0.0         255.255.0.0     U     1004   0        0 ovirtmgmt
192.168.2.0     0.0.0.0         255.255.255.0   U     0      0        0 ovirtmgmt

[myoung@server ~]$ sudo systemctl status iptables
● iptables.service - IPv4 firewall with iptables
   Loaded: loaded (/usr/lib/systemd/system/iptables.service; enabled; vendor preset: disabled)
   Active: inactive (dead) since Thu 2016-11-17 18:11:53 CST; 4min 33s ago
  Process: 28682 ExecStop=/usr/libexec/iptables/iptables.init stop (code=exited, status=0/SUCCESS)
 Main PID: 21809 (code=exited, status=0/SUCCESS)


[myoung@server ~]$ sudo systemctl status NetworkManager
● NetworkManager.service - Network Manager
   Loaded: loaded (/usr/lib/systemd/system/NetworkManager.service; disabled; vendor preset: enabled)
   Active: inactive (dead) since Thu 2016-11-17 14:59:48 CST; 3h 16min ago
 Main PID: 5208 (code=exited, status=0/SUCCESS)


[myoung@server ~]$ sudo systemctl status network
● network.service - LSB: Bring up/down networking
   Loaded: loaded (/etc/rc.d/init.d/network)
   Active: active (exited) since Thu 2016-11-17 15:00:24 CST; 3h 16min ago
     Docs: man:systemd-sysv-generator(8)

[myoung@server ~]$ rpm -qa | grep -i ovirt
ovirt-engine-dwh-4.0.5-1.el7.centos.noarch
ovirt-vmconsole-proxy-1.0.4-1.el7.centos.noarch
ovirt-engine-userportal-4.0.5.5-1.el7.centos.noarch
ovirt-engine-extension-aaa-jdbc-1.1.1-1.el7.noarch
python-ovirt-engine-sdk4-4.0.2-1.el7.centos.x86_64
ovirt-iso-uploader-4.0.2-1.el7.centos.noarch
ovirt-imageio-proxy-0.4.0-0.201608310602.gita9b573b.el7.centos.noarch
ovirt-engine-tools-4.0.5.5-1.el7.centos.noarch
ovirt-engine-restapi-4.0.5.5-1.el7.centos.noarch
ovirt-engine-wildfly-10.1.0-1.el7.x86_64
ovirt-imageio-proxy-setup-0.4.0-0.201608310602.gita9b573b.el7.centos.noarch
ovirt-engine-websocket-proxy-4.0.5.5-1.el7.centos.noarch
ovirt-engine-webadmin-portal-4.0.5.5-1.el7.centos.noarch
ovirt-engine-vmconsole-proxy-helper-4.0.5.5-1.el7.centos.noarch
ovirt-engine-backend-4.0.5.5-1.el7.centos.noarch
ovirt-engine-sdk-python-3.6.9.1-1.el7.centos.noarch
ovirt-engine-dwh-setup-4.0.5-1.el7.centos.noarch
ovirt-engine-setup-plugin-websocket-proxy-4.0.5.5-1.el7.centos.noarch
ovirt-engine-dashboard-1.0.5-1.el7.centos.noarch
ovirt-engine-setup-plugin-vmconsole-proxy-helper-4.0.5.5-1.el7.centos.noarch
ovirt-engine-dbscripts-4.0.5.5-1.el7.centos.noarch
ovirt-engine-extensions-api-impl-4.0.5.5-1.el7.centos.noarch
ovirt-host-deploy-1.5.3-1.el7.centos.noarch
ovirt-engine-lib-4.0.5.5-1.el7.centos.noarch
ovirt-imageio-daemon-0.4.0-1.el7.noarch
ovirt-engine-cli-3.6.8.1-1.el7.centos.noarch
ovirt-vmconsole-host-1.0.4-1.el7.centos.noarch
ovirt-host-deploy-java-1.5.3-1.el7.centos.noarch
ovirt-image-uploader-4.0.1-1.el7.centos.noarch
ovirt-engine-tools-backup-4.0.5.5-1.el7.centos.noarch
ovirt-engine-setup-4.0.5.5-1.el7.centos.noarch
ovirt-release40-4.0.5-2.noarch
ovirt-engine-4.0.5.5-1.el7.centos.noarch
ovirt-engine-setup-base-4.0.5.5-1.el7.centos.noarch
ovirt-engine-setup-plugin-ovirt-engine-common-4.0.5.5-1.el7.centos.noarch
ovirt-vmconsole-1.0.4-1.el7.centos.noarch
ovirt-engine-setup-plugin-ovirt-engine-4.0.5.5-1.el7.centos.noarch
ovirt-setup-lib-1.0.2-1.el7.centos.noarch
ovirt-imageio-common-0.4.0-1.el7.noarch
ovirt-engine-wildfly-overlay-10.0.0-1.el7.noarch

[root@server ~]# uname -a
Linux server 3.10.0-327.el7.x86_64 #1 SMP Thu Nov 19 22:10:57 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

[root@server ~]# cat /etc/centos-release
CentOS Linux release 7.2.1511 (Core)

[root@server ~]# ping 192.168.2.1 -c 1
PING 192.168.2.1 (192.168.2.1) 56(84) bytes of data.
64 bytes from 192.168.2.1: icmp_seq=1 ttl=64 time=0.340 ms

--- 192.168.2.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms

[root@server ~]# ping 8.8.8.8 -c 1
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=46 time=20.1 ms

--- 8.8.8.8 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 20.164/20.164/20.164/0.000 ms

Comment 1 Dan Kenigsberg 2016-11-20 10:36:43 UTC
Your ifcfg-ovirtmgmt file seems to be manually written. Do you have vdsm-ovirtmgmt listed in `virsh -r net-list`?

Defining a network outside vdsm is not recommended, but if you do it, you should do it completely (e.g. cf. Bug 1301879, comment 1).

Can you share your /var/log/vdsm/supervdsm.log?

Comment 2 marcus young 2016-11-20 17:17:29 UTC
[myoung@server ~]$ virsh -r net-list
 Name                 State      Autostart     Persistent
----------------------------------------------------------


It's empty, so I created a file with that XML and created the network via virsh:

[myoung@server ~]$ cat file
<network>
  <name>vdsm-ovirtmgmt</name>
  <uuid>34127d8a-0f80-4888-839a-bbfcf12339dc</uuid>
  <forward mode='bridge'/>
  <bridge name='ovirtmgmt'/>
</network>

[myoung@server ~]$ virsh net-create file
Network vdsm-ovirtmgmt created from file

[myoung@server ~]$ virsh -r net-list
 Name                 State      Autostart     Persistent
----------------------------------------------------------
 vdsm-ovirtmgmt       active     no            no
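
Side note (a sketch only): `virsh net-create` builds a transient network, which is why Autostart and Persistent both show 'no' above. A persistent hand-made definition would use net-define/net-autostart instead, along these lines, though letting vdsm own this network remains the recommended path:

  virsh net-define file               # persistent definition from the same XML file
  virsh net-autostart vdsm-ovirtmgmt  # bring it up automatically with libvirtd
  virsh net-start vdsm-ovirtmgmt      # start it now (if not already active)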


Then I clicked 'Activate' in the UI, and this is what /var/log/vdsm/supervdsm.log shows during that event:

MainProcess|jsonrpc.Executor/6::DEBUG::2016-11-20 11:05:34,725::supervdsmServer::92::SuperVdsm.ServerCallback::(wrapper) call network_caps with () {}
MainProcess|jsonrpc.Executor/6::DEBUG::2016-11-20 11:05:34,732::commands::68::root::(execCmd) /usr/bin/taskset --cpu-list 0-3 /usr/sbin/tc qdisc show (cwd None)
MainProcess|jsonrpc.Executor/6::DEBUG::2016-11-20 11:05:34,737::commands::86::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
MainProcess|jsonrpc.Executor/6::DEBUG::2016-11-20 11:05:34,738::vsctl::57::root::(commit) Executing commands: /usr/bin/ovs-vsctl --oneline --format=json -- list Bridge -- list Port -- list Interface
MainProcess|jsonrpc.Executor/6::DEBUG::2016-11-20 11:05:34,738::commands::68::root::(execCmd) /usr/bin/taskset --cpu-list 0-3 /usr/bin/ovs-vsctl --oneline --format=json -- list Bridge -- list Port -- list Interface (cwd None)
MainProcess|jsonrpc.Executor/6::DEBUG::2016-11-20 11:05:34,745::commands::86::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
MainProcess|jsonrpc.Executor/6::DEBUG::2016-11-20 11:05:34,747::supervdsmServer::99::SuperVdsm.ServerCallback::(wrapper) return network_caps with {'bridges': {'ovirtmgmt': {'ipv6autoconf': True, 'addr': '192.168.2.125', 'cfg': {'PEERROUTES': 'yes', 'NAME': 'ovirtmgmt', 'DEFROUTE': 'yes', 'IPADDR': '192.168.2.125', 'PEERDNS': 'yes', 'ONBOOT': 'yes', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', 'BOOTPROTO': 'static', 'DEVICE': 'ovirtmgmt', 'TYPE': 'Bridge', 'GATEWAY': '192.168.2.1'}, 'ipv6addrs': ['2601:482:4300:5d:5e26:aff:fe01:d547/64'], 'gateway': '192.168.2.1', 'dhcpv4': False, 'netmask': '255.255.255.0', 'dhcpv6': True, 'stp': 'off', 'ipv4addrs': ['192.168.2.125/24'], 'mtu': '1500', 'ipv6gateway': 'fe80::12c3:7bff:fee1:d038', 'ports': ['em1'], 'opts': {'multicast_last_member_count': '2', 'hash_elasticity': '4', 'multicast_query_response_interval': '1000', 'group_fwd_mask': '0x0', 'multicast_snooping': '1', 'multicast_startup_query_interval': '3125', 'hello_timer': '76', 'multicast_querier_interval': '25500', 'max_age': '2000', 'hash_max': '512', 'stp_state': '0', 'topology_change_detected': '0', 'priority': '32768', 'multicast_membership_interval': '26000', 'root_path_cost': '0', 'root_port': '0', 'multicast_querier': '0', 'multicast_startup_query_count': '2', 'nf_call_iptables': '0', 'topology_change': '0', 'hello_time': '200', 'root_id': '8000.5c260a01d547', 'bridge_id': '8000.5c260a01d547', 'topology_change_timer': '0', 'ageing_time': '30000', 'nf_call_ip6tables': '0', 'gc_timer': '4991', 'nf_call_arptables': '0', 'group_addr': '1:80:c2:0:0:0', 'multicast_last_member_interval': '100', 'default_pvid': '1', 'multicast_query_interval': '12500', 'tcn_timer': '0', 'multicast_router': '1', 'vlan_filtering': '0', 'forward_delay': '1500'}}}, 'bondings': {}, 'nameservers': ['192.168.2.1'], 'nics': {'em1': {'ipv6gateway': '::', 'ipv6autoconf': True, 'addr': '', 'cfg': {'PEERROUTES': 'yes', 'BRIDGE': 'ovirtmgmt', 'IPV6INIT': 'yes', 'NM_CONTROLLED': 'no', 'NAME': 'em1', 'IPV6_PEERDNS': 'yes', 'DEFROUTE': 'yes', 'UUID': 'a6beda1e-adf1-4c82-8c34-6d6d67ad5c16', 'PEERDNS': 'yes', 'IPV4_FAILURE_FATAL': 'no', 'DEVICE': 'em1', 'BOOTPROTO': 'none', 'IPV6_DEFROUTE': 'yes', 'IPV6_AUTOCONF': 'yes', 'IPV6_FAILURE_FATAL': 'no', 'TYPE': 'Ethernet', 'ONBOOT': 'yes', 'IPV6_PEERROUTES': 'yes'}, 'ipv6addrs': [], 'mtu': '1500', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, 'ipv4addrs': [], 'hwaddr': '5c:26:0a:01:d5:47', 'speed': 1000, 'gateway': ''}}, 'supportsIPv6': True, 'vlans': {}, 'networks': {}}
MainProcess|jsonrpc.Executor/7::DEBUG::2016-11-20 11:05:35,768::supervdsmServer::92::SuperVdsm.ServerCallback::(wrapper) call getHardwareInfo with () {}
MainProcess|jsonrpc.Executor/7::DEBUG::2016-11-20 11:05:35,769::supervdsmServer::99::SuperVdsm.ServerCallback::(wrapper) return getHardwareInfo with {'systemProductName': 'Latitude E6510', 'systemUUID': '4C4C4544-004E-5610-8050-B9C04F314D31', 'systemSerialNumber': '9NVP1M1', 'systemVersion': '0001', 'systemManufacturer': 'Dell Inc.'}

Comment 3 marcus young 2016-11-20 19:51:46 UTC
You can close this. I ended up using hosted-engine --deploy.

I removed all my changes to em1 and ovirtmgmt (removing the latter) and ran 'engine-cleanup'.

The deploy failed because I had no idea how to set up the FQDN for the 2nd host, but it did configure the bridge.
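
For reference, the recovery path described above amounts to roughly the following (a hedged sketch; the exact package name and interactive prompts may differ by version):

  # drop the hand-made bridge config so the deploy tooling owns networking
  rm /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt   # and revert ifcfg-em1 to a plain NIC config
  engine-cleanup                                      # clear the previous engine-setup state
  yum install -y ovirt-hosted-engine-setup            # assumed package name for the deploy tool
  hosted-engine --deploy                              # interactive; this is what configured the ovirtmgmt bridge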

My only thought: would it be possible to make the network setup that hosted-engine performs part of 'engine-setup'?

The docs are pretty hard to follow in general: I'm never sure whether I'm looking at v3 or v4 docs, and the v4 docs don't seem to take a 'from nowhere to somewhere' approach. I didn't even know hosted-engine existed until I stumbled onto it.

Comment 4 Sandro Bonazzola 2016-11-21 12:10:43 UTC
(In reply to marcus young from comment #3)
> You can close this. I ended up using hosted-engine --deploy.

Closing this as not a bug, as per comment #3.

> I removed all my changes to em1 and ovirtmgmt (removing the latter) and ran
> 'engine-cleanup'.
> 
> The deploy failed because I had no idea how to set up the FQDN for the 2nd
> host, but it did configure the bridge.
> 
> My only thought: would it be possible to make the network setup that
> hosted-engine performs part of 'engine-setup'?
> 
> The docs are pretty hard to follow in general: I'm never sure whether I'm
> looking at v3 or v4 docs, and the v4 docs don't seem to take a 'from nowhere
> to somewhere' approach. I didn't even know hosted-engine existed until I
> stumbled onto it.

We're working on the documentation; we know it's not in good shape.

Comment 5 Derek 2016-11-21 13:33:50 UTC
May I ask which documents you were using?

Comment 6 marcus young 2016-11-21 13:49:33 UTC
Originally, when I configured oVirt v3, I ran into issues and used this guide: https://jebpages.com/2012/02/15/how-to-get-up-and-running-with-ovirt/

That got me working until recently, when I made the swap from 3 to 4.
With 4 I did what I remembered, which was basically just what was in that guide.

When I finally resolved this by using hosted-engine, I used this guide: https://www.ovirt.org/blog/2016/08/up-and-running-with-ovirt-4-0-and-gluster-storage/

I used this one because, on the documentation tab of ovirt.org, it's the only relevant one, titled 'Up and Running with oVirt 4.0', by Jason Brooks.

My only issue with the guide is that it isn't very descriptive about how to do the hosted-engine --deploy on the *only* box I'm setting up, i.e. Host1.

My VM host is essentially an all-in-one box for testing, so it's a single node (192.168.2.125). The deploy failed because it wouldn't allow me to install the engine onto the host itself (192.168.2.125); I needed to run it against a second host, which I didn't have and didn't want. But it did configure the vdsm network correctly, so after the script failed I was able to run 'engine-setup'.

I still see a possible bug or area for improvement: either pull the part of hosted-engine that configures the network out into its own setup script, or allow 'hosted-engine --deploy' to configure the host it's running from. Does that make sense?

