Bug 1160423
Summary: | hosted-engine --deploy doesn't copy DNS config to ovirtmgmt | |
---|---|---|---|
Product: | [oVirt] ovirt-hosted-engine-setup | Reporter: | rstory |
Component: | General | Assignee: | Simone Tiraboschi <stirabos> |
Status: | CLOSED CURRENTRELEASE | QA Contact: | Nikolai Sednev <nsednev> |
Severity: | medium | Docs Contact: | |
Priority: | high | ||
Version: | 1.2.1 | CC: | bazulay, biholcomb, bugs, cshao, danken, didi, fdeutsch, gklein, lsurette, lveyde, mavital, mburman, nsednev, rbalakri, rstory, sbonazzo, s.kieske, srevivo, stirabos, ykaul, ylavi |
Target Milestone: | ovirt-4.1.0-alpha | Keywords: | TestOnly, Triaged |
Target Release: | 2.1.0 | Flags: | stirabos: needinfo-, rule-engine: ovirt-4.1+, rule-engine: blocker+, rule-engine: planning_ack+, sbonazzo: devel_ack+, mavital: testing_ack+
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | |||
Fixed In Version: | Doc Type: | Bug Fix | |
Doc Text: |
Cause:
With a static IP configuration, ovirt-hosted-engine-setup wasn't copying the DNS configuration to ovirtmgmt when it was entered as DNS1, DNS2 in the ifcfg script of the interface used for the bridge. There was no issue if the DNS server was configured under /etc/resolv.conf.
Consequence:
The system lost name resolution capability once the management network went up.
Fix:
Correctly copy the DNS1 and DNS2 attributes to the management network.
Result:
Name resolution keeps working after the management bridge is created.
|
Story Points: | --- |
Clone Of: | Environment: | ||
Last Closed: | 2017-02-01 14:47:00 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | Integration | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | |||
Bug Depends On: | 1160667, 1326798, 1351095, 1358530 | ||
Bug Blocks: | 1304509 | ||
Attachments: |
Description
rstory
2014-11-04 18:47:15 UTC
Antoni, can you take a look? It is because we do not configure DNS with static IP configurations in VDSM. The first obstacle is that we do not report DNS configuration in the VDS capabilities (except as part of ifcfg files in the 'cfg' field), which makes it complicated to retrieve. The second obstacle is that it is not part of the setupNetworks API. If everything was configured with ifcfg files, a hack would be to retrieve it from 'cfg' and pass it to the setupNetworks command using the legacy ifcfg option passthrough capability: 1. Filter the 'cfg' keys of the device chosen for connection for those that start with 'DNS'. 2. Add them to the vdscli setupNetworks command that hosted-engine deploy sends. The above would be necessary for having it work with the ifcfg configurator in 3.5. However, for 3.6 I'd rather add DNS configuration to the setupNetworks API and also report it in the caps.

Re-targeting to 3.6 as per comment #2. Antoni, can you open a BZ on VDSM for changing the API and make it block this one? Sure.

Without a working DNS configuration, host-deploy cannot download the required packages to deploy the host, so the user cannot deploy hosted-engine. Increasing the severity.

Reducing severity to medium since there's no critical data loss or system breakage caused by this bug. A proper network configuration is required for setting up the system. To get all the packages installed, ovirt-host-deploy-offline can be installed before moving the host to the network without the proper DNS. Note that offline setup is not something we're supporting in this case, but it should do the trick. You can also fix the DNS configuration manually after the bridge creation and retry the setup.

*** Bug 1222323 has been marked as a duplicate of this bug. ***

Can you please try to recreate this on 3.6? Impossible until 1253939 is fixed.
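The passthrough hack proposed in the comment above could be sketched roughly as below. This is an illustration only, not actual hosted-engine-setup code: the helper name `dns_passthrough_options` and the `caps` fragment are hypothetical, assuming the capabilities report exposes the raw ifcfg keys in a per-NIC 'cfg' dict as the comment describes.

```python
# Hypothetical sketch of the proposed workaround: pull DNS* keys out of the
# 'cfg' dict reported for the NIC chosen for the management bridge, so they
# can be passed through to setupNetworks via the legacy ifcfg option
# passthrough. Names are illustrative, not real hosted-engine-setup code.

def dns_passthrough_options(iface_caps):
    """Return the DNS* ifcfg keys (DNS1, DNS2, ...) found in a NIC's 'cfg' dict."""
    cfg = iface_caps.get('cfg', {})
    return {key: value for key, value in cfg.items() if key.startswith('DNS')}

# Example: a capabilities fragment for the NIC selected for the bridge.
caps = {'cfg': {'BOOTPROTO': 'none', 'IPADDR': '192.168.1.12',
                'DNS1': '192.168.1.1', 'DNS2': '8.8.8.8'}}

# These extra keys would then be merged into the network definition that
# hosted-engine --deploy sends to setupNetworks.
options = dns_passthrough_options(caps)
print(options)
```

The filtering step corresponds to point 1 of the comment; merging `options` into the setupNetworks call would be point 2.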
I tried to recreate this on 3.6 with RHEL 7.2 and it seems that I can't recreate it, with the following steps:
1) Clean RHEL 7.2
2) Set a static IP on the 'eno1' interface and DNS1=<dns> via ifcfg-eno1 manually, and restarted the network service
3) Workaround for BZ-1253939: route add default gw <gateway>
4) hosted-engine --deploy passed with success
5) cat /etc/resolv.conf shows the configured DNS remained

"DNS1 set in ovirtmgmt config."? Should DNS1 appear in ifcfg-ovirtmgmt after deployment? Not clear from the description above. Anyway, DNS1 is not set in ifcfg-ovirtmgmt. Please review my steps and see if it works for you, thanks.

Hi Yaniv, please see comment 12 provided by Michael; per it, the bug was not reproduced.

There is an ask for you in comment #12: "Please review my steps and see if it works for you, thanks." Please reply.

(In reply to Yaniv Dary from comment #14) > There is an ask for you in comment #12. > > "Please review my steps and see if it works for you, thanks." > > Please reply. The steps are OK for me and deployment works as expected, after the workaround for 1253939 was applied: "3) Workaround for BZ-1253939: route add default gw <gateway>". Should be retested when dependencies are resolved.

This is an automated message. oVirt 3.6.0 RC1 has been released. This bug has no target release and still has its target milestone set to 3.6.0-rc. Please review this bug and set the target milestone and release to one of the next releases.

(In reply to Nikolai Sednev from comment #15) > The steps are OK for me and deployment works as expected, after the workaround for > 1253939 was applied: "3) Workaround for BZ-1253939: route add default gw > <gateway>". > > Should be retested when dependencies are resolved.
Moving to QA since bug #1253939 has moved to ON_QA.

I don't think we should consider this as ON_QA; what mburman tested is a different use case (probably the most typical one, and this is good, but not the only one). The issue is that, as for bug 1160667, setupNetworks doesn't accept a dns attribute, so we are surely losing the DNS1 value when, creating the bridge, we move from /etc/sysconfig/network-scripts/ifcfg-eth0 to /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt. Indeed, at the end we will have:

[root@c72het20160127h1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
# Generated by VDSM version 4.17.21-0.el7.centos
DEVICE=eth0
HWADDR=00:1a:4a:4f:bd:02
BRIDGE=ovirtmgmt
ONBOOT=yes
NM_CONTROLLED=no
IPV6INIT=no

[root@c72het20160127h1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt
# Generated by VDSM version 4.17.21-0.el7.centos
DEVICE=ovirtmgmt
TYPE=Bridge
DELAY=0
STP=off
ONBOOT=yes
BOOTPROTO=dhcp
DEFROUTE=yes
NM_CONTROLLED=no
IPV6INIT=no
HOTPLUG=no

but /etc/resolv.conf remains untouched, so if a DNS server was already configured there it will still be there; we are, however, surely losing it in ifcfg-ovirtmgmt.

Target release should be placed once a package build is known to fix an issue. Since this bug is not modified, the target version has been reset. Please use the target milestone to plan a fix for an oVirt release.

Hi Simone, we'll probably need the RN (release notes) for this issue to be added to 3.6. Can you provide a status update on this one? AFAIK it's still blocked by 1160667 since we need an API for that.

Additional information. This is on the 3.6.4 release with a hosted-engine install. 1. /etc/resolv.conf does get changed by the bridge creation process. After the bridge is created the file has only comments and my DNS servers are both gone from the file. 2. If there is DNS information in the ifcfg file it needs to be copied over to the bridge ifcfg. I understand this is dependent on the API changes.
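For context on why dropping DNS1/DNS2 from the bridge ifcfg loses resolution: on EL7 the legacy network service derives the nameserver lines in /etc/resolv.conf from the DNSn= keys of the active ifcfg file. The sketch below is an illustration of that mapping, not the actual initscripts code; `resolv_lines` is a hypothetical helper.

```python
# Illustrative sketch (not the real initscripts logic): extract DNSn= values
# from ifcfg-style content, in numeric order, and render the corresponding
# "nameserver" lines that would land in /etc/resolv.conf.
import re

def resolv_lines(ifcfg_text):
    """Map DNS1=, DNS2=, ... keys to resolv.conf nameserver lines."""
    entries = {}
    for line in ifcfg_text.splitlines():
        m = re.match(r'DNS(\d+)=(\S+)', line.strip())
        if m:
            entries[int(m.group(1))] = m.group(2)
    return ['nameserver %s' % entries[n] for n in sorted(entries)]

# Example using the DNS keys from the VLAN ifcfg quoted later in this bug.
ifcfg = """DEVICE=enp4s0f0.50
DNS1=10.0.100.13
DNS2=10.0.140.10
BOOTPROTO=none
"""
print(resolv_lines(ifcfg))
```

Since ifcfg-ovirtmgmt is written without any DNSn= keys, the same mechanism produces no nameserver lines the next time the network is (re)started, which matches the reports above.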
In my opinion this is very serious since it causes the entire install to fail, and since the deployment process is not very good at recovering, the remedy is to wipe everything out (packages, files, etc.) and go through the process again. What's really bad is that the failure occurs at the last step, when it can't find the Engine VM. I've worked around it by editing the bridge ifcfg file and restarting the network, but that's not a long-term solution. I'd also suggest that the deployment could ask for DNS servers.

And more information. I had a working install and added a logical network with VLAN 110. I then associated it with a physical NIC port. When I checked, ifcfg-ovirtmgmt still had the DNS entries in it but /etc/resolv.conf had no DNS entries anymore. I restarted the network and the DNS servers showed back up. I am running the oVirt 3.6.4 release on CentOS 7.2 (1511) on a hosted-engine deployment.

I tried with 3.6.4 with a static configuration under /etc/sysconfig/network-scripts/ifcfg-eth0 and DNS manually configured under /etc/resolv.conf, and it worked as expected without losing the DNS manually configured under /etc/resolv.conf. biholcomb, are you sure that you also removed PEERDNS=yes from your /etc/sysconfig/network-scripts/ifcfg-eth0? Can you please share it?

Yes, there was no PEERDNS in any of my ifcfg files. I use the DNS= entries. Before install I had this. The NIC is on VLAN 50.

ifcfg-enp4s0f0
TYPE=ETHERNET
NAME=enp4s0f0
DEVICE=enp4s0f0
ONBOOT=yes
BOOTPROTO=none
NM_CONTROLLED=no

ifcfg-enp4s0f0.50
DEVICE=enp4s0f0.50
NAME=enp4s0f0.50
VLAN=yes
ONBOOT=yes
NM_CONTROLLED=no
IPV6INIT=no
HOTPLUG=no
DELAY=0
STP=off
IPADDR=10.0.50.10
PREFIX=255.255.255.0
GATEWAY=10.0.50.1
DNS1=10.0.100.13
DNS2=10.0.140.10
BOOTPROTO=none
DEFROUTE=yes

I've never used the PEERDNS function but on CentOS 7 the /etc/resolv.conf always gets filled out.

(In reply to biholcomb from comment #27) > I've never used the PEERDNS function but on CentOS 7 the /etc/resolv.conf > always gets filled out.
To clarify: on none of my CentOS 7 systems have I used PEERDNS, and it's not even in the files. When I start the system or run systemctl restart network, it populates the /etc/resolv.conf file with the DNSx= entries from the ifcfg-* file.

Moving from 4.0 alpha to 4.0 beta since 4.0 alpha has already been released and the bug is not ON_QA.

oVirt 4.0 beta has been released, moving to RC milestone.

shaochen, can you see if the changes of https://bugzilla.redhat.com/show_bug.cgi?id=1361017#c9 make the DNS entry stay in /etc/resolv.conf?

(In reply to Dan Kenigsberg from comment #33) > shaochen, can you see if the changes of > https://bugzilla.redhat.com/show_bug.cgi?id=1361017#c9 > make the DNS entry stay in /etc/resolv.conf? The DNS entry stays in /etc/resolv.conf after deploying HE with a static IP.

Before deploying HE:

# systemctl restart NetworkManager
[root@dell-op790-01 ~]# systemctl status NetworkManager
● NetworkManager.service - Network Manager
   Loaded: loaded (/usr/lib/systemd/system/NetworkManager.service; enabled; vendor preset: enabled)
   Active: active (running) since Thu 2016-08-04 17:27:03 CST; 6s ago
 Main PID: 19765 (NetworkManager)
   CGroup: /system.slice/NetworkManager.service
           └─19765 /usr/sbin/NetworkManager --no-daemon

[root@dell-op790-01 ~]# cat /etc/resolv.conf
# Generated by NetworkManager
search qe.lab.eng.nay.redhat.com
nameserver 10.73.2.107

# cat /etc/NetworkManager/conf.d/90-vdsm-monitor-connection-files.conf
[main]
monitor-connection-files=false

After deploying HE:

# hosted-engine --vm-status
/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py:15: DeprecationWarning: vdscli uses xmlrpc.
since ovirt 3.6 xmlrpc is deprecated, please use vdsm.jsonrpcvdscli import vdsm.vdscli

--== Host 1 status ==--

Status up-to-date                  : True
Hostname                           : cshao790.redhat.com
Host ID                            : 1
Engine status                      : {"health": "good", "vm": "up", "detail": "up"}
Score                              : 3400
stopped                            : False
Local maintenance                  : False
crc32                              : 43442dfb
Host timestamp                     : 2712
Extra metadata (valid at timestamp):
    metadata_parse_version=1
    metadata_feature_version=1
    timestamp=2712 (Thu Aug 4 18:00:53 2016)
    host-id=1
    score=3400
    maintenance=False
    state=EngineUp
    stopped=False

# cat /etc/resolv.conf
# Generated by NetworkManager
search redhat.com
nameserver 10.73.2.107

(In reply to shaochen from comment #34) > (In reply to Dan Kenigsberg from comment #33) > > shaochen, can you see if the changes of > > https://bugzilla.redhat.com/show_bug.cgi?id=1361017#c9 > > make the DNS entry stay in /etc/resolv.conf? > > The DNS entry stays in /etc/resolv.conf after deploying HE with a static IP. Test version: redhat-virtualization-host-4.0-20160727.1, ovirt-hosted-engine-setup-2.0.1.3-1.el7ev.noarch.

I just saw with danken that the DNS was removed once we added a rhvh-4.0-0.20160714.0+1 host to rhev-m 4.0.2.4-0.1.el7ev.
I configured an interface with a static IP, set DNS via cockpit-0.108-1.el7.x86_64, and added the host to rhev-m; after the host was added, the name server was gone from /etc/resolv.conf.

Before adding to rhev-m:
[root@navy-vds1 ~]# cat /etc/resolv.conf
# Generated by NetworkManager
search qa.lab.tlv.redhat.com
nameserver 10.35.64.1

After adding to rhev-m:
[root@navy-vds1 etc]# cat /etc/resolv.conf
# Generated by NetworkManager
search qa.lab.tlv.redhat.com
# No nameservers found; try putting DNS servers into your
# ifcfg files in /etc/sysconfig/network-scripts like so:
#
# DNS1=xxx.xxx.xxx.xxx
# DNS2=xxx.xxx.xxx.xxx
# DOMAIN=lab.foo.com bar.foo.com

NM can be told to not update resolv.conf by using the following configuration:
[main]
dns=none

I know it's the last build for 4.0.2, but since we have a serious issue with DNS being dropped from /etc/resolv.conf by NetworkManager, I'm asking to disable NM today. Ahh, sorry, I meant to move bug 1364126 to 4.0.2.

Moving to 4.1 since it requires bug #1160667 to be fixed and it's targeted to 4.1.

(In reply to Sandro Bonazzola from comment #40) > Moving to 4.1 since it requires bug #1160667 to be fixed and it's targeted > to 4.1. I don't think this bug depends on bug 1160667. You don't really need Engine-side control of host DNS. All that you need here is for Vdsm not to forget nameservers that are preconfigured in /etc/resolv.conf. This is going to be fixed by bug 1351095 in 4.0.4. Please double-check; I think that this bug can go to MODIFIED right away.

(In reply to Dan Kenigsberg from comment #41) > I don't think this bug depends on bug 1160667. You don't really need > Engine-side control of host DNS. All that you need here is for Vdsm not to > forget nameservers that are preconfigured in /etc/resolv.conf. This is going > to be fixed by bug 1351095 in 4.0.4. DNS entries configured at system level in /etc/resolv.conf shouldn't be an issue, but the original request here explicitly asks '1.
configure eth0 with static ip and set DNS1=8.8.8.8', which AFAIK is currently still an issue since we are going to lose it when creating the bridge. Dan, see comment #42.

How about adding step 1.5: "run `ifup eth0` after adding DNS1"? That would make the dormant config available system-wide, and persisted to ovirtmgmt. I suspect that it would satisfy the original poster's request. Note that even bug 1160667 would not magically copy dormant DNS configuration from the ifcfg file. A user would have to explicitly set it via the Engine API. Note that 4.0.4's vdsm already has the ability to set the 'nameservers' attribute on the management network, so if hosted-engine-setup would like to add such an argument, it could pass it on to Vdsm.

Simone, can you check if the vdsm-side changes are enough to unblock this bug?

It has been fixed by patch https://gerrit.ovirt.org/#/c/61184/ (cherry-picked on 4.0 as https://gerrit.ovirt.org/#/c/62360 ). Since, with that patch, VDSM autonomously takes a value for nameservers from DNS1 and DNS2 on the selected NIC:

ipv4, ipv6, mtu, nameservers = self._getIfaceConfValues(nic)

and

if iface.nameservers is None:
    nameservers = [confParams[key] for key in ('DNS1', 'DNS2') if key in confParams]

no further patches are required on the hosted-engine-setup side.
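The fallback quoted above can be illustrated with a minimal, self-contained sketch. The `pick_nameservers` function name is hypothetical; only the list-comprehension fallback mirrors the VDSM snippet quoted from the patch, and the rest is simplified scaffolding for demonstration.

```python
# Minimal sketch of the quoted fallback: when no explicit nameservers were
# requested for the network, fall back to the DNS1/DNS2 keys found in the
# existing ifcfg parameters of the selected NIC. Simplified illustration,
# not the actual VDSM code path.

def pick_nameservers(requested, conf_params):
    """Return explicit nameservers if given, else DNS1/DNS2 from ifcfg params."""
    if requested is not None:
        return requested
    # Same shape as the quoted VDSM logic: preserve DNS1-then-DNS2 order,
    # skipping keys that are absent.
    return [conf_params[key] for key in ('DNS1', 'DNS2') if key in conf_params]

conf = {'DEVICE': 'eth0', 'DNS1': '192.168.1.1', 'DNS2': '8.8.8.8'}
print(pick_nameservers(None, conf))         # falls back to the ifcfg values
print(pick_nameservers(['1.1.1.1'], conf))  # an explicit value wins
```

This is why no hosted-engine-setup change was needed: the fallback happens on the VDSM side when the bridge definition arrives without a nameservers value.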
Verified with 4.18.999-712.git1ea95da on master.

Before hosted-engine deployment:

[root@c72he20161011h1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE="Ethernet"
BOOTPROTO="dhcp"
DEFROUTE="yes"
PEERDNS="yes"
PEERROUTES="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_PEERDNS="yes"
IPV6_PEERROUTES="yes"
IPV6_FAILURE_FATAL="no"
NAME="eth0"
UUID="d5caa99b-8fc3-4a4c-bb22-48eb2bd2b445"
DEVICE="eth0"
NM_CONTROLLED="no"
ONBOOT="yes"
BOOTPROTO=static
IPADDR=192.168.1.12
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
DNS1=192.168.1.1
DNS2=8.8.8.8

After hosted-engine deployment:

[root@c72he20161011h1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
# Generated by VDSM version 4.18.999-712.git1ea95da.el7.centos
DEVICE=eth0
BRIDGE=ovirtmgmt
ONBOOT=yes
MTU=1500
NM_CONTROLLED=no
IPV6INIT=no

[root@c72he20161011h1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt
# Generated by VDSM version 4.18.999-712.git1ea95da.el7.centos
DEVICE=ovirtmgmt
TYPE=Bridge
DELAY=0
STP=off
ONBOOT=yes
IPADDR=192.168.1.12
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
BOOTPROTO=none
MTU=1500
DEFROUTE=yes
NM_CONTROLLED=no
IPV6INIT=no
DNS1=192.168.1.1
DNS2=8.8.8.8

[root@c72he20161011h1 ~]# cat /var/lib/vdsm/persistence/netconf/nets/ovirtmgmt
{
    "ipv6autoconf": false,
    "nameservers": ["192.168.1.1", "8.8.8.8"],
    "nic": "eth0",
    "ipaddr": "192.168.1.12",
    "netmask": "255.255.255.0",
    "mtu": 1500,
    "switch": "legacy",
    "dhcpv6": false,
    "stp": false,
    "bridged": true,
    "gateway": "192.168.1.1",
    "defaultRoute": true
}

[root@c72he20161011h1 ~]# cat /etc/resolv.conf
nameserver 192.168.1.1
nameserver 8.8.8.8

Moving back to MODIFIED since we don't have an official build for QE.

1) Checked cat /etc/resolv.conf on the DHCP-configured interface enp5s0f0 on a clean el7.3 host:
# Generated by NetworkManager
search qa.lab.tlv.redhat.com
nameserver 10.35.64.1
nameserver 10.35.255.6
2) Using NM, I've manually configured a static IP + gateway + 2 DNS servers.
3) Rebooted the host.
4) Checked the DNS configuration again after the restart:
# cat /etc/resolv.conf
# Generated by NetworkManager
search qa.lab.tlv.redhat.com
nameserver 10.35.64.1
nameserver 10.35.255.6
5) Checked that no bridged NIC had yet been created on the host.
6) Started deployment of HE on the host.
7) Deployment got stuck at the stage [ INFO ] Configuring the management bridge
8) I got disconnected from the host and lost connectivity.

Created attachment 1231196 [details]
Screenshot from 2016-12-13 15-20-37.png
Created attachment 1231197 [details]
Screenshot from 2016-12-13 15-24-25.png
Created attachment 1231204 [details]
Screenshot from 2016-12-13 15-31-56.png
Attachments from comments 49-51 were made after I was disconnected from the host. NM was active and running on the host.

Created attachment 1231206 [details]
Screenshot from 2016-12-13 15-37-32.png
Created attachment 1231210 [details]
Screenshot from 2016-12-13 15-41-39.png
Via the remote console I saw that NetworkManager was still active on the host. I had to fix the connectivity problem by running these on the host:
ip route add 10.35.117.0/24 dev ovirtmgmt
route add default gw 10.35.117.254
Then connectivity was eventually restored. Now attaching logs from the host.

Created attachment 1231230 [details]
sosreport from alma03
All the required info, including the gateway and the DNS servers, reached supervdsm, which failed applying them:

MainProcess|jsonrpc/2::DEBUG::2016-12-13 14:28:51,234::legacy_switch::461::root::(add_missing_networks) Adding network u'ovirtmgmt'
MainProcess|jsonrpc/2::INFO::2016-12-13 14:28:51,235::netconfpersistence::58::root::(setNetwork) Adding network ovirtmgmt({'ipv6autoconf': False, 'nameservers': ['10.35.64.1', '10.35.255.6'], u'nic': u'enp5s0f0', u'ipaddr': u'10.35.117.24', u'netmask': u'255.255.255.255', 'mtu': 1500, 'switch': 'legacy', 'dhcpv6': False, 'stp': False, 'bridged': True, u'gateway': u'10.35.117.254', u'defaultRoute': True})
MainProcess|jsonrpc/2::DEBUG::2016-12-13 14:28:51,235::legacy_switch::204::root::(_add_network) Validating network...
MainProcess|jsonrpc/2::INFO::2016-12-13 14:28:51,235::legacy_switch::215::root::(_add_network) Adding network ovirtmgmt with vlan=None, bonding=None, nic=enp5s0f0, mtu=1500, bridged=True, defaultRoute=True, options={'switch': 'legacy', 'stp': False}
MainProcess|jsonrpc/2::INFO::2016-12-13 14:28:51,236::legacy_switch::242::root::(_add_network) Configuring device ovirtmgmt
MainProcess|jsonrpc/2::DEBUG::2016-12-13 14:28:51,242::ifcfg::536::root::(_persistentBackup) backing up ifcfg-ovirtmgmt: # original file did not exist
MainProcess|jsonrpc/2::DEBUG::2016-12-13 14:28:51,242::ifcfg::441::root::(writeBackupFile) Persistently backed up /var/lib/vdsm/netconfback/ifcfg-ovirtmgmt (until next 'set safe config')
MainProcess|jsonrpc/2::DEBUG::2016-12-13 14:28:51,243::ifcfg::600::root::(writeConfFile) Writing to file /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt configuration:
# Generated by VDSM version 4.18.999-1020.git1ff41b1.el7.centos
DEVICE=ovirtmgmt
TYPE=Bridge
DELAY=0
STP=off
ONBOOT=yes
IPADDR=10.35.117.24
NETMASK=255.255.255.255
GATEWAY=10.35.117.254
BOOTPROTO=none
MTU=1500
DEFROUTE=yes
NM_CONTROLLED=no
IPV6INIT=no
DNS1=10.35.64.1
DNS2=10.35.255.6
MainProcess|jsonrpc/2::DEBUG::2016-12-13 14:28:51,247::commands::69::root::(execCmd) /usr/bin/taskset --cpu-list 0-3 /usr/sbin/ifdown ovirtmgmt (cwd None)
MainProcess|jsonrpc/2::DEBUG::2016-12-13 14:28:52,319::commands::93::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
MainProcess|jsonrpc/2::DEBUG::2016-12-13 14:28:52,344::commands::69::root::(execCmd) /usr/bin/taskset --cpu-list 0-3 /sbin/ip -4 addr flush dev enp5s0f0 scope global (cwd None)
MainProcess|jsonrpc/2::DEBUG::2016-12-13 14:28:52,348::commands::93::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
MainProcess|jsonrpc/2::DEBUG::2016-12-13 14:28:52,349::ifcfg::499::root::(_atomicBackup) Backed up /etc/sysconfig/network-scripts/ifcfg-enp5s0f0
MainProcess|jsonrpc/2::DEBUG::2016-12-13 14:28:52,349::ifcfg::536::root::(_persistentBackup) backing up ifcfg-enp5s0f0:
# Generated by parse-kickstart
IPV6INIT="yes"
BOOTPROTO=none
DEVICE="enp5s0f0"
ONBOOT="yes"
UUID="c412e8f2-6475-4c9e-bfc9-ee1a0a622f85"
TYPE=Ethernet
DNS1=10.35.64.1
DNS2=10.35.255.6
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
NAME="System enp5s0f0"
IPADDR=10.35.117.24
PREFIX=32
GATEWAY=10.35.117.254
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
MainProcess|jsonrpc/2::DEBUG::2016-12-13 14:28:52,350::ifcfg::441::root::(writeBackupFile) Persistently backed up /var/lib/vdsm/netconfback/ifcfg-enp5s0f0 (until next 'set safe config')
MainProcess|jsonrpc/2::DEBUG::2016-12-13 14:28:52,350::ifcfg::600::root::(writeConfFile) Writing to file /etc/sysconfig/network-scripts/ifcfg-enp5s0f0 configuration:
# Generated by VDSM version 4.18.999-1020.git1ff41b1.el7.centos
DEVICE=enp5s0f0
BRIDGE=ovirtmgmt
ONBOOT=yes
MTU=1500
DEFROUTE=no
NM_CONTROLLED=no
IPV6INIT=no

This bug had the requires_doc_text flag, yet no documentation text was provided. Please add the documentation text and only then set this flag.
(In reply to Simone Tiraboschi from comment #57) > All the required info including the gateway and the DNS servers reached > supervdsm, which failed applying them:

Simone, could you elaborate on that? I do not see a failure in your log extract, and ifcfg-ovirtmgmt seems fine, with DNS and GATEWAY properly set:

> # Generated by VDSM version 4.18.999-1020.git1ff41b1.el7.centos
> DEVICE=ovirtmgmt
> TYPE=Bridge
> DELAY=0
> STP=off
> ONBOOT=yes
> IPADDR=10.35.117.24
> NETMASK=255.255.255.255
> GATEWAY=10.35.117.254
> BOOTPROTO=none
> MTU=1500
> DEFROUTE=yes
> NM_CONTROLLED=no
> IPV6INIT=no
> DNS1=10.35.64.1
> DNS2=10.35.255.6

Nikolay, Vdsm has accepted DNS args for a while now. Do you see a problem when NM is stopped on deployment? On another occasion?

(In reply to Dan Kenigsberg from comment #61) > Nikolay, Vdsm has accepted DNS args for a while now. Do you see a problem when NM > is stopped on deployment? On another occasion?

I did not see any problems with deployment on DHCP-configured hosts. I also tried to deploy on DHCP-configured hosts with NM disabled and enabled; both ways the deployment was successful. Regarding a manually configured physical interface via nmtui, I failed to get the ovirtmgmt bridge configured as described in this bug.

Nikolay, configuring an interface via NM is not yet supported.
Doing so via nmtui (instead of cockpit) is unlikely to be supported any time soon. If DNS is copied to ovirtmgmt when interfaces are manually defined via ifcfg, the bug should be verified. You are most welcome to open a fresh bug regarding NM-configured interfaces, but it should be blocked by bug 1326798.

Nikolay, do you see a problem when NM is never used on deployment?

(In reply to Dan Kenigsberg from comment #65) > Nikolay, do you see a problem when NM is never used on deployment? Sorry, I don't understand your question. Could you please rephrase it? If you mean that NM is never used, but the host is configured to use manual IP configuration using the shell and /etc/sysconfig/network-scripts, then I do not know the answer, as I'm not using such manual configurations. In my reproduction I used NM for the initial configuration of the host with static IP addressing and proper DNS servers. Please see what I did in https://bugzilla.redhat.com/show_bug.cgi?id=1160423#c48 for any additional details. If you prefer to test this scenario using manual configuration without NM, then that will be another test flow, which I did not test.

(In reply to Nikolai Sednev from comment #66) > Sorry, I don't understand your question. Could you please rephrase it?
> > If you mean that NM is never used, but the host is configured to use manual IP > > configuration using the shell and /etc/sysconfig/network-scripts, then I do not > > know the answer, as I'm not using such manual configurations. > > Defining interfaces via NM is not yet supported. It would be only when bug > 1326798 is solved. > > > If you prefer to test this scenario using manual configuration without NM, > > then that will be another test flow, which I did not test. > > Please do!

(In reply to Dan Kenigsberg from comment #67) > Please do! Works perfectly with a manually performed static IP configuration on a fresh RHEL 7.3 host with NetworkManager turned off. I've tried this with these components on the host:

ovirt-vmconsole-host-1.0.4-1.el7ev.noarch
mom-0.5.8-1.el7ev.noarch
ovirt-hosted-engine-setup-2.1.0-0.0.master.20161221071755.git46cacd3.el7.centos.noarch
ovirt-setup-lib-1.1.0-1.el7.centos.noarch
libvirt-client-2.0.0-10.el7_3.2.x86_64
ovirt-release41-pre-4.1.0-0.6.beta2.20161221025826.gitc487776.el7.centos.noarch
ovirt-vmconsole-1.0.4-1.el7ev.noarch
qemu-kvm-rhev-2.6.0-28.el7_3.2.x86_64
ovirt-hosted-engine-ha-2.1.0-0.0.master.20161221070856.20161221070854.git387fa53.el7.centos.noarch
sanlock-3.4.0-1.el7.x86_64
ovirt-host-deploy-1.6.0-0.0.master.20161215101008.gitb76ad50.el7.centos.noarch
ovirt-engine-sdk-python-3.6.9.1-1.el7ev.noarch
ovirt-imageio-common-0.5.0-0.201611201242.gitb02532b.el7.centos.noarch
vdsm-4.18.999-1218.gitd36143e.el7.centos.x86_64
ovirt-imageio-daemon-0.5.0-0.201611201242.gitb02532b.el7.centos.noarch
Linux version 3.10.0-514.2.2.el7.x86_64 (mockbuild.eng.bos.redhat.com) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-11) (GCC) ) #1 SMP Wed Nov 16 13:15:13 EST 2016
Linux 3.10.0-514.2.2.el7.x86_64 #1 SMP Wed Nov 16 13:15:13 EST 2016 x86_64 x86_64 x86_64 GNU/Linux
Red Hat Enterprise Linux Server release 7.3 (Maipo)

Components on the engine:
ovirt-engine-setup-plugin-ovirt-engine-common-4.1.0-0.3.beta2.20161221085908.el7.centos.noarch
ovirt-imageio-proxy-0.5.0-0.201611201242.gitb02532b.el7.centos.noarch
ovirt-iso-uploader-4.1.0-0.0.master.20160909154152.git14502bd.el7.centos.noarch
ovirt-engine-userportal-4.1.0-0.3.beta2.20161221085908.el7.centos.noarch
ovirt-engine-dbscripts-4.1.0-0.3.beta2.20161221085908.el7.centos.noarch
ovirt-engine-setup-plugin-vmconsole-proxy-helper-4.1.0-0.3.beta2.20161221085908.el7.centos.noarch
ovirt-engine-extensions-api-impl-4.1.0-0.3.beta2.20161221085908.el7.centos.noarch
ovirt-imageio-common-0.5.0-0.201611201242.gitb02532b.el7.centos.noarch
ovirt-host-deploy-1.6.0-0.0.master.20161215101008.gitb76ad50.el7.centos.noarch
python-ovirt-engine-sdk4-4.1.0-0.1.a0.20161215git77fce51.el7.centos.x86_64
ovirt-host-deploy-java-1.6.0-0.0.master.20161215101008.gitb76ad50.el7.centos.noarch
ovirt-release41-pre-4.1.0-0.6.beta2.20161221025826.gitc487776.el7.centos.noarch
ovirt-setup-lib-1.1.0-1.el7.centos.noarch
ovirt-engine-extension-aaa-jdbc-1.1.2-1.el7.noarch
ovirt-engine-dwh-setup-4.1.0-0.0.master.20161129154019.el7.centos.noarch
ovirt-imageio-proxy-setup-0.5.0-0.201611201242.gitb02532b.el7.centos.noarch
ovirt-engine-tools-backup-4.1.0-0.3.beta2.20161221085908.el7.centos.noarch
ovirt-engine-websocket-proxy-4.1.0-0.3.beta2.20161221085908.el7.centos.noarch
ovirt-engine-setup-4.1.0-0.3.beta2.20161221085908.el7.centos.noarch
ovirt-engine-backend-4.1.0-0.3.beta2.20161221085908.el7.centos.noarch
ovirt-engine-tools-4.1.0-0.3.beta2.20161221085908.el7.centos.noarch
ovirt-engine-webadmin-portal-4.1.0-0.3.beta2.20161221085908.el7.centos.noarch
ovirt-engine-restapi-4.1.0-0.3.beta2.20161221085908.el7.centos.noarch
ovirt-engine-vmconsole-proxy-helper-4.1.0-0.3.beta2.20161221085908.el7.centos.noarch
ovirt-engine-setup-plugin-ovirt-engine-4.1.0-0.3.beta2.20161221085908.el7.centos.noarch
ovirt-engine-wildfly-overlay-10.0.0-1.el7.noarch
ovirt-engine-cli-3.6.9.2-1.el7.centos.noarch
ovirt-web-ui-0.1.1-2.el7.centos.x86_64
ovirt-engine-setup-base-4.1.0-0.3.beta2.20161221085908.el7.centos.noarch
ovirt-vmconsole-1.0.4-1.el7.centos.noarch
ovirt-engine-dwh-4.1.0-0.0.master.20161129154019.el7.centos.noarch
ovirt-engine-setup-plugin-websocket-proxy-4.1.0-0.3.beta2.20161221085908.el7.centos.noarch
ovirt-engine-hosts-ansible-inventory-4.1.0-0.3.beta2.20161221085908.el7.centos.noarch
ovirt-engine-dashboard-1.1.0-0.4.20161128git5ed6f96.el7.centos.noarch
ovirt-engine-4.1.0-0.3.beta2.20161221085908.el7.centos.noarch
ovirt-guest-agent-common-1.0.13-1.20161220085008.git165fff1.el7.centos.noarch
ovirt-engine-sdk-python-3.6.9.1-1.el7.centos.noarch
ovirt-engine-wildfly-10.1.0-1.el7.x86_64
ovirt-engine-lib-4.1.0-0.3.beta2.20161221085908.el7.centos.noarch
ovirt-vmconsole-proxy-1.0.4-1.el7.centos.noarch
Linux version 3.10.0-514.2.2.el7.x86_64 (builder.centos.org) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-11) (GCC) ) #1 SMP Tue Dec 6 23:06:41 UTC 2016
Linux 3.10.0-514.2.2.el7.x86_64 #1 SMP Tue Dec 6 23:06:41 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
CentOS Linux release 7.3.1611 (Core)

1. I've reprovisioned a fresh host.
2. Disabled NetworkManager on the host.
3. Configured a static IP on the host.
4. Restarted the network service.
5. Installed the ovirt-hosted-engine-setup package on the host.
6. Added the additional host via WEBADMIN and it became active with a positive HA score.