Created attachment 1230733 [details]
/var/log/*

Description of problem:
After HostedEngine is deployed successfully, NetworkManager is still active (running), but it should be disabled.

Version-Release number of selected component (if applicable):
redhat-virtualization-host-4.0-20161206.0
imgbased-0.8.11-0.1.el7ev.noarch
selinux-policy-3.13.1-102.el7_3.7.noarch
ovirt-hosted-engine-setup-2.0.4.1-2.el7ev.noarch
ovirt-hosted-engine-ha-2.0.6-1.el7ev.noarch
vdsm-4.18.18-1.el7ev.x86_64
rhevm-appliance-20161130.0-1.el7ev.ova

How reproducible:
100%

Steps to Reproduce:
1. Install RHVH 4.0.
2. Log in to Cockpit and add a VLAN successfully.
3. Deploy HostedEngine over the VLAN tag (e.g. em3.50) via Cockpit.
4. Check the NetworkManager status with "systemctl status NetworkManager".

Actual results:
After step 3, HostedEngine is set up successfully.
After step 4, NetworkManager is still active (running).

Expected results:
After step 3, HostedEngine is set up successfully.
After step 4, NetworkManager is disabled.

Additional info:
After rebooting RHVH, NetworkManager is inactive.
Target release should be set once a package build is known to fix an issue. Since this bug is not in MODIFIED, the target version has been reset. Please use the target milestone to plan a fix for an oVirt release.
Moving this to the hosted-engine side, because I can imagine that the issue here is as follows: NetworkManager is getting D-Bus-activated because Cockpit is running and uses D-Bus to speak to NM. If we want to avoid this, then NetworkManager needs to be masked, not just disabled. This behavior is the same on RHVH and RHEL-H, and should thus probably be solved in hosted-engine or host-deploy.
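To illustrate the D-Bus-activation path: a system service file under /usr/share/dbus-1/system-services/ carries a SystemdService= key, so when any client (here, Cockpit) talks to the bus name, dbus-daemon asks systemd to start that unit even if it is "disabled". A sketch recreating the relevant file content in a temp file for inspection (the exact Exec= line may differ between distributions):

```shell
# Recreate (an approximation of) NetworkManager's D-Bus system service file;
# the SystemdService= key is what lets dbus-daemon start the unit on demand,
# bypassing "systemctl disable".
f=$(mktemp)
cat > "$f" <<'EOF'
[D-BUS Service]
Name=org.freedesktop.NetworkManager
Exec=/usr/sbin/NetworkManager --no-daemon
SystemdService=dbus-org.freedesktop.NetworkManager.service
EOF
grep '^SystemdService=' "$f"
```

Note that dbus-org.freedesktop.NetworkManager.service is exactly the alias symlink that "systemctl disable" removes in the host-deploy log below on this bug.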
Yihui, can you verify Fabian's suspicion? Could you make sure that all Cockpit sessions are terminated before you add the host to RHV?
(In reply to Dan Kenigsberg from comment #4)
> Yihui, can you verify Fabian's suspicion? Could you make sure that all of
> Cockpit sessions are terminated before you add the host to RHV?

Dan, I am a little confused: how could we deploy HE successfully via Cockpit while the Cockpit sessions are terminated?
So you mean we should not use Cockpit to set up HE, but instead run hosted-engine setup from the command line and then check the NM status?
Yihui, could you please also attach the host-deploy logs from /var/log/ovirt-engine/host-deploy/ on the engine VM?

The point is that, AFAIK, in otopi we just have support for systemctl disable, but not for systemctl mask.
Following an internal discussion, we are now considering moving this bug to otopi and changing its behavior as follows:
1. Plugin.services.startup(service, false) will also mask it
2. Plugin.services.startup(service, true) will also unmask it
3. Plugin.services.start(service, true) will also unmask it
4. Plugin.services.start(service, false) will not change the mask state

This will obviously apply only if using systemd. Other 'services' providers will not be changed.

The main flow we could come up with, for which it's not clear whether the above is the right thing to do, is as follows:
1. Suppose that a user starts from a current version, in which we do not handle this.
2. The user manually masks service X.
3. Then the user upgrades to a version that includes this change.
4. And runs some action Y that requires overriding the earlier mask.

Should we then:
1. Fail, and let the user fix it and try again?
2. Automatically force the change as described above, thus overriding the user's decision?
3. Ask the user? It's not always possible (or easy).

X can stand for NetworkManager (suppose that some future version of e.g. vdsm will _require_ it), but also for other services (say, vdsm), and Y can be e.g. host-deploy but also engine-setup.

Sandro, what do you think?
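The four proposed rules above could be sketched roughly as follows. This is a hypothetical illustration, not the actual otopi code: the class name and the `runner` callable are invented for the sketch, standing in for otopi's systemd services provider and its systemctl execution helper.

```python
# Hypothetical sketch of the proposed semantics for otopi's systemd
# services provider (NOT the real implementation).
class SystemdServices(object):
    """`runner` is any callable taking a systemctl argument list."""

    def __init__(self, runner):
        self._run = runner

    def startup(self, name, state):
        unit = '%s.service' % name
        if state:
            self._run(['unmask', unit])   # (2) enabling also unmasks
            self._run(['enable', unit])
        else:
            self._run(['disable', unit])  # (1) disabling also masks
            self._run(['mask', unit])

    def state(self, name, state):
        unit = '%s.service' % name
        if state:
            self._run(['unmask', unit])   # (3) starting also unmasks
            self._run(['start', unit])
        else:
            self._run(['stop', unit])     # (4) stopping leaves mask state alone
```

The open question in the flow above maps onto rule (3): the `unmask` before `start` is exactly the point where a user's manual mask would be silently overridden.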
(In reply to Yedidyah Bar David from comment #8)
> Sandro, what do you think?

Masking looks a bit too risky for a backport to 4.0.7. We can consider adding it in 4.1, but we need to be sure it's properly tested against regressions there. All services depending on the masked service will be affected, and for NetworkManager there are probably a lot of them.
Created attachment 1231018 [details] ovirt host deploy log on engine vm
(In reply to Simone Tiraboschi from comment #7)
> Yihui, could you please also attach host-deploy logs from
> /var/log/ovirt-engine/host-deploy/ on the engine VM?
>
> The point is that, AFAIK, in otopi we just have the support for systemctl
> disable but not for systemctl mask.

Hi Simone, the ovirt-host-deploy log from the engine VM is attached:
https://bugzilla.redhat.com/attachment.cgi?id=1231018
host-deploy correctly stopped and disabled NetworkManager, as expected:

2016-12-12 02:45:56 DEBUG otopi.plugins.otopi.services.systemd systemd.exists:73 check if service NetworkManager exists
2016-12-12 02:45:56 DEBUG otopi.plugins.otopi.services.systemd plugin.executeRaw:813 execute: ('/bin/systemctl', 'show', '-p', 'LoadState', 'NetworkManager.service'), executable='None', cwd='None', env=None
2016-12-12 02:45:56 DEBUG otopi.plugins.otopi.services.systemd plugin.executeRaw:863 execute-result: ('/bin/systemctl', 'show', '-p', 'LoadState', 'NetworkManager.service'), rc=0
2016-12-12 02:45:56 DEBUG otopi.plugins.otopi.services.systemd plugin.execute:921 execute-output: ('/bin/systemctl', 'show', '-p', 'LoadState', 'NetworkManager.service') stdout:
LoadState=loaded
2016-12-12 02:45:56 DEBUG otopi.plugins.otopi.services.systemd plugin.execute:926 execute-output: ('/bin/systemctl', 'show', '-p', 'LoadState', 'NetworkManager.service') stderr:
2016-12-12 02:45:56 DEBUG otopi.plugins.otopi.services.systemd systemd.state:130 stopping service NetworkManager
2016-12-12 02:45:56 DEBUG otopi.plugins.otopi.services.systemd plugin.executeRaw:813 execute: ('/bin/systemctl', 'stop', 'NetworkManager.service'), executable='None', cwd='None', env=None
2016-12-12 02:45:56 DEBUG otopi.plugins.otopi.services.systemd plugin.executeRaw:863 execute-result: ('/bin/systemctl', 'stop', 'NetworkManager.service'), rc=0
2016-12-12 02:45:56 DEBUG otopi.plugins.otopi.services.systemd plugin.execute:921 execute-output: ('/bin/systemctl', 'stop', 'NetworkManager.service') stdout:
2016-12-12 02:45:56 DEBUG otopi.plugins.otopi.services.systemd plugin.execute:926 execute-output: ('/bin/systemctl', 'stop', 'NetworkManager.service') stderr:
2016-12-12 02:45:56 DEBUG otopi.plugins.otopi.services.systemd systemd.startup:99 set service NetworkManager startup to False
2016-12-12 02:45:56 DEBUG otopi.plugins.otopi.services.systemd plugin.executeRaw:813 execute: ('/bin/systemctl', 'show', '-p', 'Id', 'NetworkManager.service'), executable='None', cwd='None', env=None
2016-12-12 02:45:56 DEBUG otopi.plugins.otopi.services.systemd plugin.executeRaw:863 execute-result: ('/bin/systemctl', 'show', '-p', 'Id', 'NetworkManager.service'), rc=0
2016-12-12 02:45:56 DEBUG otopi.plugins.otopi.services.systemd plugin.execute:921 execute-output: ('/bin/systemctl', 'show', '-p', 'Id', 'NetworkManager.service') stdout:
Id=NetworkManager.service
2016-12-12 02:45:56 DEBUG otopi.plugins.otopi.services.systemd plugin.execute:926 execute-output: ('/bin/systemctl', 'show', '-p', 'Id', 'NetworkManager.service') stderr:
2016-12-12 02:45:56 DEBUG otopi.plugins.otopi.services.systemd plugin.executeRaw:813 execute: ('/bin/systemctl', 'disable', u'NetworkManager.service'), executable='None', cwd='None', env=None
2016-12-12 02:45:57 DEBUG otopi.plugins.otopi.services.systemd plugin.executeRaw:863 execute-result: ('/bin/systemctl', 'disable', u'NetworkManager.service'), rc=0
2016-12-12 02:45:57 DEBUG otopi.plugins.otopi.services.systemd plugin.execute:921 execute-output: ('/bin/systemctl', 'disable', u'NetworkManager.service') stdout:
2016-12-12 02:45:57 DEBUG otopi.plugins.otopi.services.systemd plugin.execute:926 execute-output: ('/bin/systemctl', 'disable', u'NetworkManager.service') stderr:
Removed symlink /etc/systemd/system/dbus-org.freedesktop.NetworkManager.service.
Removed symlink /etc/systemd/system/dbus-org.freedesktop.nm-dispatcher.service.
Removed symlink /etc/systemd/system/multi-user.target.wants/NetworkManager.service.

The point is that "systemctl disable" is not strong enough: a disabled service can still be started by D-Bus activation, which is what happened with NetworkManager in this case.
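For contrast, "systemctl mask" symlinks the unit file to /dev/null under /etc/systemd/system, so systemd refuses every start path, including D-Bus activation. A sketch of that effect, simulated against a scratch directory rather than a live host's /etc/systemd/system:

```shell
# Simulate the effect of "systemctl mask NetworkManager.service":
# the unit name is linked to /dev/null, so systemd cannot load it at all.
unitdir=$(mktemp -d)                                 # stand-in for /etc/systemd/system
ln -sf /dev/null "$unitdir/NetworkManager.service"   # this is what "mask" creates
readlink "$unitdir/NetworkManager.service"           # -> /dev/null
```

This is also why plain "disable" falls short here: it only removes the [Install]-section symlinks seen in the log above, leaving the unit loadable on demand.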
This is the correct behavior in 4.1, right? We do not have issues other than the known bugs on having it side by side with VDSM?
(In reply to Yaniv Dary from comment #13)
> This is the correct behavior in 4.1 right? We do not have issues other than
> the known bugs on having it side by side with VDSM?

Correct.