Bug 988075 - deploy fail if firewalld service cannot be started

| Field | Value |
|---|---|
| Product | [oVirt] otopi |
| Component | Plugins.network |
| Status | CLOSED CURRENTRELEASE |
| Severity | high |
| Priority | unspecified |
| Version | master |
| Target Milestone | --- |
| Target Release | 1.1.0 |
| Hardware | Unspecified |
| OS | Unspecified |
| Whiteboard | infra |
| Fixed In Version | is7 |
| Doc Type | Bug Fix |
| Reporter | Pavel Stehlik <pstehlik> |
| Assignee | Alon Bar-Lev <alonbl> |
| QA Contact | Pavel Stehlik <pstehlik> |
| CC | acathrow, alonbl, bazulay, dougsland, ecohen, iheim, mgoldboi, sbonazzo, yeylon |
| Story Points | --- |
| Last Closed | 2013-09-23 07:32:03 UTC |
| Type | Bug |
| Regression | --- |
| oVirt Team | Infra |
| Cloudforms Team | --- |
| Attachments | engine-ovirt.logs.tgz (attachment 777865) |
Comment (Alon Bar-Lev):
Yes, firewalld, like other 'new' components, is very hard to interact with... to query its version we must start it... and from what I see in this log, it fails to start. I will just ignore firewalld if it fails to start.

Comment (Alon Bar-Lev):
On second thought... I am unsure we should even use firewalld if it is down.

Comment (Itamar Heim):
(In reply to Alon Bar-Lev from comment #2)
> On second thought... I am unsure we should even use firewalld if it is down.

Should we care if it's configured to start at next boot, rather than only if it is currently running?

Comment (Alon Bar-Lev):
(In reply to Itamar Heim from comment #3)
> should we care if its configured to start at next boot rather than only if
> currently running?

In the case of these dbus daemons (systemd, firewalld, ...), apart from the dependencies there is one side effect: if the service is not running, you cannot communicate with it. This approach is very unwise because, for example, you cannot configure firewalld during rpm installation...

Comment:
Closing, as this should be in 3.3 (doing so in bulk, so it may be incorrect).
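The fix Alon describes ("just ignore firewalld if it fails to start") can be sketched as a small helper that attempts to start the service and reports failure instead of raising. This is a hypothetical illustration, not otopi's actual plugin API; the function name and the `start_cmd` parameter are assumptions made so the sketch is self-contained and testable.

```python
import subprocess

def firewalld_available(start_cmd=("systemctl", "start", "firewalld.service")):
    """Try to start firewalld; return True on success, False otherwise.

    Hypothetical sketch of the proposed behavior: since firewalld can only
    be queried over D-Bus while it is running, a failed start means the
    deploy should fall back (e.g. to iptables) rather than abort with a
    traceback, as happened in this bug.
    """
    try:
        # check_call raises CalledProcessError on a non-zero exit status
        # and OSError if the command itself cannot be executed.
        subprocess.check_call(start_cmd)
    except (OSError, subprocess.CalledProcessError):
        return False
    return True
```

The key design point is that both failure modes (service refuses to start, systemctl missing entirely, as when firewalld is uninstalled per the reproduction steps) collapse into "firewalld unavailable" instead of propagating an exception through the customization stage.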
Created attachment 777865 [details]
engine-ovirt.logs.tgz

Description of problem:
When adding a new host, "Advanced Parameters > Automatically configure host firewall" is NOT checked; however, the otopi installer tries to configure it anyway. This may be a matter of otopi; I can't say where the culprit is, since the log information is insufficient.

host-deploy.log:
================
2013-07-24 18:10:09 DEBUG otopi.plugins.otopi.services.systemd plugin.execute:446 execute-output: ('/bin/systemctl', 'start', 'firewalld.service') stderr:
Job for firewalld.service failed. See 'systemctl status firewalld.service' and 'journalctl -xn' for details.

2013-07-24 18:10:09 DEBUG otopi.context context._executeMethod:132 method exception
Traceback (most recent call last):
  File "/tmp/ovirt-rW1nPzPpM3/pythonlib/otopi/context.py", line 122, in _executeMethod
    method['method']()
  File "/tmp/ovirt-rW1nPzPpM3/otopi-plugins/otopi/network/firewalld.py", line 159, in _customization
    self._firewalld_version = self._get_firewalld_cmd_version()

Version-Release number of selected component (if applicable):
ovirt-engine-3.3.0-0.3.beta1.fc19.noarch

How reproducible:
100%

Steps to Reproduce:
1. Disable/uninstall firewalld on the host.
2. Add the host via webadmin and do not allow firewall configuration on the host.
3.
Actual results:

Expected results:

Additional info:

engine.log:
=================
2013-07-24 18:10:08,810 ERROR [org.ovirt.engine.core.utils.ssh.SSHDialog] (pool-6-thread-50) SSH error running command root.66.71:'umask 0077; MYTMP="$(mktemp -t ovirt-XXXXXXXXXX)"; trap "chmod -R u+rwX \"${MYTMP}\" > /dev/null 2>&1; rm -fr \"${MYTMP}\" > /dev/null 2>&1" 0; rm -fr "${MYTMP}" && mkdir "${MYTMP}" && tar --warning=no-timestamp -C "${MYTMP}" -x && "${MYTMP}"/setup DIALOG/dialect=str:machine DIALOG/customization=bool:True':
java.io.IOException: Command returned failure code 1 during SSH session 'root.66.71'
    at org.ovirt.engine.core.utils.ssh.SSHClient.executeCommand(SSHClient.java:507) [utils.jar:]
    at org.ovirt.engine.core.utils.ssh.SSHDialog.executeCommand(SSHDialog.java:311) [utils.jar:]
    at org.ovirt.engine.core.bll.VdsDeploy.execute(VdsDeploy.java:1028) [bll.jar:]
    at org.ovirt.engine.core.bll.InstallVdsCommand.installHost(InstallVdsCommand.java:167) [bll.jar:]
    at org.ovirt.engine.core.bll.InstallVdsCommand.executeCommand(InstallVdsCommand.java:98) [bll.jar:]
    at org.ovirt.engine.core.bll.CommandBase.executeWithoutTransaction(CommandBase.java:1128) [bll.jar:]
    at org.ovirt.engine.core.bll.CommandBase.executeActionInTransactionScope(CommandBase.java:1213) [bll.jar:]