Created attachment 1120905 [details]
iptables -nvL

Description of problem:
During the initial installation of oVirt Hosted Engine using the appliance and an answer file, if firewalld is selected as the OVEHOSTED_NETWORK/firewallManager (e.g. OVEHOSTED_NETWORK/firewallManager=str:firewalld), adding an additional oVirt host fails.

Version-Release number of selected component (if applicable):
glusterfs-3.7.6-1.el7.x86_64
glusterfs-api-3.7.6-1.el7.x86_64
glusterfs-cli-3.7.6-1.el7.x86_64
glusterfs-client-xlators-3.7.6-1.el7.x86_64
glusterfs-fuse-3.7.6-1.el7.x86_64
glusterfs-geo-replication-3.7.6-1.el7.x86_64
glusterfs-libs-3.7.6-1.el7.x86_64
glusterfs-server-3.7.6-1.el7.x86_64
libgovirt-0.3.3-1.el7.x86_64
ovirt-engine-appliance-3.6-20160126.1.el7.centos.noarch
ovirt-engine-sdk-python-3.6.2.1-1.el7.centos.noarch
ovirt-host-deploy-1.4.1-1.el7.centos.noarch
ovirt-hosted-engine-ha-1.3.3.7-1.el7.centos.noarch
ovirt-hosted-engine-setup-1.3.2.3-1.el7.centos.noarch
ovirt-setup-lib-1.0.1-1.el7.centos.noarch
ovirt-vmconsole-1.0.0-1.el7.centos.noarch
ovirt-vmconsole-host-1.0.0-1.el7.centos.noarch
vdsm-4.17.18-0.el7.centos.noarch
vdsm-cli-4.17.18-0.el7.centos.noarch
vdsm-gluster-4.17.18-0.el7.centos.noarch
vdsm-hook-vmfex-dev-4.17.18-0.el7.centos.noarch
vdsm-infra-4.17.18-0.el7.centos.noarch
vdsm-jsonrpc-4.17.18-0.el7.centos.noarch
vdsm-python-4.17.18-0.el7.centos.noarch
vdsm-xmlrpc-4.17.18-0.el7.centos.noarch
vdsm-yajsonrpc-4.17.18-0.el7.centos.noarch

How reproducible:
Every time

Steps to Reproduce:
1. Install the oVirt appliance.
2. Once oVirt HE is successfully installed on the initial node, begin the install on a freshly imaged (CentOS 7.2) additional (second) node.
3. On the second node: systemctl stop firewalld; setenforce 0; yum install -y ovirt-hosted-engine-setup; hosted-engine --deploy

Actual results:
1. [ERROR] Failed to execute stage 'Closing up': VDSM did not start within 120 seconds.
2. [ERROR] Hosted Engine deployment failed: this system is not reliable, please check the issue, fix and redeploy.

Expected results:
1. The additional oVirt host is added to the pool successfully.

Additional info:
SELinux is in permissive mode.
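For reference, a minimal sketch of the relevant answer-file lines is below. The file path and section header are assumptions based on a typical ovirt-hosted-engine-setup layout, not taken from the attached logs; verify against your own deployment.

```ini
# e.g. /etc/ovirt-hosted-engine/answers.conf (path is an assumption)
[environment:default]
OVEHOSTED_NETWORK/firewallManager=str:firewalld
```

The deploy can then consume the file with something like `hosted-engine --deploy --config-append=<answer-file>` (check the exact flag name against your ovirt-hosted-engine-setup version).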
Created attachment 1120907 [details] /var/log/vdsm/mom.log hosted-engine --deploy additional host installation fails when using iptables
Created attachment 1120908 [details] /var/log/vdsm/vdsm.log hosted-engine --deploy additional host installation fails when using iptables
Created attachment 1120909 [details] /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup.log hosted-engine --deploy additional host installation fails when using iptables
Comment on attachment 1120905 [details] iptables -nvL view of iptables rules after the setup fails/exits
(In reply to Charlie Inglese from comment #0)
> Created attachment 1120905 [details]
> iptables -nvL
>
> Description of problem:
> During initial installation of oVirt Hosted Engine using the appliance and
> answer file, if firewalld is selected as the
> OVEHOSTED_NETWORK/firewallManager (e.g.
> OVEHOSTED_NETWORK/firewallManager=str:firewalld), addition of the initial
> oVirt host fails.

Description of problem:
Installation of an additional oVirt HE node fails because the VDSM connection is refused due to the iptables configuration.
(In reply to Charlie Inglese from comment #0)
> Created attachment 1120905 [details]
> iptables -nvL
>
> Description of problem:
> During initial installation of oVirt Hosted Engine using the appliance and
> answer file, if firewalld is selected as the
> OVEHOSTED_NETWORK/firewallManager (e.g.
> OVEHOSTED_NETWORK/firewallManager=str:firewalld), addition of the initial
> oVirt host fails.

The summary says iptables, but here you mention firewalld. Which is it? Isn't this a duplicate of bug 1304445?
After more troubleshooting, it appears that this isn't iptables related. When using GlusterFS (OVESETUP_CONFIG/applicationMode=both), the additional nodes require the gluster RPMs to be installed prior to oVirt setup being executed. I'm attaching the ovirt-host-deploy log from ovirt-engine showing the error being encountered.

2016-02-04 08:30:44 DEBUG otopi.context context._executeMethod:142 Stage closeup METHOD otopi.plugins.ovirt_host_deploy.gluster.packages.Plugin._closeup
2016-02-04 08:30:44 INFO otopi.plugins.ovirt_host_deploy.gluster.packages packages._closeup:92 Starting gluster
2016-02-04 08:30:44 DEBUG otopi.plugins.otopi.services.systemd systemd.state:145 stopping service glusterd
2016-02-04 08:30:44 DEBUG otopi.plugins.otopi.services.systemd plugin.executeRaw:828 execute: ('/bin/systemctl', 'stop', 'glusterd.service'), executable='None', cwd='None', env=None
2016-02-04 08:30:44 DEBUG otopi.plugins.otopi.services.systemd plugin.executeRaw:878 execute-result: ('/bin/systemctl', 'stop', 'glusterd.service'), rc=5
2016-02-04 08:30:44 DEBUG otopi.plugins.otopi.services.systemd plugin.execute:936 execute-output: ('/bin/systemctl', 'stop', 'glusterd.service') stdout:
2016-02-04 08:30:44 DEBUG otopi.plugins.otopi.services.systemd plugin.execute:941 execute-output: ('/bin/systemctl', 'stop', 'glusterd.service') stderr:
Failed to stop glusterd.service: Unit glusterd.service not loaded.
2016-02-04 08:30:44 DEBUG otopi.context context._executeMethod:156 method exception
Traceback (most recent call last):
  File "/tmp/ovirt-F0vOHFRmv9/pythonlib/otopi/context.py", line 146, in _executeMethod
    method['method']()
  File "/tmp/ovirt-F0vOHFRmv9/otopi-plugins/ovirt-host-deploy/gluster/packages.py", line 94, in _closeup
    self.services.state('glusterd', state)
  File "/tmp/ovirt-F0vOHFRmv9/otopi-plugins/otopi/services/systemd.py", line 156, in state
    service=name,
RuntimeError: Failed to stop service 'glusterd'
2016-02-04 08:30:44 ERROR otopi.context context._executeMethod:165 Failed to execute stage 'Closing up': Failed to stop service 'glusterd'
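Given the "Unit glusterd.service not loaded" error above, a possible workaround is to install and enable the gluster packages on the additional node before running the deploy. This is an untested sketch; the package names are taken from the version list in comment #0, and the exact set your setup needs may differ.

```shell
# Workaround sketch (untested): make sure glusterd.service exists and is
# running on the additional node before hosted-engine --deploy, so the
# host-deploy gluster closeup stage can manage the unit instead of
# failing with rc=5 ("Unit glusterd.service not loaded").
yum install -y glusterfs-server vdsm-gluster
systemctl enable --now glusterd
systemctl status glusterd --no-pager   # confirm the unit is loaded and active
hosted-engine --deploy
```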
Created attachment 1121183 [details] ovirt-host-deploy log