Description of problem:
-----------------------
glusterd port was not opened after automatically configuring the firewall

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
RHEV 3.6
ovirt-host-deploy-1.4.0-1.el7ev.noarch
ovirt-hosted-engine-ha-1.3.2.1-1.el7ev.noarch

How reproducible:
-----------------
Always

Steps to Reproduce:
-------------------
1. Deploy the hosted engine on a host with automatic/default firewall configuration
2. Check whether the glusterd port (24007) is open

Actual results:
---------------
glusterd port is not opened by default

Expected results:
-----------------
glusterd port should be opened

Additional info:
----------------
glusterfs brick ports are opened

Current rules:

# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source      destination
ACCEPT     all  --  anywhere    anywhere    state RELATED,ESTABLISHED
ACCEPT     icmp --  anywhere    anywhere
ACCEPT     all  --  anywhere    anywhere
ACCEPT     tcp  --  anywhere    anywhere    tcp dpt:54321
ACCEPT     tcp  --  anywhere    anywhere    tcp dpt:sunrpc
ACCEPT     udp  --  anywhere    anywhere    udp dpt:sunrpc
ACCEPT     tcp  --  anywhere    anywhere    tcp dpt:ssh
ACCEPT     udp  --  anywhere    anywhere    udp dpt:snmp
ACCEPT     tcp  --  anywhere    anywhere    tcp dpt:16514
ACCEPT     tcp  --  anywhere    anywhere    multiport dports rockwell-csp2
ACCEPT     tcp  --  anywhere    anywhere    multiport dports rfb:6923
ACCEPT     tcp  --  anywhere    anywhere    multiport dports 49152:49216
REJECT     all  --  anywhere    anywhere    reject-with icmp-host-prohibited

Chain FORWARD (policy ACCEPT)
target     prot opt source      destination
REJECT     all  --  anywhere    anywhere    PHYSDEV match ! --physdev-is-bridged reject-with icmp-host-prohibited

Chain OUTPUT (policy ACCEPT)
target     prot opt source      destination
In HC mode, gluster firewall rules also need to be applied; currently only virt rules are applied.
Workaround: avoid having hosted-engine-setup configure iptables for you and do it manually, answering "No" to the prompt: iptables was detected on your computer, do you wish setup to configure it? (Yes, No)[Yes]:
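As a sketch of that manual step (rule position matters: the ACCEPT must land before the final REJECT rule, hence the insert rather than append; persistence via iptables-services is an assumption about the host):

```
# Open the glusterd management port (24007/tcp); the brick ports
# (49152:49216 by default) are already opened, per the rules above.
iptables -I INPUT -p tcp -m state --state NEW -m tcp --dport 24007 -j ACCEPT
# Persist across reboots on RHEL 7 with iptables-services installed:
service iptables save
```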
We should also handle a non-standard Gluster port range, where Gluster is used in a hyperconverged setup and the user follows the oVirt docs direction [1] of:

"By default gluster uses a port that vdsm also wants, so we need to change base-port setting avoiding the clash between the two daemons. We need to add option base-port 49217 to /etc/glusterfs/glusterd.vol and ensure glusterd service is enabled and started before proceeding."

[1] http://www.ovirt.org/Features/Self_Hosted_Engine_Hyper_Converged_Gluster_Support
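For reference, a minimal sketch of the resulting /etc/glusterfs/glusterd.vol stanza; only the base-port line is the change the docs ask for, the other options shown are illustrative of a stock file and may differ on a given host:

```
volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    option transport-type socket,rdma
    option base-port 49217
end-volume
```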
This can be done via a cloud-init script. Simone, can you help with pointers on how to do this?
(In reply to Sahina Bose from comment #4)
> can be done via cloud-init script. Simone, can you help with pointers on how
> to do this?

No, cloud-init is used to provide the initial configuration of the engine VM, while here the issue is the initial configuration of the firewall on the host, so we need a patch on hosted-engine-setup.
(In reply to Simone Tiraboschi from comment #5)
> (In reply to Sahina Bose from comment #4)
> > can be done via cloud-init script. Simone, can you help with pointers on how
> > to do this?
>
> No, cloud-init is used to provide the initial configuration of the engine
> VM while here the issue is on the initial configuration of the firewall on
> the host so we need a patch on hosted-engine-setup.

Is it possible to change the database option in the engine via cloud-init? What we need to do is enable the Gluster service on the "Default" cluster.
(In reply to Sahina Bose from comment #6)
> (In reply to Simone Tiraboschi from comment #5)
> > (In reply to Sahina Bose from comment #4)
> > > can be done via cloud-init script. Simone, can you help with pointers on how
> > > to do this?
> >
> > No, cloud-init is used to provide the initial configuration of the engine
> > VM while here the issue is on the initial configuration of the firewall on
> > the host so we need a patch on hosted-engine-setup.
>
> Is it possible to change the database option in engine via cloud-init -
> what we need to do is enable Gluster service on "Default" cluster.

No, it's not just that. Hosted-engine really is a chicken-or-egg dilemma: in order to run the engine on a VM, the host must be configured as it would be by the engine, before we have an engine. So, like a lot of other hosted-engine tasks, iptables configuration is a two-step process:
1. hosted-engine-setup has to configure iptables before creating the engine VM, to make it accessible (VNC, SPICE, vdsm, libvirt and, in your case, glusterd being on the same host).
2. Once we have an engine, hosted-engine-setup calls hosts.add on the engine to add the host via host-deploy; host-deploy then reconfigures the firewall according to what the engine asks.

Your idea addresses step 2, but here the issue is in step 1, since hosted-engine-setup configures iptables closing the gluster ports; this brings down the hosted-engine storage and so causes the issue. To address this we need a patch to hosted-engine-setup to add glusterd iptables rules in the @CUSTOM_RULES@ area.
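A sketch of the glusterd rules such a patch would need to emit into the @CUSTOM_RULES@ area of the hosted-engine-setup iptables template (24007 is the glusterd management port; the brick range shown is the default one and shifts if base-port is changed as discussed above):

```
# glusterd management port
-A INPUT -p tcp -m state --state NEW -m tcp --dport 24007 -j ACCEPT
# gluster brick ports (default range)
-A INPUT -p tcp -m state --state NEW -m multiport --dports 49152:49216 -j ACCEPT
```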
I also tried appending an answerfile with:

[environment:default]
NETWORK_FIREWALLD_SERVICE/hosted-glusterfs=str:<?xml version="1.0" encoding="utf-8"?><service> <short>hosted-glusterfs</short> <description>oVirt Hosted Engine glusterd service</description> <port protocol="tcp" port="111"/> <port protocol="udp" port="111"/> <port protocol="tcp" port="445"/> <port protocol="tcp" port="631"/> <port protocol="udp" port="963"/> <port protocol="tcp" port="965"/> <port protocol="tcp" port="2049"/></service>
OVEHOSTED_NETWORK/firewallManager=str:iptables

And this is enough to address the first configuration of iptables (point 1 in https://bugzilla.redhat.com/show_bug.cgi?id=1288979#c7); indeed we got:

2016-04-05 12:00:56 DEBUG otopi.context context.dumpEnvironment:510 ENV NETWORK/iptablesRules=str:'# Generated by ovirt-hosted-engine-setup installer
#filtering rules
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -p icmp -m icmp --icmp-type any -j ACCEPT
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 111 -j ACCEPT
-A INPUT -p udp -m state --state NEW -m udp --dport 111 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 445 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 631 -j ACCEPT
-A INPUT -p udp -m state --state NEW -m udp --dport 963 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 965 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 2049 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 5900 -j ACCEPT
-A INPUT -p udp -m state --state NEW -m udp --dport 5900 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 5901 -j ACCEPT
-A INPUT -p udp -m state --state NEW -m udp --dport 5901 -j ACCEPT
#drop all rule
-A INPUT -j REJECT --reject-with icmp-host-prohibited
COMMIT
'

The iptables configuration will then be overwritten by host-deploy when the engine tries to deploy the host (point 2).
Moving from 4.0 alpha to 4.0 beta since 4.0 alpha has already been released and the bug is not ON_QA.
oVirt 4.0 beta has been released, moving to RC milestone.
*** Bug 1356921 has been marked as a duplicate of this bug. ***
Simone, is there a way to change the Default cluster to enable the gluster service, to address point 2 in comment 7?
Ramesh, can you incorporate the custom script as per Comment 8 in the cockpit-gdeploy plugin?
Actually we are configuring the same ports 3 times in a hyperconverged Gluster-oVirt setup:

1. gdeploy configures all the required ports in firewalld while deploying gluster. This happens before hosted-engine-setup.
2. hosted-engine-setup configures iptables with all the required ports.
3. host-deploy configures the required ports while adding the host to the engine.

The first step is already taken care of by gdeploy. For 2, we need to pass an answer file as specified in comment #8; this can be done through the gdeploy plugin in cockpit-ovirt and will be transparent to the user. For 3, we need to enable 'Gluster Service' in the Default cluster before hosted-engine-setup adds the first host to the engine. Maybe we can do this via cloud-init configuration, but I am not sure; we need input from Simone.
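For step 1, the firewalld part of a gdeploy configuration can be expressed with its [firewalld] section. This is only a sketch: the exact port list must match the deployment (including any shifted brick base-port), and it is not taken from this bug:

```
[firewalld]
action=add
ports=24007/tcp,49152-49216/tcp,111/tcp,111/udp,2049/tcp
services=glusterfs
```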
The point is how to instruct the engine to also manage the gluster service on the hosts. This information is managed in the engine at two distinct levels:
1. application level
2. cluster level

At the application level we can set the engine to manage virt, gluster or both; we can control this value from engine-setup, and so we can pass a value to engine-setup from cloud-init, but the default value is already 'both', so there is no point acting here since we are already fine.

Once the application mode is set to 'both' we act more specifically at the cluster level; unfortunately this is not managed by engine-setup and the default logic is a bit counter-intuitive: the gluster service will be activated for the default cluster if and only if the application mode is set to gluster only; if the application mode is set to virt only or to both, only the virt service will be activated:
https://gerrit.ovirt.org/gitweb?p=ovirt-engine.git;a=blob;f=packaging/setup/plugins/ovirt-engine-setup/ovirt-engine/config/appmode.py;h=702cf6a2858099bb32cc114963016d958dd92ae3;hb=refs/heads/master#l136

Two options here:
1. Patch engine-setup to change the default behavior, enabling gluster too on the default cluster if the application mode is set to both (which is the default mode). This has a lot of possible drawbacks since it changes the default behavior.
2. Patch ovirt-hosted-engine-setup to change the cluster capabilities via REST API before adding the first host.
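A minimal sketch of option 2, using only the Python standard library. The engine URL, credentials, and cluster id are placeholders, and real hosted-engine-setup code would use its own REST client; only the `<virt_service>`/`<gluster_service>` elements of the cluster resource come from the oVirt REST API:

```python
import base64
import urllib.request

ENGINE_API = "https://engine.example.com/ovirt-engine/api"  # placeholder


def cluster_services_payload(virt=True, gluster=True):
    """Build the <cluster> XML body for a PUT on /clusters/<id>
    that toggles the virt/gluster services."""
    return (
        "<cluster>"
        "<virt_service>{}</virt_service>"
        "<gluster_service>{}</gluster_service>"
        "</cluster>"
    ).format(str(virt).lower(), str(gluster).lower())


def enable_gluster_on_cluster(cluster_id, user, password):
    """PUT the payload to the engine REST API (not executed here)."""
    auth = base64.b64encode("{}:{}".format(user, password).encode()).decode()
    req = urllib.request.Request(
        "{}/clusters/{}".format(ENGINE_API, cluster_id),
        data=cluster_services_payload().encode(),
        headers={
            "Content-Type": "application/xml",
            "Authorization": "Basic " + auth,
        },
        method="PUT",
    )
    return urllib.request.urlopen(req)
```

Doing this before hosts.add means host-deploy will open the gluster ports when it reconfigures the firewall in step 2.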
(In reply to Simone Tiraboschi from comment #18)
> Two options here:
> 1. patch engine-setup to change the default behavior enabling also gluster
> on the default cluster if the application mode is set on both (which is the
> default mode). This has a lot of possible drawbacks since it's changing
> the default behavior.
> 2. patch ovirt-hosted-engine-setup to change the cluster capabilities via
> REST API before adding the first host.

Patch https://gerrit.ovirt.org/#/c/70670 implements the first proposal, https://gerrit.ovirt.org/#/c/70685 the second. The second proposal seems less risky.
oVirt 4.1.0 GA has been released, re-targeting to 4.1.1. Please check if this issue is correctly targeted or already included in 4.1.0.
*** Bug 1370141 has been marked as a duplicate of this bug. ***
Tested with RHV 4.1.1-6. When the new host is added to a cluster which is capable of gluster + virt, the glusterd port is configured open.
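A quick way to repeat this verification on a host (assuming iptables is the firewall manager; on a firewalld-managed host, `firewall-cmd --list-ports` and `firewall-cmd --list-services` would be the equivalent check):

```
# Expect an ACCEPT rule for 24007/tcp in the INPUT chain
iptables -L INPUT -n | grep 24007
```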