Bug 1288979 - [HC] glusterd port was not opened, when automatically configuring firewall in hosted-engine setup
Summary: [HC] glusterd port was not opened, when automatically configuring firewall in hosted-engine setup
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: ovirt-hosted-engine-setup
Classification: oVirt
Component: General
Version: 1.3.0
Hardware: x86_64
OS: Linux
Priority: high
Severity: high with 1 vote
Target Milestone: ovirt-4.1.1
Target Release: 2.1.0.2
Assignee: Simone Tiraboschi
QA Contact: SATHEESARAN
URL:
Whiteboard:
Duplicates: 1356921 1370141
Depends On:
Blocks: Gluster-HC-2
 
Reported: 2015-12-07 07:17 UTC by SATHEESARAN
Modified: 2017-04-21 09:46 UTC (History)
13 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Glusterd port was not opened when automatically configuring the firewall in hosted-engine setup, but the glusterd port is required for a hyper-converged setup. This is now fixed.
Clone Of:
Environment:
RHEV + RHGS ( Hyperconverged Infra )
Last Closed: 2017-04-21 09:46:27 UTC
oVirt Team: Gluster
Embargoed:
rule-engine: ovirt-4.1+
rule-engine: planning_ack+
sbonazzo: devel_ack+
rule-engine: testing_ack+


Attachments


Links
System ID Private Priority Status Summary Last Updated
oVirt gerrit 70670 0 master ABANDONED packaging: setup: change the default cluster appmode behavior 2017-01-17 17:16:33 UTC
oVirt gerrit 70685 0 master MERGED hc-gluster: optionally enabling the gluster service 2017-02-02 12:43:22 UTC
oVirt gerrit 71573 0 ovirt-hosted-engine-setup-2.1 MERGED hc-gluster: optionally enabling the gluster service 2017-02-08 20:05:29 UTC

Description SATHEESARAN 2015-12-07 07:17:39 UTC
Description of problem:
-----------------------
glusterd port was not opened after automatically configuring firewall

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
RHEV 3.6
ovirt-host-deploy-1.4.0-1.el7ev.noarch
ovirt-hosted-engine-ha-1.3.2.1-1.el7ev.noarch

How reproducible:
-----------------
Always

Steps to Reproduce:
-------------------
1. Deploy the hosted engine on a host with automatic/default firewall configuration

2. Check whether the glusterd port (24007) is open
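One way to check, as a minimal sketch (the exact rule set depends on the host configuration):

# look for an ACCEPT rule for the glusterd management port
iptables -L -n | grep 24007
# and/or check whether anything is listening on it
ss -tln | grep 24007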

Actual results:
---------------
glusterd port not opened by default

Expected results:
-----------------
glusterd port should be opened

Additional info:
----------------
glusterfs brick ports are opened

Current rules available,
# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         
ACCEPT     all  --  anywhere             anywhere             state RELATED,ESTABLISHED
ACCEPT     icmp --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere            
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:54321
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:sunrpc
ACCEPT     udp  --  anywhere             anywhere             udp dpt:sunrpc
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:ssh
ACCEPT     udp  --  anywhere             anywhere             udp dpt:snmp
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:16514
ACCEPT     tcp  --  anywhere             anywhere             multiport dports rockwell-csp2
ACCEPT     tcp  --  anywhere             anywhere             multiport dports rfb:6923
ACCEPT     tcp  --  anywhere             anywhere             multiport dports 49152:49216
REJECT     all  --  anywhere             anywhere             reject-with icmp-host-prohibited

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         
REJECT     all  --  anywhere             anywhere             PHYSDEV match ! --physdev-is-bridged reject-with icmp-host-prohibited

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

Comment 1 Sahina Bose 2015-12-08 06:19:07 UTC
In HC mode, gluster firewall rules also need to be applied - currently only virt rules are applied.

Comment 2 Simone Tiraboschi 2015-12-22 15:25:55 UTC
Workaround: avoid having hosted-engine-setup configure iptables for you and do it manually, responding No to:

          iptables was detected on your computer, do you wish setup to configure it? (Yes, No)[Yes]:
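If the gluster ports are not already open on the host, a minimal sketch of opening them manually (assuming the default glusterd management port 24007 and the brick range shown in the report above; "service iptables save" assumes the iptables-services package on EL7):

# open the glusterd management port and the brick port range
iptables -I INPUT -p tcp -m state --state NEW -m tcp --dport 24007 -j ACCEPT
iptables -I INPUT -p tcp -m state --state NEW -m tcp --dport 49152:49216 -j ACCEPT
# persist the rules across reboots
service iptables save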

Comment 3 Will Dennis 2015-12-30 03:34:18 UTC
This should also handle a non-standard Gluster port range, where Gluster is used in a hyperconverged setup and the user follows the oVirt docs' direction[1] of:

“By default gluster uses a port that vdsm also wants, so we need to change base-port setting avoiding the clash between the two daemons. We need to add

option base-port 49217

to /etc/glusterfs/glusterd.vol

and ensure glusterd service is enabled and started before proceeding.”


[1] http://www.ovirt.org/Features/Self_Hosted_Engine_Hyper_Converged_Gluster_Support
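For an EL7 host, a rough sketch of applying the quoted change (note that the option must go inside the existing "volume management" block of /etc/glusterfs/glusterd.vol, not appended at the end of the file):

# edit /etc/glusterfs/glusterd.vol and add, inside the "volume management" block:
#     option base-port 49217
# then make sure glusterd is enabled and running:
systemctl enable glusterd
systemctl restart glusterd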

Comment 4 Sahina Bose 2016-03-31 09:27:02 UTC
can be done via cloud-init script. Simone, can you help with pointers on how to do this?

Comment 5 Simone Tiraboschi 2016-03-31 09:42:34 UTC
(In reply to Sahina Bose from comment #4)
> can be done via cloud-init script. Simone, can you help with pointers on how
> to do this?

No, cloud-init is used to provide the initial configuration of the engine VM, while here the issue is with the initial configuration of the firewall on the host, so we need a patch to hosted-engine-setup.

Comment 6 Sahina Bose 2016-03-31 12:23:54 UTC
(In reply to Simone Tiraboschi from comment #5)
> (In reply to Sahina Bose from comment #4)
> > can be done via cloud-init script. Simone, can you help with pointers on how
> > to do this?
> 
> No, cloud-init it's used to provide the initial configuration of the engine
> VM while here the issue is on the initial configuration of the firewall on
> the host so we need a patch on hosted-engine-setup.

Is it possible to change the database option in the engine via cloud-init? What we need to do is enable the Gluster service on the "Default" cluster.

Comment 7 Simone Tiraboschi 2016-03-31 14:51:46 UTC
(In reply to Sahina Bose from comment #6)
> (In reply to Simone Tiraboschi from comment #5)
> > (In reply to Sahina Bose from comment #4)
> > > can be done via cloud-init script. Simone, can you help with pointers on how
> > > to do this?
> > 
> > No, cloud-init it's used to provide the initial configuration of the engine
> > VM while here the issue is on the initial configuration of the firewall on
> > the host so we need a patch on hosted-engine-setup.
> 
> Is it possible to change the database option in engine via cloud-init  -
> what we need to do is enable Gluster service on "Default" cluster.

No, it's not just that.
Hosted-engine is really a chicken-and-egg dilemma: in order to run the engine on a VM, the host must be configured as the engine would configure it, before there is an engine.

So, like a lot of other hosted-engine tasks, iptables configuration is a two-step process:
1. hosted-engine-setup has to configure iptables before creating the engine VM to make it accessible (VNC, spice, vdsm, libvirt and, in your case, glusterd being on the same host)
2. when we have an engine, hosted-engine-setup will call hosts.add on the engine to add the host via host-deploy. host-deploy will reconfigure the firewall according to what the engine asks.

Your idea addresses step 2, but here the issue is in step 1: hosted-engine-setup will configure iptables closing the gluster ports, which will bring down the hosted-engine storage and so cause the issue.

To address this we need a patch to hosted-engine-setup to add glusterd iptables rules in the @CUSTOM_RULES@ area.
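For illustration only, the glusterd rules that would need to land in that area would look roughly like this (assuming the default management port 24007 and the brick range shown above):

-A INPUT -p tcp -m state --state NEW -m tcp --dport 24007 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 49152:49216 -j ACCEPT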

Comment 8 Simone Tiraboschi 2016-04-05 10:33:42 UTC
I tried appending also an answerfile with:

[environment:default]
NETWORK_FIREWALLD_SERVICE/hosted-glusterfs=str:<?xml version="1.0" encoding="utf-8"?><service>    <short>hosted-glusterfs</short>    <description>oVirt Hosted Engine glusterd service</description>    <port protocol="tcp" port="111"/>    <port protocol="udp" port="111"/>    <port protocol="tcp" port="445"/>    <port protocol="tcp" port="631"/>    <port protocol="udp" port="963"/>    <port protocol="tcp" port="965"/>    <port protocol="tcp" port="2049"/></service>
OVEHOSTED_NETWORK/firewallManager=str:iptables

And this is enough to address the first configuration of iptables (point 1 in https://bugzilla.redhat.com/show_bug.cgi?id=1288979#c7 ); indeed we got:

2016-04-05 12:00:56 DEBUG otopi.context context.dumpEnvironment:510 ENV NETWORK/iptablesRules=str:'# Generated by ovirt-hosted-engine-setup installer
#filtering rules
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -p icmp -m icmp --icmp-type any -j ACCEPT
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 111 -j ACCEPT
-A INPUT -p udp -m state --state NEW -m udp --dport 111 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 445 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 631 -j ACCEPT
-A INPUT -p udp -m state --state NEW -m udp --dport 963 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 965 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 2049 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 5900 -j ACCEPT
-A INPUT -p udp -m state --state NEW -m udp --dport 5900 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 5901 -j ACCEPT
-A INPUT -p udp -m state --state NEW -m udp --dport 5901 -j ACCEPT

#drop all rule
-A INPUT -j REJECT --reject-with icmp-host-prohibited
COMMIT
'

Then the iptables configuration will be overwritten by host-deploy when the engine tries to deploy the host (point 2).
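For reference, an answer file like the one above would typically be appended to the deployment with the setup's --config-append option, e.g. (the file path here is just an example):

hosted-engine --deploy --config-append=/root/hc-answers.conf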

Comment 9 Sandro Bonazzola 2016-05-02 10:05:41 UTC
Moving from 4.0 alpha to 4.0 beta since 4.0 alpha has already been released and the bug is not ON_QA.

Comment 10 Yaniv Lavi 2016-05-23 13:20:14 UTC
oVirt 4.0 beta has been released, moving to RC milestone.

Comment 11 Yaniv Lavi 2016-05-23 13:23:48 UTC
oVirt 4.0 beta has been released, moving to RC milestone.

Comment 14 Sahina Bose 2016-12-22 06:28:42 UTC
*** Bug 1356921 has been marked as a duplicate of this bug. ***

Comment 15 Sahina Bose 2016-12-22 06:32:27 UTC
Simone, is there a way to change the Default cluster to enable gluster service to address point 2 in comment 7?

Comment 16 Sahina Bose 2016-12-22 06:34:16 UTC
Ramesh, can you incorporate the custom script as per Comment 8 in the cockpit-gdeploy plugin?

Comment 17 Ramesh N 2016-12-22 07:23:33 UTC
Actually we are configuring the same ports 3 times in a Hyperconverged Gluster-oVirt setup.

1. Gdeploy configures all the required ports in firewalld while deploying gluster. This happens before 'hosted-engine-setup'
2. hosted-engine-setup configures iptables with all the required ports.
3. host-deploy configures the required ports while adding the host to the engine.

The first step is already taken care of by gdeploy.

For 2, we need to pass an answer file as specified in comment#8. This can be done through the gdeploy plugin in cockpit-ovirt. This will be transparent to the user.

For 3, we need to enable 'Gluster Service' in the Default cluster before hosted-engine-setup adds the first host to the engine. Maybe we can do this via cloud-init configuration, but I am not sure. We need input from Simone.

Comment 18 Simone Tiraboschi 2017-01-17 14:42:15 UTC
The point is how to instruct the engine to also manage the gluster service on the hosts.
This information is managed in the engine at two distinct levels:
1. application level
2. cluster level

At the application level we can set the engine to manage virt, gluster, or both; we can control this value from engine-setup, and so we could pass a value to engine-setup via cloud-init, but the default value is already 'both', so there is nothing to change here since we are already fine.

Once the application mode is set to 'both' we act more specifically at the cluster level; unfortunately this is not managed by engine-setup and the default logic is a bit counter-intuitive:
the gluster service will be activated for the default cluster if and only if the application mode is set to gluster only; if the application mode is set to virt only or to both, only the virt service will be activated:
https://gerrit.ovirt.org/gitweb?p=ovirt-engine.git;a=blob;f=packaging/setup/plugins/ovirt-engine-setup/ovirt-engine/config/appmode.py;h=702cf6a2858099bb32cc114963016d958dd92ae3;hb=refs/heads/master#l136

Two options here:
1. patch engine-setup to change the default behavior, also enabling gluster on the default cluster if the application mode is set to both (which is the default mode). This has a lot of possible drawbacks since it changes the default behavior.
2. patch ovirt-hosted-engine-setup to change the cluster capabilities via REST API before adding the first host.
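For the second option, a rough sketch of the REST API call involved (engine URL, credentials and cluster id are placeholders; the API exposes gluster_service/virt_service flags on the cluster):

curl -k -u 'admin@internal:password' \
     -X PUT \
     -H 'Content-Type: application/xml' \
     -d '<cluster><gluster_service>true</gluster_service><virt_service>true</virt_service></cluster>' \
     'https://engine.example.com/ovirt-engine/api/clusters/<cluster-id>'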

Comment 19 Simone Tiraboschi 2017-01-17 16:20:50 UTC
(In reply to Simone Tiraboschi from comment #18)
> Two options here:
> 1. patch engine-setup to change the default behavior enabling also gluster
> on the default cluster if the application mode is set on both (which is the
> default mode). This has a lot of possible drawbacks since it's changing a
> the default behavior.
> 2. patch ovirt-hosted-engine-setup to change the cluster capabilities via
> REST API before adding the first host.

Patch https://gerrit.ovirt.org/#/c/70670 implements the first proposal, https://gerrit.ovirt.org/#/c/70685 the second.

The second proposal seems less risky.

Comment 20 Sandro Bonazzola 2017-02-01 16:02:01 UTC
oVirt 4.1.0 GA has been released, re-targeting to 4.1.1.
Please check if this issue is correctly targeted or already included in 4.1.0.

Comment 21 Sahina Bose 2017-02-21 10:34:10 UTC
*** Bug 1370141 has been marked as a duplicate of this bug. ***

Comment 22 SATHEESARAN 2017-04-03 11:06:16 UTC
Tested with RHV 4.1.1-6 

When the new host is added to a cluster that is capable of gluster + virt, the glusterd port is opened as part of the firewall configuration.

