Bug 1222421

Summary: Failed to deploy additional host due to unconfigured iptables
Product: Red Hat Enterprise Virtualization Manager
Reporter: rhev-integ
Component: ovirt-hosted-engine-setup
Assignee: Yedidyah Bar David <didi>
Status: CLOSED ERRATA
QA Contact: Artyom <alukiano>
Severity: high
Docs Contact:
Priority: high
Version: 3.5.1
CC: aburden, dfediuck, didi, ecohen, gklein, istein, jbelka, lsurette, nsednev, pstehlik, sbonazzo, sherold, ylavi
Target Milestone: ---
Keywords: Triaged, ZStream
Target Release: 3.5.3
Hardware: Unspecified
OS: Unspecified
Whiteboard: integration
Fixed In Version: ovirt-hosted-engine-setup-1.2.4-2.el6ev
Doc Type: Bug Fix
Doc Text:
Previously, in the Self-Hosted Engine 3.5.1, choosing to configure iptables during first host deployment did not configure iptables on additional host deployment. If the existing iptables configuration on the additional host did not allow VDSM to access the engine, deployment would fail. Now, choosing to configure iptables during first host deployment also correctly handles iptables configuration for additional host deployment, and deployment succeeds as expected.
Story Points: ---
Clone Of: 1221148
Environment:
Last Closed: 2015-06-15 13:17:31 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1221148    
Bug Blocks:    
Attachments (description, flags):
logs from blue (host being added as an additional host): none
sosreport-blue-vdsc.qa.lab.tlv.redhat.com-20150531210307.tar.xz: none

Comment 3 Yedidyah Bar David 2015-05-19 07:31:02 UTC
Edited the doc text; copying it also here, to record details that the doc team will not find relevant for the release notes:

Cause: 

Until 3.5.0, hosted-engine --deploy asked about iptables on the first host, and optionally configured it by itself; but it also always (for both the first host and additional ones) requested the engine to add the host, and in this request unconditionally asked the engine to configure iptables.

We had a bug (#1080823) to make this optional, which was fixed for 3.5.1. However, the fix wasn't perfect, and introduced the current bug: on additional host deploy, when requesting the engine to add the host, we never ask to configure iptables.

Consequence: 

On an additional-host hosted-engine --deploy, after the engine adds the host, it tries to connect to vdsm on it and fails, and the deploy aborts with an error message.

Fix: 

Now, if on first host deploy the user chose to configure iptables, then on additional host deploy we both configure iptables ourselves, and also, when requesting the engine to add the host, ask to configure iptables on it.

Result: 

The engine does not fail to connect to vdsm, and deploy succeeds.

Also:

1. Workaround (for 3.5.1 users): Before additional host deploy, manually configure iptables on it with the same rules as on the first host.

2. On additional host deploy, we do not ask the user about iptables, just output:
[ INFO  ] Additional host deployment, firewall manager is 'iptables'

3. This is controlled by a setting in the answer file, which is normally copied by deploy from the first host. So if a user chose to configure iptables on first host deploy, but wants additional ones to not configure, the user can edit /etc/ovirt-hosted-engine/answers.conf on the first host, and set there:
OVEHOSTED_NETWORK/firewallManager=none:None

Deployment on additional hosts, if the user accepts copying this answer file, will then not automatically configure iptables (actually I didn't check; I think it will ask).
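The answer-file edit described above can be sketched as a shell snippet. This is a hedged sketch: the sample entry value (the otopi-style `str:iptables`) is an assumption, and the edit is demonstrated on a copy under /tmp rather than on the live /etc/ovirt-hosted-engine/answers.conf.

```shell
# Sketch only: create a sample answer file with the assumed otopi-style entry.
# On a real first host, the file would be /etc/ovirt-hosted-engine/answers.conf.
printf 'OVEHOSTED_NETWORK/firewallManager=str:iptables\n' > /tmp/answers.conf

# Switch the firewall manager off so additional-host deploys do not
# automatically configure iptables.
sed -i 's|^OVEHOSTED_NETWORK/firewallManager=.*|OVEHOSTED_NETWORK/firewallManager=none:None|' /tmp/answers.conf

grep firewallManager /tmp/answers.conf
# prints: OVEHOSTED_NETWORK/firewallManager=none:None
```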

Comment 6 Nikolai Sednev 2015-05-31 18:09:31 UTC
[root@blue-vdsc ~]# rpm -qa vdsm libvirt* sanlock* qemu-kvm* ovirt* mom
libvirt-python-1.2.8-7.el7_1.1.x86_64
libvirt-daemon-driver-nodedev-1.2.8-16.el7_1.3.x86_64
mom-0.4.1-5.el7ev.noarch
vdsm-4.16.18-1.el7ev.x86_64
sanlock-3.2.2-2.el7.x86_64
sanlock-lib-3.2.2-2.el7.x86_64
sanlock-python-3.2.2-2.el7.x86_64
ovirt-host-deploy-1.3.0-2.el7ev.noarch
libvirt-client-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-driver-nwfilter-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-config-nwfilter-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-driver-interface-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-driver-secret-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-driver-qemu-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-driver-storage-1.2.8-16.el7_1.3.x86_64
ovirt-hosted-engine-ha-1.2.6-2.el7ev.noarch
qemu-kvm-rhev-2.1.2-23.el7_1.3.x86_64
libvirt-daemon-1.2.8-16.el7_1.3.x86_64
libvirt-lock-sanlock-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-driver-network-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-kvm-1.2.8-16.el7_1.3.x86_64
ovirt-hosted-engine-setup-1.2.4-2.el7ev.noarch
qemu-kvm-common-rhev-2.1.2-23.el7_1.3.x86_64

[root@blue-vdsc ~]# iptables -A INPUT -p tcp --dport 22 -j ACCEPT
[root@blue-vdsc ~]# iptables -A OUTPUT -p tcp --sport 22 -j ACCEPT
[root@blue-vdsc ~]# iptables -A INPUT -j DROP                     
[root@blue-vdsc ~]# iptables -A OUTPUT -j DROP                    
[root@blue-vdsc ~]# iptables -L
Chain INPUT (policy ACCEPT)    
target     prot opt source               destination
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:ssh
DROP       all  --  anywhere             anywhere

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     tcp  --  anywhere             anywhere             tcp spt:ssh
DROP       all  --  anywhere             anywhere
[root@blue-vdsc ~]# hosted-engine --deploy
[ INFO  ] Stage: Initializing
[ INFO  ] Generating a temporary VNC password.
[ INFO  ] Stage: Environment setup
          Continuing will configure this host for serving as hypervisor and create a VM where you have to install oVirt Engine afterwards.
          Are you sure you want to continue? (Yes, No)[Yes]:
          Configuration files: []
          Log file: /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20150531204600-uxf6cj.log
          Version: otopi-1.3.2 (otopi-1.3.2-1.el7ev)
          It has been detected that this program is executed through an SSH connection without using screen.
          Continuing with the installation may lead to broken installation if the network connection fails.
          It is highly recommended to abort the installation and run it inside a screen session using command "screen".
          Do you want to continue anyway? (Yes, No)[No]: yes
[ INFO  ] Hardware supports virtualization
[ INFO  ] Stage: Environment packages setup
[ INFO  ] Stage: Programs detection
[ INFO  ] Stage: Environment setup
[ INFO  ] Waiting for VDSM hardware info
[ INFO  ] Waiting for VDSM hardware info
[ INFO  ] Waiting for VDSM hardware info
[ INFO  ] Waiting for VDSM hardware info
[ INFO  ] Waiting for VDSM hardware info
[ INFO  ] Waiting for VDSM hardware info
[ INFO  ] Waiting for VDSM hardware info
[ INFO  ] Waiting for VDSM hardware info
[ INFO  ] Waiting for VDSM hardware info
[ INFO  ] Waiting for VDSM hardware info
[ INFO  ] Generating libvirt-spice certificates
[ ERROR ] Failed to execute stage 'Environment setup': timed out
[ INFO  ] Stage: Clean up
[ INFO  ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20150531205814.conf'
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
[root@blue-vdsc ~]#

Comment 7 Nikolai Sednev 2015-05-31 18:10:36 UTC
Created attachment 1032940 [details]
logs from blue (host being added as an additional host)

Comment 8 Nikolai Sednev 2015-05-31 18:35:19 UTC
Created attachment 1032951 [details]
sosreport-blue-vdsc.qa.lab.tlv.redhat.com-20150531210307.tar.xz

Comment 9 Yedidyah Bar David 2015-06-01 07:00:36 UTC
(In reply to Nikolai Sednev from comment #6)
> [root@blue-vdsc ~]# rpm -qa vdsm libvirt* sanlock* qemu-kvm* ovirt* mom
> libvirt-python-1.2.8-7.el7_1.1.x86_64
> libvirt-daemon-driver-nodedev-1.2.8-16.el7_1.3.x86_64
> mom-0.4.1-5.el7ev.noarch
> vdsm-4.16.18-1.el7ev.x86_64
> sanlock-3.2.2-2.el7.x86_64
> sanlock-lib-3.2.2-2.el7.x86_64
> sanlock-python-3.2.2-2.el7.x86_64
> ovirt-host-deploy-1.3.0-2.el7ev.noarch
> libvirt-client-1.2.8-16.el7_1.3.x86_64
> libvirt-daemon-driver-nwfilter-1.2.8-16.el7_1.3.x86_64
> libvirt-daemon-config-nwfilter-1.2.8-16.el7_1.3.x86_64
> libvirt-daemon-driver-interface-1.2.8-16.el7_1.3.x86_64
> libvirt-daemon-driver-secret-1.2.8-16.el7_1.3.x86_64
> libvirt-daemon-driver-qemu-1.2.8-16.el7_1.3.x86_64
> libvirt-daemon-driver-storage-1.2.8-16.el7_1.3.x86_64
> ovirt-hosted-engine-ha-1.2.6-2.el7ev.noarch
> qemu-kvm-rhev-2.1.2-23.el7_1.3.x86_64
> libvirt-daemon-1.2.8-16.el7_1.3.x86_64
> libvirt-lock-sanlock-1.2.8-16.el7_1.3.x86_64
> libvirt-daemon-driver-network-1.2.8-16.el7_1.3.x86_64
> libvirt-daemon-kvm-1.2.8-16.el7_1.3.x86_64
> ovirt-hosted-engine-setup-1.2.4-2.el7ev.noarch
> qemu-kvm-common-rhev-2.1.2-23.el7_1.3.x86_64
> 
> 
> 
> 
> 
> 
> [root@blue-vdsc ~]# iptables -A INPUT -p tcp --dport 22 -j ACCEPT
> [root@blue-vdsc ~]# iptables -A OUTPUT -p tcp --sport 22 -j ACCEPT
> [root@blue-vdsc ~]# iptables -A INPUT -j DROP                     
> [root@blue-vdsc ~]# iptables -A OUTPUT -j DROP                    

This means no outgoing connections are permitted (except ssh).

> [root@blue-vdsc ~]# iptables -L
> Chain INPUT (policy ACCEPT)    
> target     prot opt source               destination
> ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:ssh
> DROP       all  --  anywhere             anywhere
> 
> Chain FORWARD (policy ACCEPT)
> target     prot opt source               destination
> 
> Chain OUTPUT (policy ACCEPT)
> target     prot opt source               destination
> ACCEPT     tcp  --  anywhere             anywhere             tcp spt:ssh
> DROP       all  --  anywhere             anywhere
> [root@blue-vdsc ~]# hosted-engine --deploy
> [ INFO  ] Stage: Initializing
> [ INFO  ] Generating a temporary VNC password.
> [ INFO  ] Stage: Environment setup
>           Continuing will configure this host for serving as hypervisor and
> create a VM where you have to install oVirt Engine afterwards.
>           Are you sure you want to continue? (Yes, No)[Yes]:
>           Configuration files: []
>           Log file:
> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20150531204600-
> uxf6cj.log
>           Version: otopi-1.3.2 (otopi-1.3.2-1.el7ev)
>           It has been detected that this program is executed through an SSH
> connection without using screen.
>           Continuing with the installation may lead to broken installation
> if the network connection fails.
>           It is highly recommended to abort the installation and run it
> inside a screen session using command "screen".
>           Do you want to continue anyway? (Yes, No)[No]: yes
> [ INFO  ] Hardware supports virtualization
> [ INFO  ] Stage: Environment packages setup
> [ INFO  ] Stage: Programs detection
> [ INFO  ] Stage: Environment setup
> [ INFO  ] Waiting for VDSM hardware info
> [ INFO  ] Waiting for VDSM hardware info
> [ INFO  ] Waiting for VDSM hardware info
> [ INFO  ] Waiting for VDSM hardware info
> [ INFO  ] Waiting for VDSM hardware info
> [ INFO  ] Waiting for VDSM hardware info
> [ INFO  ] Waiting for VDSM hardware info
> [ INFO  ] Waiting for VDSM hardware info
> [ INFO  ] Waiting for VDSM hardware info
> [ INFO  ] Waiting for VDSM hardware info
> [ INFO  ] Generating libvirt-spice certificates
> [ ERROR ] Failed to execute stage 'Environment setup': timed out
> [ INFO  ] Stage: Clean up
> [ INFO  ] Generating answer file
> '/var/lib/ovirt-hosted-engine-setup/answers/answers-20150531205814.conf'
> [ INFO  ] Stage: Pre-termination
> [ INFO  ] Stage: Termination
> [root@blue-vdsc ~]#

Not sure why this was moved back to ASSIGNED when it's obviously not the failure described in the bug, which occurred much later in the process and with a different message.

Anyway, please try again with some reasonable iptables rules, including allowing all (or specific, as needed, if you prefer) outgoing connections.

Comment 10 Nikolai Sednev 2015-06-01 08:08:32 UTC
(In reply to Yedidyah Bar David from comment #9)
> (In reply to Nikolai Sednev from comment #6)
> > [root@blue-vdsc ~]# rpm -qa vdsm libvirt* sanlock* qemu-kvm* ovirt* mom
> > libvirt-python-1.2.8-7.el7_1.1.x86_64
> > libvirt-daemon-driver-nodedev-1.2.8-16.el7_1.3.x86_64
> > mom-0.4.1-5.el7ev.noarch
> > vdsm-4.16.18-1.el7ev.x86_64
> > sanlock-3.2.2-2.el7.x86_64
> > sanlock-lib-3.2.2-2.el7.x86_64
> > sanlock-python-3.2.2-2.el7.x86_64
> > ovirt-host-deploy-1.3.0-2.el7ev.noarch
> > libvirt-client-1.2.8-16.el7_1.3.x86_64
> > libvirt-daemon-driver-nwfilter-1.2.8-16.el7_1.3.x86_64
> > libvirt-daemon-config-nwfilter-1.2.8-16.el7_1.3.x86_64
> > libvirt-daemon-driver-interface-1.2.8-16.el7_1.3.x86_64
> > libvirt-daemon-driver-secret-1.2.8-16.el7_1.3.x86_64
> > libvirt-daemon-driver-qemu-1.2.8-16.el7_1.3.x86_64
> > libvirt-daemon-driver-storage-1.2.8-16.el7_1.3.x86_64
> > ovirt-hosted-engine-ha-1.2.6-2.el7ev.noarch
> > qemu-kvm-rhev-2.1.2-23.el7_1.3.x86_64
> > libvirt-daemon-1.2.8-16.el7_1.3.x86_64
> > libvirt-lock-sanlock-1.2.8-16.el7_1.3.x86_64
> > libvirt-daemon-driver-network-1.2.8-16.el7_1.3.x86_64
> > libvirt-daemon-kvm-1.2.8-16.el7_1.3.x86_64
> > ovirt-hosted-engine-setup-1.2.4-2.el7ev.noarch
> > qemu-kvm-common-rhev-2.1.2-23.el7_1.3.x86_64
> > 
> > 
> > 
> > 
> > 
> > 
> > [root@blue-vdsc ~]# iptables -A INPUT -p tcp --dport 22 -j ACCEPT
> > [root@blue-vdsc ~]# iptables -A OUTPUT -p tcp --sport 22 -j ACCEPT
> > [root@blue-vdsc ~]# iptables -A INPUT -j DROP                     
> > [root@blue-vdsc ~]# iptables -A OUTPUT -j DROP                    
> 
> This means no outgoing connections are permitted (except ssh).
> 
> > [root@blue-vdsc ~]# iptables -L
> > Chain INPUT (policy ACCEPT)    
> > target     prot opt source               destination
> > ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:ssh
> > DROP       all  --  anywhere             anywhere
> > 
> > Chain FORWARD (policy ACCEPT)
> > target     prot opt source               destination
> > 
> > Chain OUTPUT (policy ACCEPT)
> > target     prot opt source               destination
> > ACCEPT     tcp  --  anywhere             anywhere             tcp spt:ssh
> > DROP       all  --  anywhere             anywhere
> > [root@blue-vdsc ~]# hosted-engine --deploy
> > [ INFO  ] Stage: Initializing
> > [ INFO  ] Generating a temporary VNC password.
> > [ INFO  ] Stage: Environment setup
> >           Continuing will configure this host for serving as hypervisor and
> > create a VM where you have to install oVirt Engine afterwards.
> >           Are you sure you want to continue? (Yes, No)[Yes]:
> >           Configuration files: []
> >           Log file:
> > /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20150531204600-
> > uxf6cj.log
> >           Version: otopi-1.3.2 (otopi-1.3.2-1.el7ev)
> >           It has been detected that this program is executed through an SSH
> > connection without using screen.
> >           Continuing with the installation may lead to broken installation
> > if the network connection fails.
> >           It is highly recommended to abort the installation and run it
> > inside a screen session using command "screen".
> >           Do you want to continue anyway? (Yes, No)[No]: yes
> > [ INFO  ] Hardware supports virtualization
> > [ INFO  ] Stage: Environment packages setup
> > [ INFO  ] Stage: Programs detection
> > [ INFO  ] Stage: Environment setup
> > [ INFO  ] Waiting for VDSM hardware info
> > [ INFO  ] Waiting for VDSM hardware info
> > [ INFO  ] Waiting for VDSM hardware info
> > [ INFO  ] Waiting for VDSM hardware info
> > [ INFO  ] Waiting for VDSM hardware info
> > [ INFO  ] Waiting for VDSM hardware info
> > [ INFO  ] Waiting for VDSM hardware info
> > [ INFO  ] Waiting for VDSM hardware info
> > [ INFO  ] Waiting for VDSM hardware info
> > [ INFO  ] Waiting for VDSM hardware info
> > [ INFO  ] Generating libvirt-spice certificates
> > [ ERROR ] Failed to execute stage 'Environment setup': timed out
> > [ INFO  ] Stage: Clean up
> > [ INFO  ] Generating answer file
> > '/var/lib/ovirt-hosted-engine-setup/answers/answers-20150531205814.conf'
> > [ INFO  ] Stage: Pre-termination
> > [ INFO  ] Stage: Termination
> > [root@blue-vdsc ~]#
> 
> Not sure why moving back to assigned when it's obviously not the failure
> described in the bug, which was much later in the process, and with a
> different message.
> 
> Anyway, please try again with some reasonable iptables rules, including
> allowing all (or specific, as needed, if you prefer) outgoing connections.

I just followed the exact reproduction steps, which were described as follows:
"Steps to Reproduce:
1. Deploy hosted-engine on first host, accept to automatically configure iptables
2. Install OS on second host, enable iptables and allow only ssh access
3. deploy hosted-engine on second host"

Expected criteria (deployment succeeds) not met, hence reopened this bug.

Comment 11 Yedidyah Bar David 2015-06-01 08:19:28 UTC
(In reply to Nikolai Sednev from comment #10)
> I just followed the exact reproduction steps, which were described as
> follows:
> "Steps to Reproduce:
> 1. Deploy hosted-engine on first host, accept to automatically configure
> iptables
> 2. Install OS on second host, enable iptables and allow only ssh access
> 3. deploy hosted-engine on second host"
> 
> Expected criteria (deployment succeeds) not met, hence reopened this bug.

Very well, sorry for the not-well-defined criteria. Please use the following reproduction steps:

1. Deploy hosted-engine on first host, accept to automatically configure iptables
2. Install OS on second host, enable iptables and allow to connect from outside only to ssh.
3. deploy hosted-engine on second host

I personally used:

*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT

You can use that with iptables-restore.
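A minimal sketch of applying the ruleset above with iptables-restore follows. The file path is arbitrary; loading the rules requires root, and persisting them across reboots depends on the distribution's iptables service, so this is an illustration rather than a definitive procedure.

```shell
# Save the ruleset from the comment above to a file (path is arbitrary).
cat > /tmp/iptables.rules <<'EOF'
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT
EOF

# Load the ruleset atomically (requires root).
iptables-restore < /tmp/iptables.rules

# Inspect the loaded rules.
iptables -L -n
```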

I believe that some variation on the above is the default, or at least common, for many installations.

I do not think we should consider more restrictive configurations, such as blocking most/all outgoing connections. People that do that usually know what they are doing, and are prepared to handle that themselves.

Comment 12 Nikolai Sednev 2015-06-02 07:01:47 UTC
(In reply to Yedidyah Bar David from comment #11)
> (In reply to Nikolai Sednev from comment #10)
> > I just followed the exact reproduction steps, which were described as
> > follows:
> > "Steps to Reproduce:
> > 1. Deploy hosted-engine on first host, accept to automatically configure
> > iptables
> > 2. Install OS on second host, enable iptables and allow only ssh access
> > 3. deploy hosted-engine on second host"
> > 
> > Expected criteria (deployment succeeds) not met, hence reopened this bug.
> 
> Very well, sorry for the not-well-defined criteria. Please use the following
> reproduction steps:
> 
> 1. Deploy hosted-engine on first host, accept to automatically configure
> iptables
> 2. Install OS on second host, enable iptables and allow to connect from
> outside only to ssh.
> 3. deploy hosted-engine on second host
> 
> I personally used:
> 
> *filter
> :INPUT ACCEPT [0:0]
> :FORWARD ACCEPT [0:0]
> :OUTPUT ACCEPT [0:0]
> -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
> -A INPUT -p icmp -j ACCEPT
> -A INPUT -i lo -j ACCEPT
> -A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
> -A INPUT -j REJECT --reject-with icmp-host-prohibited
> -A FORWARD -j REJECT --reject-with icmp-host-prohibited
> COMMIT
> 
> You can use that with iptables-restore.
> 
> I believe that some variation on the above is the default, or at least
> common, for many installations.
> 
> I do not think we should consider more restrictive configurations, such as
> blocking most/all outgoing connections. People that do that usually know
> what they are doing, and are prepared to handle that themselves.

Your filter contains some accept entries that are not supposed to be used if you want to allow only ssh inbound/outbound traffic for the tested host while dropping all other traffic, e.g.:
-A INPUT -p icmp -j ACCEPT   - here you're accepting pings and other ICMP, which is not needed and violates the defined criteria.
-A INPUT -i lo -j ACCEPT  - here you're accepting inbound traffic on the host's loopback interface, which is not needed and violates the defined criteria.


I don't get the point; what did I do wrong?
I followed this guide http://www.cyberciti.biz/tips/linux-iptables-4-block-all-incoming-traffic-but-allow-ssh.html and it requires only two lines for ssh inbound and outbound:
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
iptables -A OUTPUT -p tcp --sport 22 -j ACCEPT
 
The first rule accepts incoming (INPUT) TCP connections on port 22 (the ssh server), and the second rule sends the ssh server's responses back to the client (OUTPUT) from source port 22.

To prove that my configuration was correct: I was able to reach that host via ssh from several different hosts, since the rules included both input and output rules for ssh:
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:ssh
Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     tcp  --  anywhere             anywhere             tcp spt:ssh

The deployment process has to reconfigure iptables on the host if the customer selected that option, and open all required ports, regardless of whether iptables was configured beforehand.

Comment 13 Yedidyah Bar David 2015-06-02 07:55:11 UTC
(In reply to Nikolai Sednev from comment #12)
> I don't get the point; what did I do wrong?

The point is that you test another bug.

I'll explain again the current bug:

1. Start with a host A with whatever iptables configuration.
2. deploy hosted-engine on A, accept to reconfigure iptables.
3. deploy on another host B, as an additional host.

Without the fix for this bug, iptables on B will not be re-configured.

If the existing configuration was restrictive enough, deploy will fail.

Otherwise, it will succeed.

For purposes of the current bug, I ignore all configurations that are
strict enough to prevent a _first_ host deploy (i.e. fail step 2).

Even if it succeeds, that's a bug, because we do not ask about firewall on
additional host deploy, and copy the answer file from the first, which
includes an answer to configure it.

The fix just makes sure that iptables is configured also on B (meaning, on
additional host deploy).

> To prove you that my configuration was correct,

I didn't say it wasn't correct, it was just a different bug.

If any configuration you had breaks step 2 above (deploy on _first_ host),
that's a different bug. If you think that it's important, feel free to open it.

Comment 14 Nikolai Sednev 2015-06-02 09:07:07 UTC
(In reply to Yedidyah Bar David from comment #13)
> (In reply to Nikolai Sednev from comment #12)
> > I don't get the point; what did I do wrong?
> 
> The point is that you test another bug.
> 
> I'll explain again the current bug:
> 
> 1. Start with a host A with whatever iptables configuration.
> 2. deploy hosted-engine on A, accept to reconfigure iptables.
> 3. deploy on another host B, as an additional host.
> 
> Without the fix for this bug, iptables on B will not be re-configured.
> 
> If the existing configuration was restrictive enough, deploy will fail.
> 
> Otherwise, it will succeed.
> 
> For purposes of the current bug, I ignore all configurations that are
> strict enough to prevent a _first_ host deploy (i.e. fail step 2).
> 
> Even if it succeeds, that's a bug, because we do not ask about firewall on
> additional host deploy, and copy the answer file from the first, which
> includes an answer to configure it.
> 
> The fix just makes sure that iptables is configured also on B (meaning, on
> additional host deploy).
> 
> > To prove you that my configuration was correct,
> 
> I didn't say it wasn't correct, it was just a different bug.
> 
> If any configuration you had breaks step 2 above (deploy on _first_ host),
> that's a different bug. If you think that it's important, feel free to open
> it.

Again, I'm following the exact steps of the bug.
Please don't change the original steps; otherwise you'll be dealing with another test flow scenario, and you'll end up opening another bug not related to this one.
I'm following original bug description:
"Steps to Reproduce:
1. Deploy hosted-engine on first host, accept to automatically configure iptables
2. Install OS on second host, enable iptables and allow only ssh access
3. deploy hosted-engine on second host"

Tested here is the "second host".

Result-deployment fails.

Comment 15 Nikolai Sednev 2015-06-02 09:33:45 UTC
Hi Scott,
Can you please decide on this?

Comment 16 Sandro Bonazzola 2015-06-03 06:32:23 UTC
Moving back to QA.
About comment #6, it's a different bug: overly restrictive iptables rules on the host prevent vdsm from connecting to libvirt and vdsmcli from connecting to vdsmd.
VDSM should perhaps detect this while initializing (vdsm-tool configure).
You can open a different bug about it.

Please follow comment #11 in order to test this bz.

Comment 17 Yaniv Lavi 2015-06-03 10:49:29 UTC
(In reply to Nikolai Sednev from comment #15)
> Hi Scott,
> Can you please decide on this?

This is a different issue than the one described; please open an additional bug about blocked outgoing connections, and verify this one according to the use case described.

Comment 18 Nikolai Sednev 2015-06-04 12:14:20 UTC
The exact error described in this bug is not reproduced with these components:
mom-0.4.1-5.el7ev.noarch
vdsm-4.16.18-1.el7ev.x86_64
sanlock-3.2.2-2.el7.x86_64
sanlock-lib-3.2.2-2.el7.x86_64
sanlock-python-3.2.2-2.el7.x86_64
ovirt-host-deploy-1.3.0-2.el7ev.noarch
libvirt-client-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-driver-nwfilter-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-config-nwfilter-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-driver-interface-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-driver-secret-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-driver-qemu-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-driver-storage-1.2.8-16.el7_1.3.x86_64
ovirt-hosted-engine-ha-1.2.6-2.el7ev.noarch
qemu-kvm-tools-rhev-2.1.2-23.el7_1.3.x86_64
qemu-kvm-rhev-2.1.2-23.el7_1.3.x86_64
libvirt-daemon-1.2.8-16.el7_1.3.x86_64
libvirt-lock-sanlock-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-driver-network-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-kvm-1.2.8-16.el7_1.3.x86_64
ovirt-hosted-engine-setup-1.2.4-2.el7ev.noarch
qemu-kvm-common-rhev-2.1.2-23.el7_1.3.x86_64

Keeping this bug ON_QA until bug 1227735 is fixed, because when following this bug's scenario, deployment fails.

Comment 19 Doron Fediuck 2015-06-09 12:34:11 UTC
Gil,
bug 1227735 is closed.
Any reason not to verify this issue?

Comment 20 Gil Klein 2015-06-09 12:41:00 UTC
Looks like this BZ is ON_QA so it will be verified.

Ilanit, could you please assign this to the relevant person for verification this week?

Comment 21 Ilanit Stein 2015-06-09 14:12:53 UTC
It already has qa_contact: alukiano.

Comment 22 Artyom 2015-06-10 12:39:28 UTC
Verified on ovirt-hosted-engine-setup-1.2.4-2.el7ev.noarch

iptables on second host before deployment:
# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         
ACCEPT     udp  --  anywhere             anywhere             udp dpt:domain
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:domain
ACCEPT     udp  --  anywhere             anywhere             udp dpt:bootps
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:bootps

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         
ACCEPT     all  --  anywhere             192.168.122.0/24     ctstate RELATED,ESTABLISHED
ACCEPT     all  --  192.168.122.0/24     anywhere            
ACCEPT     all  --  anywhere             anywhere            
REJECT     all  --  anywhere             anywhere             reject-with icmp-port-unreachable
REJECT     all  --  anywhere             anywhere             reject-with icmp-port-unreachable

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
ACCEPT     udp  --  anywhere             anywhere             udp dpt:bootpc


iptables after deployment:

# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         
ACCEPT     udp  --  anywhere             anywhere             udp dpt:domain
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:domain
ACCEPT     udp  --  anywhere             anywhere             udp dpt:bootps
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:bootps
ACCEPT     all  --  anywhere             anywhere             state RELATED,ESTABLISHED
ACCEPT     icmp --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere            
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:54321
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:sunrpc
ACCEPT     udp  --  anywhere             anywhere             udp dpt:sunrpc
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:ssh
ACCEPT     udp  --  anywhere             anywhere             udp dpt:snmp
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:16514
ACCEPT     tcp  --  anywhere             anywhere             multiport dports rfb:6923
ACCEPT     tcp  --  anywhere             anywhere             multiport dports 49152:49216
REJECT     all  --  anywhere             anywhere             reject-with icmp-host-prohibited

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         
ACCEPT     all  --  anywhere             192.168.122.0/24     ctstate RELATED,ESTABLISHED
ACCEPT     all  --  192.168.122.0/24     anywhere            
ACCEPT     all  --  anywhere             anywhere            
REJECT     all  --  anywhere             anywhere             reject-with icmp-port-unreachable
REJECT     all  --  anywhere             anywhere             reject-with icmp-port-unreachable
REJECT     all  --  anywhere             anywhere             PHYSDEV match ! --physdev-is-bridged reject-with icmp-host-prohibited

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
ACCEPT     udp  --  anywhere             anywhere             udp dpt:bootpc

Comment 26 errata-xmlrpc 2015-06-15 13:17:31 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-1108.html