Bug 1222421
Summary: Failed to deploy additional host due to unconfigured iptables

Product: Red Hat Enterprise Virtualization Manager
Component: ovirt-hosted-engine-setup
Version: 3.5.1
Target Release: 3.5.3
Status: CLOSED ERRATA
Severity: high
Priority: high
Hardware: Unspecified
OS: Unspecified
Whiteboard: integration
Keywords: Triaged, ZStream
Reporter: rhev-integ
Assignee: Yedidyah Bar David <didi>
QA Contact: Artyom <alukiano>
CC: aburden, dfediuck, didi, ecohen, gklein, istein, jbelka, lsurette, nsednev, pstehlik, sbonazzo, sherold, ylavi
Fixed In Version: ovirt-hosted-engine-setup-1.2.4-2.el6ev
Clone Of: 1221148
Bug Depends On: 1221148
Last Closed: 2015-06-15 13:17:31 UTC

Doc Type: Bug Fix
Doc Text:
Previously, in the Self-Hosted Engine 3.5.1, choosing to configure iptables during first host deployment did not configure iptables on additional host deployment. If the existing iptables configuration on the additional host did not allow VDSM to access the engine, deployment would fail. Now, choosing to configure iptables during first host deployment also correctly handles iptables configuration for additional host deployment, and deployment succeeds as expected.
Comment 3
Yedidyah Bar David
2015-05-19 07:31:02 UTC
[root@blue-vdsc ~]# rpm -qa vdsm libvirt* sanlock* qemu-kvm* ovirt* mom
libvirt-python-1.2.8-7.el7_1.1.x86_64
libvirt-daemon-driver-nodedev-1.2.8-16.el7_1.3.x86_64
mom-0.4.1-5.el7ev.noarch
vdsm-4.16.18-1.el7ev.x86_64
sanlock-3.2.2-2.el7.x86_64
sanlock-lib-3.2.2-2.el7.x86_64
sanlock-python-3.2.2-2.el7.x86_64
ovirt-host-deploy-1.3.0-2.el7ev.noarch
libvirt-client-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-driver-nwfilter-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-config-nwfilter-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-driver-interface-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-driver-secret-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-driver-qemu-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-driver-storage-1.2.8-16.el7_1.3.x86_64
ovirt-hosted-engine-ha-1.2.6-2.el7ev.noarch
qemu-kvm-rhev-2.1.2-23.el7_1.3.x86_64
libvirt-daemon-1.2.8-16.el7_1.3.x86_64
libvirt-lock-sanlock-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-driver-network-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-kvm-1.2.8-16.el7_1.3.x86_64
ovirt-hosted-engine-setup-1.2.4-2.el7ev.noarch
qemu-kvm-common-rhev-2.1.2-23.el7_1.3.x86_64

[root@blue-vdsc ~]# iptables -A INPUT -p tcp --dport 22 -j ACCEPT
[root@blue-vdsc ~]# iptables -A OUTPUT -p tcp --sport 22 -j ACCEPT
[root@blue-vdsc ~]# iptables -A INPUT -j DROP
[root@blue-vdsc ~]# iptables -A OUTPUT -j DROP
[root@blue-vdsc ~]# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:ssh
DROP       all  --  anywhere             anywhere

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     tcp  --  anywhere             anywhere             tcp spt:ssh
DROP       all  --  anywhere             anywhere
[root@blue-vdsc ~]# hosted-engine --deploy
[ INFO  ] Stage: Initializing
[ INFO  ] Generating a temporary VNC password.
[ INFO  ] Stage: Environment setup
          Continuing will configure this host for serving as hypervisor and
          create a VM where you have to install oVirt Engine afterwards.
          Are you sure you want to continue? (Yes, No)[Yes]:
          Configuration files: []
          Log file: /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20150531204600-uxf6cj.log
          Version: otopi-1.3.2 (otopi-1.3.2-1.el7ev)
          It has been detected that this program is executed through an SSH
          connection without using screen.
          Continuing with the installation may lead to broken installation
          if the network connection fails.
          It is highly recommended to abort the installation and run it
          inside a screen session using command "screen".
          Do you want to continue anyway? (Yes, No)[No]: yes
[ INFO  ] Hardware supports virtualization
[ INFO  ] Stage: Environment packages setup
[ INFO  ] Stage: Programs detection
[ INFO  ] Stage: Environment setup
[ INFO  ] Waiting for VDSM hardware info
[ INFO  ] Waiting for VDSM hardware info
[ INFO  ] Waiting for VDSM hardware info
[ INFO  ] Waiting for VDSM hardware info
[ INFO  ] Waiting for VDSM hardware info
[ INFO  ] Waiting for VDSM hardware info
[ INFO  ] Waiting for VDSM hardware info
[ INFO  ] Waiting for VDSM hardware info
[ INFO  ] Waiting for VDSM hardware info
[ INFO  ] Waiting for VDSM hardware info
[ INFO  ] Generating libvirt-spice certificates
[ ERROR ] Failed to execute stage 'Environment setup': timed out
[ INFO  ] Stage: Clean up
[ INFO  ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20150531205814.conf'
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
[root@blue-vdsc ~]#

Created attachment 1032940 [details]
logs from blue (the host being added as an additional host)
Created attachment 1032951 [details]
sosreport-blue-vdsc.qa.lab.tlv.redhat.com-20150531210307.tar.xz
(In reply to Nikolai Sednev from comment #6)
> [root@blue-vdsc ~]# iptables -A INPUT -p tcp --dport 22 -j ACCEPT
> [root@blue-vdsc ~]# iptables -A OUTPUT -p tcp --sport 22 -j ACCEPT
> [root@blue-vdsc ~]# iptables -A INPUT -j DROP
> [root@blue-vdsc ~]# iptables -A OUTPUT -j DROP

This means no outgoing connections are permitted (except ssh).

> [ ERROR ] Failed to execute stage 'Environment setup': timed out

Not sure why this was moved back to ASSIGNED when it is obviously not the failure described in this bug, which occurred much later in the process and with a different message.

Anyway, please try again with some reasonable iptables rules, including allowing all (or specific, as needed, if you prefer) outgoing connections.
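The observation above can be made concrete with a toy first-match trace of the reporter's four rules (an illustration only, not real netfilter semantics; the chain policy ACCEPT applies when no rule matches).

```python
# The reporter's four appended rules, as (chain, proto, match_field, match_port, verdict):
RULES = [
    ("INPUT",  "tcp", "dport", 22,   "ACCEPT"),
    ("OUTPUT", "tcp", "sport", 22,   "ACCEPT"),
    ("INPUT",  None,  None,    None, "DROP"),
    ("OUTPUT", None,  None,    None, "DROP"),
]

def verdict(chain, proto=None, dport=None, sport=None):
    """First matching rule wins; fall through to the chain policy (ACCEPT)."""
    for c, p, field, port, v in RULES:
        if c != chain or (p is not None and p != proto):
            continue
        if field == "dport" and port != dport:
            continue
        if field == "sport" and port != sport:
            continue
        return v
    return "ACCEPT"  # chain policy

# Inbound ssh is fine, but a new outbound connection, e.g. to VDSM's
# port 54321 (used here as an example destination), hits the catch-all DROP:
print(verdict("INPUT", "tcp", dport=22))       # ACCEPT
print(verdict("OUTPUT", "tcp", dport=54321))   # DROP
```

That catch-all OUTPUT DROP is why the deploy run stalls at "Waiting for VDSM hardware info" and times out: everything the setup process initiates outbound, other than sshd's own replies, is silently discarded.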
(In reply to Yedidyah Bar David from comment #9)
> Anyway, please try again with some reasonable iptables rules, including
> allowing all (or specific, as needed, if you prefer) outgoing connections.

I just followed the exact reproduction steps, which were described as follows:

"Steps to Reproduce:
1. Deploy hosted-engine on first host, accept to automatically configure iptables
2. Install OS on second host, enable iptables and allow only ssh access
3. deploy hosted-engine on second host"

The expected criterion (deployment succeeds) was not met, hence I reopened this bug.

(In reply to Nikolai Sednev from comment #10)
> The expected criterion (deployment succeeds) was not met, hence I reopened this bug.

Very well, sorry for the not-well-defined criteria. Please use the following reproduction steps instead:

1. Deploy hosted-engine on first host, accept to automatically configure iptables
2. Install OS on second host, enable iptables and allow connections from the outside only to ssh
3. Deploy hosted-engine on second host

I personally used:

*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT

You can use that with iptables-restore.

I believe that some variation on the above is the default, or at least common, for many installations.

I do not think we should consider more restrictive configurations, such as blocking most/all outgoing connections. People who do that usually know what they are doing, and are prepared to handle it themselves.

(In reply to Yedidyah Bar David from comment #11)
> I do not think we should consider more restrictive configurations, such as
> blocking most/all outgoing connections.

Your filter contains some accept entries that are not supposed to be there if you want to allow only ssh inbound/outbound traffic on the tested host while dropping all other traffic, e.g.:

-A INPUT -p icmp -j ACCEPT    (accepts pings and other icmp; not needed, and violates the defined criteria)
-A INPUT -i lo -j ACCEPT      (accepts inbound traffic on the host's loopback interface; not needed, and violates the defined criteria)

I don't get the point; what did I do wrong? I followed this guide http://www.cyberciti.biz/tips/linux-iptables-4-block-all-incoming-traffic-but-allow-ssh.html and there are only 2 lines required for ssh inbound and outbound:

iptables -A INPUT -p tcp --dport 22 -j ACCEPT
iptables -A OUTPUT -p tcp --sport 22 -j ACCEPT

The first rule accepts incoming (INPUT) tcp connections on port 22 (the ssh server), and the second rule sends the ssh server's responses back to the client (OUTPUT) from source port 22.

To prove that my configuration was correct: I was able to reach that host via ssh from several different hosts, since the rules included both the input and output rules for ssh:

Chain INPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:ssh

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     tcp  --  anywhere             anywhere             tcp spt:ssh

The deployment process has to reconfigure iptables on the host, if the customer selected that option, and open all the required ports, no matter whether iptables was configured beforehand or not.

(In reply to Nikolai Sednev from comment #12)
> I don't get the point; what did I do wrong?

The point is that you tested another bug. I'll explain the current bug again:

1. Start with a host A with whatever iptables configuration.
2. Deploy hosted-engine on A, accept to reconfigure iptables.
3. Deploy on another host B, as an additional host.

Without the fix for this bug, iptables on B will not be reconfigured. If the existing configuration was restrictive enough, deploy will fail. Otherwise, it will succeed.

For the purposes of the current bug, I ignore all configurations that are strict enough to prevent a _first_ host deploy (i.e. fail step 2).

Even if additional host deploy succeeds, it's still a bug, because we do not ask about the firewall on additional host deploy; we copy the answer file from the first host, which includes an answer to configure it.

The fix just makes sure that iptables is configured also on B (meaning, on additional host deploy).

> To prove that my configuration was correct,

I didn't say it wasn't correct; it just exercised a different bug. If any configuration you had breaks step 2 above (deploy on the _first_ host), that's a different bug. If you think it's important, feel free to open it.

(In reply to Yedidyah Bar David from comment #13)
> The point is that you tested another bug.

Again, I'm following the exact steps of the bug. Please don't change the original steps; otherwise you'll be dealing with another test flow scenario, and then you'll end up opening another bug not related to this one. I'm following the original bug description:

"Steps to Reproduce:
1. Deploy hosted-engine on first host, accept to automatically configure iptables
2. Install OS on second host, enable iptables and allow only ssh access
3. deploy hosted-engine on second host"

The host tested here is the "second host". Result: deployment fails.

Hi Scott,
Can you please decide on this? Moving back to QA.

About comment #6: it's a different bug. Too restrictive iptables rules on the host prevent vdsm from connecting to libvirt and vdsmcli from connecting to vdsmd. VDSM should perhaps detect this while initializing (vdsm-tool configure). You can open a different bug about it. Please follow comment #11 in order to test this bz.

(In reply to Nikolai Sednev from comment #15)
> Hi Scott,
> Can you please decide on this?

This is a different issue than the one described. Please open an additional bug on blocked outgoing connections, and verify this one according to the described use case.

The exact error described in this bug is not reproduced on these components:

mom-0.4.1-5.el7ev.noarch
vdsm-4.16.18-1.el7ev.x86_64
sanlock-3.2.2-2.el7.x86_64
sanlock-lib-3.2.2-2.el7.x86_64
sanlock-python-3.2.2-2.el7.x86_64
ovirt-host-deploy-1.3.0-2.el7ev.noarch
libvirt-client-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-driver-nwfilter-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-config-nwfilter-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-driver-interface-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-driver-secret-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-driver-qemu-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-driver-storage-1.2.8-16.el7_1.3.x86_64
ovirt-hosted-engine-ha-1.2.6-2.el7ev.noarch
qemu-kvm-tools-rhev-2.1.2-23.el7_1.3.x86_64
qemu-kvm-rhev-2.1.2-23.el7_1.3.x86_64
libvirt-daemon-1.2.8-16.el7_1.3.x86_64
libvirt-lock-sanlock-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-driver-network-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-kvm-1.2.8-16.el7_1.3.x86_64
ovirt-hosted-engine-setup-1.2.4-2.el7ev.noarch
qemu-kvm-common-rhev-2.1.2-23.el7_1.3.x86_64

Keeping this bug ON_QA until bug 1227735 is fixed, because following this bug's scenario, deployment fails.

Gil, bug 1227735 is closed. Any reason not to verify this issue?

Looks like this BZ is ON_QA, so it will be verified.

Ilanit, could you please assign the relevant person for verification this week? It already has qa_contact: alukiano.
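The ruleset from comment #11 differs from the reporter's in exactly one structural way, and a quick sanity check makes it visible (a sketch, not a real iptables-save parser; it only inspects the rule lines):

```python
# The *filter block from comment #11, verbatim:
RULESET = """\
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT
"""

rules = [l for l in RULESET.splitlines() if l.startswith("-A")]

# There are no OUTPUT rules at all; with the OUTPUT policy ACCEPT, every
# outgoing connection is allowed, unlike the reporter's OUTPUT DROP, which
# starved vdsm/libvirt of connectivity.
assert not any(r.startswith("-A OUTPUT") for r in rules)

# New inbound ssh is still explicitly accepted:
assert any("--dport 22" in r and r.endswith("-j ACCEPT") for r in rules)
```

In other words, the ruleset blocks unsolicited inbound traffic except ssh (plus icmp and loopback) while leaving all outbound traffic open, which is the configuration the bug's reproduction steps intend.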
Verified on ovirt-hosted-engine-setup-1.2.4-2.el7ev.noarch

iptables on second host before deployment:

# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     udp  --  anywhere             anywhere             udp dpt:domain
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:domain
ACCEPT     udp  --  anywhere             anywhere             udp dpt:bootps
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:bootps

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
ACCEPT     all  --  anywhere             192.168.122.0/24     ctstate RELATED,ESTABLISHED
ACCEPT     all  --  192.168.122.0/24     anywhere
ACCEPT     all  --  anywhere             anywhere
REJECT     all  --  anywhere             anywhere             reject-with icmp-port-unreachable
REJECT     all  --  anywhere             anywhere             reject-with icmp-port-unreachable

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     udp  --  anywhere             anywhere             udp dpt:bootpc

iptables after deployment:

# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     udp  --  anywhere             anywhere             udp dpt:domain
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:domain
ACCEPT     udp  --  anywhere             anywhere             udp dpt:bootps
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:bootps
ACCEPT     all  --  anywhere             anywhere             state RELATED,ESTABLISHED
ACCEPT     icmp --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:54321
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:sunrpc
ACCEPT     udp  --  anywhere             anywhere             udp dpt:sunrpc
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:ssh
ACCEPT     udp  --  anywhere             anywhere             udp dpt:snmp
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:16514
ACCEPT     tcp  --  anywhere             anywhere             multiport dports rfb:6923
ACCEPT     tcp  --  anywhere             anywhere             multiport dports 49152:49216
REJECT     all  --  anywhere             anywhere             reject-with icmp-host-prohibited

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
ACCEPT     all  --  anywhere             192.168.122.0/24     ctstate RELATED,ESTABLISHED
ACCEPT     all  --  192.168.122.0/24     anywhere
ACCEPT     all  --  anywhere             anywhere
REJECT     all  --  anywhere             anywhere             reject-with icmp-port-unreachable
REJECT     all  --  anywhere             anywhere             reject-with icmp-port-unreachable
REJECT     all  --  anywhere             anywhere             PHYSDEV match ! --physdev-is-bridged reject-with icmp-host-prohibited

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     udp  --  anywhere             anywhere             udp dpt:bootpc

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-1108.html