Bug 1325098 - ansible should open the '8053' port for skydns on master
Summary: ansible should open the '8053' port for skydns on master
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Installer
Version: 3.2.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: Scott Dodson
QA Contact: Ma xiaoqiang
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-04-08 08:44 UTC by Ma xiaoqiang
Modified: 2016-07-04 00:45 UTC
6 users

Fixed In Version: openshift-ansible-3.0.82-1
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-05-12 16:40:32 UTC
Target Upstream Version:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2016:1065 0 normal SHIPPED_LIVE Red Hat OpenShift Enterprise atomic-openshift-utils bug fix update 2016-05-12 20:32:56 UTC

Description Ma xiaoqiang 2016-04-08 08:44:44 UTC
Description of problem:
Ansible should open port 8053 for SkyDNS on the master.



Version-Release number of selected component (if applicable):
https://github.com/sdodson/openshift-ansible -b cluster-dns


How reproducible:
always

Steps to Reproduce:
1. Install an environment with dnsmasq enabled on the node.
2. Check DNS resolution on the node:
# nslookup kubernetes.default.svc.cluster.local


Actual results:
# nslookup kubernetes.default.svc.cluster.local 
;; connection timed out; trying next origin
Server:         192.168.0.233
Address:        192.168.0.233#53

** server can't find kubernetes.default.svc.cluster.local: NXDOMAIN


Expected results:
The installer should open port 8053 for SkyDNS on the master so the lookup succeeds.


Additional info:

Comment 1 Scott Dodson 2016-04-08 13:44:05 UTC
I've updated my branch to open port 8053 when dnsmasq is enabled (versions 3.2/1.2 or greater right now).

Please pull the latest to verify
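The change amounts to adding accept rules for 8053 on the master when dnsmasq is enabled. A minimal sketch of the equivalent raw iptables commands, assuming the OS_FIREWALL_ALLOW chain shown later in this bug (the actual fix is implemented as Ansible tasks in the installer, not manual commands):

```shell
# Open the SkyDNS port 8053 (TCP and UDP) on the master.
# The chain name OS_FIREWALL_ALLOW matches the iptables output in comment 9;
# the installer applies equivalent rules via its firewall role.
iptables -A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 8053 -j ACCEPT
iptables -A OS_FIREWALL_ALLOW -p udp -m state --state NEW -m udp --dport 8053 -j ACCEPT
```

These rules require root and a running firewall, so they are shown here only to illustrate what the installer change produces.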

Comment 2 Jason DeTiberus 2016-04-08 20:49:13 UTC
Don't we want to connect over the service IP rather than the master host IP?

Comment 3 Scott Dodson 2016-04-08 20:59:17 UTC
We use the kube service IP; however, the endpoints defined by that service are node IP addresses, so we still require the firewall to be opened, right? If and when SkyDNS moves to a pod, I guess this would change.

[root@ose3-master ~]# oc describe svc kubernetes               
Name:                   kubernetes                             
Namespace:              default                                
Labels:                 component=apiserver,provider=kubernetes
Selector:               <none>                                 
Type:                   ClusterIP                              
IP:                     172.30.0.1                             
Port:                   https   443/TCP                        
Endpoints:              192.168.122.134:8443                   
Port:                   dns     53/UDP                         
Endpoints:              192.168.122.134:8053                   
Port:                   dns-tcp 53/TCP                         
Endpoints:              192.168.122.134:8053                   
Session Affinity:       None                                   
No events.                                                     

192.168.122.134 being the master's IP.
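Since the service endpoints point at the master's own IP on port 8053, querying the master directly confirms SkyDNS is reachable once the firewall rule is in place. A sketch, assuming the master IP from the service description above and a host with network access to it:

```shell
# Query SkyDNS directly on the master's 8053 port, bypassing dnsmasq.
# 192.168.122.134 is the master IP taken from the 'oc describe svc' output above.
dig +short kubernetes.default.svc.cluster.local @192.168.122.134 -p 8053
```

A timeout here with the port closed, versus a service IP answer with it open, distinguishes the firewall problem from a dnsmasq misconfiguration.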

Comment 4 Ma xiaoqiang 2016-04-11 02:33:38 UTC
1. The conditions for the DNS ports in iptables didn't take effect.

Install OSE 3.2 with defaults and check iptables on the master:
<--snip-->
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            state NEW tcp dpt:8053
ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0            state NEW udp dpt:8053
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            state NEW tcp dpt:53
ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0            state NEW udp dpt:53
<--snip-->

Ports 53 and 8053 are both open.

2. The conditions in iptables and dnsmasq are different:
the dnsmasq condition is 'openshift.common.version_gte_3_1_or_1_1', but the iptables condition is 'openshift.common.version_gte_3_2_or_1_2'.
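On a 3.1/1.1 host, that mismatch would leave dnsmasq configured while the 8053 rules are absent. A hedged diagnostic sketch for spotting the inconsistency on an installed master (output depends on the installed version):

```shell
# If dnsmasq is active but no 8053 rule exists, the two version conditions
# used by the installer disagree for this release.
systemctl is-active dnsmasq
iptables -L OS_FIREWALL_ALLOW -n | grep 'dpt:8053'
```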

Comment 5 Scott Dodson 2016-04-19 15:20:40 UTC
Fixed in the PR, waiting for that to merge before I flip this to MODIFIED

Comment 6 Scott Dodson 2016-04-20 14:25:02 UTC
https://github.com/openshift/openshift-ansible/pull/1588 merged

Comment 9 Ma xiaoqiang 2016-04-22 01:27:59 UTC
Checked on openshift-ansible-3.0.82-1:

#iptables -L -n
Chain OS_FIREWALL_ALLOW (1 references)
target     prot opt source               destination         
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            state NEW tcp dpt:2379
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            state NEW tcp dpt:2380
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            state NEW tcp dpt:4001
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            state NEW tcp dpt:443
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            state NEW tcp dpt:8444
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            state NEW tcp dpt:8053
ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0            state NEW udp dpt:8053

Moving this issue to VERIFIED.

Comment 11 errata-xmlrpc 2016-05-12 16:40:32 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:1065

