Bug 1264965 - Installing on AWS and OpenStack prereqs
Status: CLOSED CURRENTRELEASE
Product: OpenShift Container Platform
Classification: Red Hat
Component: Documentation
Version: 3.0.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: high
Assigned To: Ashley Hardin
QA Contact: Ma xiaoqiang
Docs Contact: Vikram Goyal
Reported: 2015-09-21 14:22 EDT by Ryan Howe
Modified: 2016-07-03 20:46 EDT (History)
CC: 9 users

Doc Type: Bug Fix
Last Closed: 2016-04-06 11:25:22 EDT
Type: Bug


Attachments: None
Description Ryan Howe 2015-09-21 14:22:11 EDT
Document URL: https://docs.openshift.com/enterprise/3.0/admin_guide/install/prerequisites.html

Section Number and Name: prerequisites

Describe the issue: 

When installing on AWS or OpenStack, we need the following information in the docs to let people know that these steps must be followed to have a successful install:


Set up Security Group
• 22    - ssh
• 80    - Web Apps
• 443   - Web Apps (https)
• 4789  - SDN / VXLAN
• 8443  - OpenShift console
• 10250 - kubelet 
https://github.com/openshift/openshift-ansible/blob/master/README_AWS.md#set-up-security-group
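
As a rough, hedged sketch, the rules above could be created with the AWS CLI along these lines (the VPC ID and security group ID are placeholders, and 4789 is opened as UDP since it carries the VXLAN traffic):

    aws ec2 create-security-group --group-name openshift \
        --description "OpenShift hosts" --vpc-id vpc-0abc1234

    # TCP ports: ssh, web apps (http/https), console, kubelet
    for port in 22 80 443 8443 10250; do
        aws ec2 authorize-security-group-ingress --group-id sg-0abc1234 \
            --protocol tcp --port "$port" --cidr 0.0.0.0/0
    done

    # UDP 4789 for the SDN/VXLAN traffic between cluster hosts
    aws ec2 authorize-security-group-ingress --group-id sg-0abc1234 \
        --protocol udp --port 4789 --cidr 0.0.0.0/0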


Also define the following variables in /etc/ansible/hosts:

    openshift_ip
    openshift_public_ip
    openshift_hostname
    openshift_public_hostname
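
For example, per-host entries in /etc/ansible/hosts could look like the following (the hostnames and IPs here are made-up placeholders, not values from this bug):

    [masters]
    master.example.com openshift_ip=10.0.0.10 openshift_public_ip=203.0.113.10 openshift_hostname=ip-10-0-0-10.ec2.internal openshift_public_hostname=master.example.com

    [nodes]
    node1.example.com openshift_ip=10.0.0.11 openshift_public_ip=203.0.113.11 openshift_hostname=ip-10-0-0-11.ec2.internal openshift_public_hostname=node1.example.com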

Lastly, it might be best to disable cloud-init:

    # systemctl stop cloud-init-local.service cloud-init.service cloud-final.service cloud-config.service
    # systemctl disable cloud-init-local.service cloud-init.service cloud-final.service cloud-config.service
    # systemctl mask cloud-init-local.service cloud-init.service cloud-final.service cloud-config.service

Suggestions for improvement: 

Additional information:
Comment 1 Ryan Howe 2015-09-21 14:24:37 EDT
Or add the information on running the openshift_facts playbook to gather the Ansible facts:

Overriding detected ip addresses and hostnames

Some deployments will require that the user override the detected hostnames and ip addresses for the hosts. To see what the default values will be you can run the openshift_facts playbook:

ansible-playbook playbooks/byo/openshift_facts.yml

Now, we want to check the detected common settings and verify that they are what we expect them to be (if not, we can override them).

    hostname
        Should resolve to the internal ip from the instances themselves.
        openshift_hostname will override.
    ip
        Should be the internal ip of the instance.
        openshift_ip will override.
    public hostname
        Should resolve to the external ip from hosts outside of the cloud
        provider. openshift_public_hostname will override.
    public_ip
        Should be the externally accessible ip associated with the instance.
        openshift_public_ip will override.
    use_openshift_sdn
        Should be true unless the cloud is GCE.
        openshift_use_openshift_sdn overrides


https://github.com/openshift/openshift-ansible/blob/master/README_OSE.md#overriding-detected-ip-addresses-and-hostnames
Comment 3 Jason DeTiberus 2016-01-25 10:59:30 EST
Reasons why a user should override the variables in AWS:

hostname:
  - User is installing in a VPC that is not configured for both 'DNS hostnames' and 'DNS resolution'

ip:
  - None that I am aware of.
  - Possibly if they have multiple network interfaces configured and they want to
    use one other than the default, but the support for this would be dependent
    on setting openshift_node_set_node_ip to True, otherwise the SDN would
    attempt to use the hostname setting or try to resolve the hostname for the
    IP.

public_hostname:
  - A master instance where the VPC subnet is not configured for 'Auto-assign 
    Public IP'. For external access to this master they would need to have an
    ELB or other load balancer configured that would provide the external access
    needed, or they would need to connect over a VPN connection to the internal
    name of the host.
  - A master instance where metadata is disabled.
  - This value isn't actually used by the nodes

public_ip:
  - A master instance where the VPC subnet is not configured for 'Auto-assign 
    Public IP'
  - A master instance where metadata is disabled.
  - This value isn't actually used by the nodes
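
As a hedged illustration of the public_hostname case: a master whose external access goes through an ELB might get an inventory entry along these lines (the hostnames are hypothetical, not taken from this bug):

    # openshift.example.com is a CNAME pointing at the master ELB
    [masters]
    master1.example.com openshift_public_hostname=openshift.example.com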



For Security Groups, we should probably break it down a bit more than just a single security group:

All OpenShift hosts:
  - tcp/22 from host running the installer/Ansible

etcd Security Group:
  - tcp/2379 from Masters
  - tcp/2380 from etcd hosts

Master Security Group:
  - tcp/8443 from 0.0.0.0/0
  - tcp/53 from All OpenShift hosts
  - udp/53 from All OpenShift hosts

Node Security Group:
  - tcp/10250 from Masters
  - tcp/4789 from Nodes

Infrastructure Nodes (ones that can host the openshift-router):
  - tcp/443 from 0.0.0.0/0
  - tcp/80 from 0.0.0.0/0


If ELBs are configured for load balancing the Masters and/or routers, the user would also need to configure Ingress and Egress security groups for the ELBs appropriately.
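
As a hedged sketch, a couple of the tighter rules above could be expressed with source security groups rather than CIDRs (all group IDs below are placeholders):

    # Node group: kubelet from masters, VXLAN from other nodes
    aws ec2 authorize-security-group-ingress --group-id sg-node1234 \
        --protocol tcp --port 10250 --source-group sg-master1234
    aws ec2 authorize-security-group-ingress --group-id sg-node1234 \
        --protocol udp --port 4789 --source-group sg-node1234

    # Master group: console/API from anywhere, DNS from all OpenShift hosts
    aws ec2 authorize-security-group-ingress --group-id sg-master1234 \
        --protocol tcp --port 8443 --cidr 0.0.0.0/0
    aws ec2 authorize-security-group-ingress --group-id sg-master1234 \
        --protocol tcp --port 53 --source-group sg-openshift1234
    aws ec2 authorize-security-group-ingress --group-id sg-master1234 \
        --protocol udp --port 53 --source-group sg-openshift1234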
Comment 4 Jason DeTiberus 2016-01-25 11:02:54 EST
Ryan,

Can you say why you would suggest disabling cloud-init?

cloud-init can be used for things like automating the docker-storage-setup configuration when spinning up the instances, and for disabling requiretty for the cloud user.

The following is the user data that I use when provisioning nodes in ec2 (they are provisioned with a second volume for docker storage):
#cloud-config

mounts:
- [ xvdb ]

write_files:
- content: |
    DEVS='/dev/xvdb'
    VG=docker_vg
  path: /etc/sysconfig/docker-storage-setup
  owner: root:root
  permissions: '0644'
- path: /etc/sudoers.d/99-openshift-cloud-init-requiretty
  permissions: '0440'
  content: |
    Defaults:openshift !requiretty

users:
- default

system_info:
  default_user:
    name: openshift
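
For reference, a cloud-config like that can be passed as user data at launch time, for example with something like the following (the AMI, key name, subnet, and volume size are placeholders):

    aws ec2 run-instances --image-id ami-0abc1234 --count 1 \
        --instance-type m4.large --key-name mykey \
        --subnet-id subnet-0abc1234 \
        --block-device-mappings 'DeviceName=/dev/xvdb,Ebs={VolumeSize=50}' \
        --user-data file://cloud-config.yml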
Comment 5 Jason DeTiberus 2016-01-25 11:05:18 EST
Another thing to note in the docs for AWS and OpenStack:

If openshift_hostname is set to something other than the metadata-provided private-dns-name value, the native cloud integration for those providers will no longer work.

So, for ec2 hosts in particular, they *must* be deployed in a VPC that has both 'DNS hostnames' and 'DNS resolution' enabled, and openshift_hostname should not be overridden.
Comment 6 Ryan Howe 2016-01-25 17:08:20 EST
@Jason 

I am sorry; when the ticket was opened, we thought cloud-init was the reason behind the change in hostnames. It turns out that the OpenShift install was gathering facts from the instance metadata via http://169.254.169.254/latest/meta-data/
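
For reference, the values the installer picks up can be checked from the instance itself on EC2, for example:

    curl http://169.254.169.254/latest/meta-data/local-hostname
    curl http://169.254.169.254/latest/meta-data/local-ipv4
    curl http://169.254.169.254/latest/meta-data/public-hostname
    curl http://169.254.169.254/latest/meta-data/public-ipv4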

The only values that need to be set are openshift_hostname (in most cases) and the security groups to allow traffic via the ports listed here:

https://docs.openshift.com/enterprise/3.1/install_config/install/prerequisites.html#prereq-network-access
Comment 7 Ashley Hardin 2016-03-11 17:43:04 EST
Work in progress:
https://github.com/openshift/openshift-docs/pull/1730
Comment 8 Ashley Hardin 2016-03-17 16:38:25 EDT
@Jason & Ryan,
Please review: https://github.com/openshift/openshift-docs/pull/1730
Comment 9 Gaoyun Pei 2016-03-24 04:49:55 EDT
After setting up a latest OSE 3.2 env, I listed the iptables rules on the master, which also has node and etcd on it.

# iptables -L -n
...
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            state NEW tcp dpt:2379
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            state NEW tcp dpt:2380
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            state NEW tcp dpt:2049
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            state NEW tcp dpt:4001
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            state NEW tcp dpt:8443
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            state NEW tcp dpt:8444
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            state NEW tcp dpt:53
ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0            state NEW udp dpt:53
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            state NEW tcp dpt:24224
ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0            state NEW udp dpt:24224
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            state NEW tcp dpt:2224
ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0            state NEW udp dpt:5404
ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0            state NEW udp dpt:5405
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            state NEW tcp dpt:9090
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            state NEW tcp dpt:10250
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            state NEW tcp dpt:80
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            state NEW tcp dpt:443
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            state NEW tcp dpt:10255
ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0            state NEW udp dpt:10255
ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0            state NEW udp dpt:4789

The list contains more ports than the doc; I think the doc should at least contain the ports in the following files:
https://github.com/openshift/openshift-ansible/blob/master/roles/openshift_master/defaults/main.yml
https://github.com/openshift/openshift-ansible/blob/master/roles/openshift_node/defaults/main.yml
https://github.com/openshift/openshift-ansible/blob/master/roles/openshift_storage_nfs/defaults/main.yml
https://github.com/openshift/openshift-ansible/blob/master/roles/haproxy/defaults/main.yml
https://github.com/openshift/openshift-ansible/blob/master/roles/etcd/defaults/main.yaml
https://github.com/openshift/openshift-ansible/blob/master/roles/cockpit/defaults/main.yml
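
As a hedged shortcut, with a checkout of openshift-ansible the port lists can be pulled straight out of those role defaults, for example:

    # from the root of an openshift-ansible checkout
    grep -r -A 20 os_firewall_allow roles/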
Comment 10 Ashley Hardin 2016-03-28 08:22:43 EDT
@Andrew & Jason, 
These ports are not currently listed in our docs (in the Required Ports section https://docs.openshift.com/enterprise/3.1/install_config/install/prerequisites.html#required-ports or elsewhere).

    5404 and 5405  (os_firewall_allow/ service: Corosync UDP)
    2049 (os_firewall_allow/ service: nfs)
    9000 (os_firewall_use_firewalld/ service: haproxy stats)
    9090 (os_firewall_allow/ service: cockpit-ws)

Should we document these? 9090 should probably be left out until Cockpit is documented. What do you think about the rest? Thanks!
Comment 11 Andrew Butcher 2016-03-28 10:38:47 EDT
Yes, we should definitely include these with a little information about when they're used.

> 5404 and 5405  (os_firewall_allow/ service: Corosync UDP)

This is used with pacemaker HA.

> 2049 os_firewall_allow/ service: nfs)

This will be required when provisioning an nfs host as part of the installer but is optional otherwise.

> 9000 (os_firewall_use_firewalld/ service: haproxy stats)

This is optional with native HA. The port is mentioned in the verification section of the advanced installer docs. https://docs.openshift.com/enterprise/3.1/install_config/install/advanced_install.html#verifying-the-installation
Comment 12 Gaoyun Pei 2016-03-29 23:09:05 EDT
Looks great now, moving this to verified.
Comment 13 openshift-github-bot 2016-03-30 09:21:41 EDT
Commits pushed to master at https://github.com/openshift/openshift-docs

https://github.com/openshift/openshift-docs/commit/24d4c2dc35a21df620dea26097b9cd72235e5848
Bug 1264965, added Cloud Provider Considerations section

https://github.com/openshift/openshift-docs/commit/994bb35d49786bf2cd3b85d7d4942387478d84e3
Merge pull request #1730 from ahardin-rh/aws-openstack

Bug 1264965, added Cloud Provider Considerations section
