Bug 1039709 - I.3.5. Security -- OpenShift Node Architecture
Summary: I.3.5. Security -- OpenShift Node Architecture
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Documentation
Version: 2.0.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assignee: Julie
QA Contact: ecs-bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-12-09 20:43 UTC by Luke Meyer
Modified: 2017-03-08 17:35 UTC
4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-01-07 23:56:18 UTC
Target Upstream Version:



Description Luke Meyer 2013-12-09 20:43:59 UTC
The implementation has drifted pretty far from this description. Corrections:


"It is important to understand how routing works on a node to better understand the security architect"  => architecture

---
An OpenShift Enterprise node includes a Reverse Proxy Server and a HAProxy Server.
==> The situation has gotten more complex. Also, HAproxy as a front end has gone away. What we probably want to get across is that the gears are isolated to the node's internal network and there are several ways (3 or 4) to get external traffic to them. So I might modify this to something like:
==
An OpenShift Enterprise node includes several front ends to proxy traffic to the gears connected to its internal network.

---
The Reverse Proxy server takes care of external and internal routing, with the main purpose of routing received traffic to the appropriate gear, and limit traffic to ports 80, 443, 8000, and 8443. See Section 4.4, “Network Access” for more information on ports required by OpenShift Enterprise.
==>
The httpd reverse proxy front end routes standard HTTP ports 80 and 443, while the NodeJS front end similarly routes WebSockets HTTP requests from ports 8000 and 8443. The port proxy routes inter-gear traffic via a range of high ports (even gears on the same host do not have direct access to each other). See Section 4.4, “Network Access” for more information on ports required by OpenShift Enterprise.
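To make the proposed split between front ends concrete, the routing described above could be sketched roughly as follows. This is only an illustration of the wording, not actual OpenShift code; the function name and the high-port range used for the port proxy are assumptions (the real range is set in the node configuration).

```python
# Hypothetical sketch of the node front-end routing described above.
# The well-known ports come from the text; everything else is illustrative.

FRONT_ENDS = {
    80: "httpd reverse proxy",    # standard HTTP
    443: "httpd reverse proxy",   # standard HTTPS
    8000: "Node.js front end",    # WebSockets over HTTP
    8443: "Node.js front end",    # WebSockets over HTTPS
}

# Inter-gear traffic goes through the port proxy on a range of high ports.
# The exact range is configuration-dependent; this one is an assumption.
PORT_PROXY_RANGE = range(35531, 65536)

def front_end_for(port):
    """Return which front end would handle traffic arriving on this port."""
    if port in FRONT_ENDS:
        return FRONT_ENDS[port]
    if port in PORT_PROXY_RANGE:
        return "port proxy"
    return None  # traffic on other ports is not routed to gears
```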
(Probably best not to discuss the SNI proxy here, but that's another front end for obscure uses.)


---
The HAProxy server acts as a load balancer and forwards requests to different cartridges, or services. Services residing on the same host do not have direct access to each other, but must travel through the HAProxy server. The HAProxy server keeps a list of the requests that are to be forwarded to the appropriate cartridge.
==> HAproxy runs in individual scaled apps acting as a load balancer to the other gears. Here it's being confused with the front end port proxy, which used to be HAproxy. Perhaps this whole paragraph can be axed here? I'm not sure where the discussion of inter-gear traffic belongs, maybe in the previous paragraph, but if it's here, then it might be something like:
==
In a scaled application, at least one gear runs HAproxy to load balance HTTP traffic across the gears in the application, via the inter-gear port proxy.
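The load-balancing role HAproxy plays inside a scaled app (as opposed to the front-end port proxy it's being confused with) could be sketched like this. Purely illustrative: the class, the gear hostnames, and the proxy ports are invented for the example; real HAproxy is configured, not written in Python.

```python
from itertools import cycle

# Illustrative sketch only: a scaled application's HAproxy gear balancing
# HTTP requests round-robin across the app's other gears, reaching them
# via the addresses/ports the inter-gear port proxy exposes.

class HAProxyGear:
    def __init__(self, backend_gears):
        # backend_gears: list of (host, proxy_port) pairs for the other gears
        self._backends = cycle(backend_gears)

    def route(self, request):
        """Pick the next backend gear in round-robin order."""
        host, port = next(self._backends)
        return f"{host}:{port}"
```

With two backend gears, successive requests alternate between them, which is the behavior the suggested wording is meant to convey.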


Also, a nit: the 127.0.0.1 IP address in the diagram shouldn't be that -- gears don't bind to that particular IP, because it's localhost. Instead the addresses could maybe be "127.0.1.1" - "127.0.3.1".

