Bug 1468233 - Allow standard users to manage destination on egress routers managed by replication controllers
Status: NEW
Product: OpenShift Container Platform
Classification: Red Hat
Component: RFE
Version: 3.5.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Assigned To: Marc Curry
QA Contact: Meng Bo
Reported: 2017-07-06 08:17 EDT by Charlie Llewellyn
Modified: 2017-07-12 22:47 EDT
CC List: 8 users
Type: Bug
Attachments: None
Description Charlie Llewellyn 2017-07-06 08:17:48 EDT
Description of problem:

It's common for containers deployed in OpenShift to need to connect to legacy environments policed by firewalls that filter on source and destination. The egress router exists to facilitate this, but it can only be managed by cluster admins. That makes some sense, since the administrator needs to allow an IP on the node network; however, it is usually the developer of the container who needs to specify the destination of the egress router. We would like to request a feature that allows the router destination to be updated by standard users.
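For context, the destination is currently baked into the pod definition itself, which is why only a cluster admin can change it today. A minimal egress router pod along the lines of the linked docs looks roughly like this (the image name and all IP addresses below are placeholders, not taken from our environment):

apiVersion: v1
kind: Pod
metadata:
  name: egress-1
  annotations:
    pod.network.openshift.io/assign-macvlan: "true"  # tells the SDN to attach a macvlan interface
spec:
  containers:
  - name: egress-router
    image: openshift3/ose-egress-router
    securityContext:
      privileged: true               # needed so the router can configure the macvlan interface
    env:
    - name: EGRESS_SOURCE            # IP on the node network, allocated by the cluster admin
      value: 192.168.12.99
    - name: EGRESS_GATEWAY           # node network gateway
      value: 192.168.12.1
    - name: EGRESS_DESTINATION       # the destination the developer actually needs to control
      value: 203.0.113.25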

Secondly, the egress router deploys fine as a cluster admin, but when the egress router is deployed under a replication controller, the deployment fails with:

"forbidden: unable to validate against any security context constraint: [spec.containers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed]"

It should be possible to deploy the container so that it is managed by a replication controller without having to allow privileged containers in the project.
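For reference, the replication controller variant from the same docs just wraps that pod spec in a template, roughly as below (names and addresses are again placeholders). It is the privileged flag inside the template that gets rejected, presumably because pods created by the controller are validated against the project's service account rather than against the cluster admin who created the controller:

apiVersion: v1
kind: ReplicationController
metadata:
  name: egress-demo-controller
spec:
  replicas: 1
  selector:
    name: egress-demo
  template:
    metadata:
      labels:
        name: egress-demo
      annotations:
        pod.network.openshift.io/assign-macvlan: "true"
    spec:
      containers:
      - name: egress-router
        image: openshift3/ose-egress-router
        securityContext:
          privileged: true           # rejected by the restricted SCC, producing the error above
        env:
        - name: EGRESS_SOURCE
          value: 192.168.12.99
        - name: EGRESS_GATEWAY
          value: 192.168.12.1
        - name: EGRESS_DESTINATION
          value: 203.0.113.25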

Version-Release number of selected component (if applicable):
Tested on:
openshift v3.5.5.24
kubernetes v1.5.2+43a9be4

How reproducible:
Every time

Steps to Reproduce:
1. Problem 1: As a standard user, try to deploy an egress router as per https://docs.openshift.com/container-platform/latest/admin_guide/managing_networking.html#admin-guide-limit-pod-access-egress-router
2. Problem 2: Deploy an egress router with replication as per https://docs.openshift.com/container-platform/latest/admin_guide/managing_networking.html#admin-guide-limit-pod-access-egress-router in a project that isn't allowed to run privileged containers (see the note after these steps)
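For completeness: presumably a cluster admin could clear the SCC error in problem 2 by granting the privileged SCC to the service account the controller's pods run under (the default service account unless one is set), along the lines of

  oc adm policy add-scc-to-user privileged -z default -n <project>

but that effectively allows privileged containers in the project, which is what we would like to avoid.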
Comment 1 Ben Bennett 2017-07-12 11:11:29 EDT
I'm moving this to an RFE because the goal of the egress router is for the cluster administrators to be able to limit the traffic from a project.  Self-control by the project administrator makes sense too, but is a different use-case.

Trello card https://trello.com/c/X7orBbI7
Comment 2 Dan Winship 2017-07-12 11:37:28 EDT
Sorry, I should have commented on this sooner...

> We would like to request that a feature be added to allow the router
> destination to be updated by standard users.

This is halfway possible by using a ConfigMap to specify the destination. This is described in the 3.6 docs: https://docs.openshift.org/latest/admin_guide/managing_networking.html#admin-guide-manage-pods-egress-router-configmap. The same idea of using a ConfigMap would work in earlier releases, although beware that the example in the 3.6 docs uses two other 3.6-specific features (initContainers and multi-value EGRESS_DESTINATION).
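A simplified sketch of that idea (not the exact 3.6 docs example, which also uses initContainers and a multi-value EGRESS_DESTINATION; the names and address below are made up) is a ConfigMap owned by the project:

apiVersion: v1
kind: ConfigMap
metadata:
  name: egress-routes
data:
  destination: "203.0.113.25"

with the egress-router container reading the destination from it instead of hard-coding the value:

    env:
    - name: EGRESS_DESTINATION
      valueFrom:
        configMapKeyRef:             # destination now lives in the project-editable ConfigMap
          name: egress-routes
          key: destination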

I say it's "halfway possible" because while it's possible for an unprivileged user to edit the ConfigMap, it's not possible for them to restart the egress-router pod after they do so... so we need to do something about that.

> Secondly the egress router deploys as a cluster admin fine but when deploying
> the egress router managed by a replication controller the deployment fails

This is weird... I'd swear this used to work, but I can't get it to work even in a 3.4 cluster now.
