Bug 1567122 - [RFE] Multiple projects to use same static egress ip to outside [NEEDINFO]
Summary: [RFE] Multiple projects to use same static egress ip to outside
Keywords:
Status: CLOSED DEFERRED
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: RFE
Version: 3.9.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Marc Curry
QA Contact: Xiaoli Tian
URL:
Whiteboard:
Duplicates: 1652648 (view as bug list)
Depends On:
Blocks:
 
Reported: 2018-04-13 13:02 UTC by Dmitry Zhukovski
Modified: 2021-01-07 09:49 UTC (History)
CC List: 19 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-01-07 09:49:33 UTC
Target Upstream Version:
shsaxena: needinfo? (mcurry)


Attachments

Description Dmitry Zhukovski 2018-04-13 13:02:46 UTC
1. Proposed title of this feature request
[RFE] Multiple projects to use same static egress ip to outside

3. What is the nature and description of the request?
The number of IPv4 addresses available in companies is usually limited. Currently, egress IPs work such that a single OpenShift project uses a single egress IP. The problem is that if multiple teams want to share the same static egress IP (to conserve IPv4 addresses), today we must place all of their resources in the same project. That leaves us with one project holding a huge number of pods and configurations.

Instead of one huge project, we were wondering why we could not share that single static IP across multiple projects. The projects would then use the same egress node and SNAT outbound traffic from the same address.
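In OpenShift 3.x terms, the request amounts to allowing the same address to appear in the egressIPs list of more than one NetNamespace. A hypothetical sketch of the desired state (project names and the address are made up; current releases reject duplicate egress IPs across NetNamespaces):

```yaml
# Hypothetical desired state: two projects sharing one egress IP.
# OpenShift 3.9 rejects this, because duplicate egress IPs across
# NetNamespaces fail the SDN controller's validation.
apiVersion: network.openshift.io/v1
kind: NetNamespace
metadata:
  name: team-a
netname: team-a
egressIPs:
  - 192.0.2.10
---
apiVersion: network.openshift.io/v1
kind: NetNamespace
metadata:
  name: team-b
netname: team-b
egressIPs:
  - 192.0.2.10   # same address as team-a: the behavior this RFE asks for
```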

4. Why does the customer need this? (List the business requirements here)
To reduce the number of allocated IPs for egress traffic.

5. How would the customer like to achieve this? (List the functional requirements here)

Kubernetes is missing this feature completely. There is an open issue at https://github.com/kubernetes/kubernetes/issues/62124, but it looks like no one is interested.

Maybe it is something like https://github.com/openshift/origin/blob/master/pkg/network/node/egressip.go#L330 -- perhaps the packet mark there should be the same in all projects (right now it differs because it uses the VNID). It was probably like that before, but then there was the problem where odd egress IPs did not work.

6. For each functional requirement listed, specify how Red Hat and the customer can test to confirm the requirement is successfully implemented.

Assign several projects to the same egress IP and verify that outbound traffic from all of them leaves with that source address.
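A minimal verification sketch, assuming hypothetical project names team-a/team-b, a node named node1 hosting the egress IP, and an external echo service at a made-up URL. These commands require a live cluster and, as of today, the second assignment would simply be rejected:

```shell
# Assign the same egress IP to two NetNamespaces (hypothetical names/address).
oc patch netnamespace team-a --type=merge -p '{"egressIPs": ["192.0.2.10"]}'
oc patch netnamespace team-b --type=merge -p '{"egressIPs": ["192.0.2.10"]}'

# Allow the egress node to host that IP.
oc patch hostsubnet node1 --type=merge -p '{"egressIPs": ["192.0.2.10"]}'

# From a pod in each project, confirm the external source address matches.
oc -n team-a rsh deploy/app curl -s https://ip-echo.example.com/
oc -n team-b rsh deploy/app curl -s https://ip-echo.example.com/
```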

7. Is there already an existing RFE upstream or in Red Hat Bugzilla?
no

10. List any affected packages or components.
openshift / egress

Comment 3 Dan Winship 2018-05-23 15:45:51 UTC
(In reply to Dmitry Zhukovski from comment #0)
> May be it is something like
> https://github.com/openshift/origin/blob/master/pkg/network/node/egressip.
> go#L330 mark packet should here be the same in all projects (now its
> different because its using vnid there). It was probably like that before,
> but then there was that problem where odd egress ips did not work.

No, the code already checks before that point that no NetNamespaces share the same egress IP.

We could change this easily enough, but the reason for the existing check is to make sure that admins didn't *accidentally* assign the same egress IP to two namespaces.

> the number of ipv4 addresses are usually not enough in companies.

Indeed. The real problem here is that we're trying to use 1990s technology (access restrictions based on IP addresses) in a 2010s cloud computing environment. There needs to be a better solution for authenticating pods to the network...

Comment 8 Dan Winship 2018-11-26 16:25:40 UTC
*** Bug 1652648 has been marked as a duplicate of this bug. ***

Comment 14 Marc Curry 2019-04-17 18:55:02 UTC
This is a reasonable request, but we won't realistically have the ability to work on this in the near future. With the major refactoring that happened with OpenShift 4, this is unlikely to be prioritized before Q1CY2020. We can consider re-addressing it at that time.

