Bug 1954708 - [GSS][RFE] Restrict Noobaa from creating public endpoints for Azure Private Cluster
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: Multi-Cloud Object Gateway
Version: 4.6
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: ODF 4.10.0
Assignee: Liran Mauda
QA Contact: Ben Eli
URL:
Whiteboard:
Duplicates: 2046471
Depends On: 2027439
Blocks: 2056571
 
Reported: 2021-04-28 16:13 UTC by Deepu K S
Modified: 2023-12-08 04:25 UTC
CC: 27 users

Fixed In Version: 4.10.0-118
Doc Type: Enhancement
Doc Text:
.NooBaa services update
With this update, a new `disable-load-balancer` flag is added that changes the service type from LoadBalancer to ClusterIP. This allows you to disable the NooBaa service EXTERNAL-IP.
Clone Of:
Environment:
Last Closed: 2022-04-13 18:49:40 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github noobaa noobaa-operator pull 757 0 None Merged Adding the disable-load-balancer flag that replace the type of service from LoadBalancer to ClusterIP 2022-01-25 11:15:15 UTC
Github red-hat-storage ocs-ci pull 6532 0 None Merged closed loop 1954708: Validate noobaa disableLoadBalancerService 2022-10-10 05:09:56 UTC
Red Hat Knowledge Base (Article) 6970745 0 None None None 2022-08-22 08:26:47 UTC
Red Hat Knowledge Base (Solution) 6615091 0 None None None 2022-08-19 10:38:23 UTC
Red Hat Product Errata RHSA-2022:1372 0 None None None 2022-04-13 18:50:11 UTC

Description Deepu K S 2021-04-28 16:13:03 UTC
Description of problem (please be as detailed as possible and provide log
snippets):
OCS installation creates public IPs even when OCP is installed as a private cluster on Azure.

NAME                       TYPE           CLUSTER-IP       EXTERNAL-IP                                                              PORT(S)                                                    AGE
noobaa-mgmt                LoadBalancer   172.30.94.221    <External address>   80:31096/TCP,443:32117/TCP,8445:31852/TCP,8446:30608/TCP   179m
s3                         LoadBalancer   172.30.147.102   <External address>  80:31111/TCP,443:31831/TCP,8444:32682/TCP                  179m

Version of all relevant components (if applicable):
OCS 4.x

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?
The OCP documentation states that no public resources will be created by the install:
https://docs.openshift.com/container-platform/4.6/installing/installing_azure/installing-azure-private.html#private-clusters-about-azure_installing-azure-private
OCP adheres to this, but OCS creates public resources for NooBaa.

Is there any workaround available to the best of your knowledge?
-> Use an Azure internal load balancer:
https://access.redhat.com/solutions/4824111
-> Change the svc type from LoadBalancer to ClusterIP, though this may affect NooBaa's operation.
-> Use Azure network ACLs to prevent the public IPs from being reachable.
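The internal-load-balancer workaround generally amounts to annotating the services so the Azure cloud provider provisions an internal LB. A sketch, assuming the standard Azure cloud-provider annotation (note that, as comment 26 later shows for IBM ROKS, the annotation must be in place before the load balancer is created; it cannot convert an existing public LB):

```shell
# Sketch of the internal-LB workaround using the standard Kubernetes
# Azure cloud-provider annotation; apply to both NooBaa services.
oc annotate svc noobaa-mgmt -n openshift-storage \
  service.beta.kubernetes.io/azure-load-balancer-internal="true"
oc annotate svc s3 -n openshift-storage \
  service.beta.kubernetes.io/azure-load-balancer-internal="true"
```

Because the operator reconciles these services, the annotation may also need to be set via the operator's service configuration rather than directly, depending on the OCS version.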

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
3

Is this issue reproducible?
Yes.

Can this issue be reproduced from the UI?
Yes.

If this is a regression, please provide more details to justify this:
No. Seems to be the same behaviour as older releases.

Steps to Reproduce:
1. Install OCP in Private mode on Azure Platform.
https://docs.openshift.com/container-platform/4.6/installing/installing_azure/installing-azure-private.html
2. Install OCS on it.
3. Check the s3 and noobaa-mgmt endpoints.
# oc get svc -n openshift-storage


Actual results:
Public IPs are created, which is unexpected and unwanted in internal clusters.

Expected results:
NooBaa should be restricted from creating any public resources in private clusters.

Additional info:

Comment 4 Nimrod Becker 2021-04-29 06:54:11 UTC
@etamir FYI

Comment 23 Sahina Bose 2021-07-06 08:24:38 UTC
Issue is also seen on IBM ROKS. Adding @akgunjal.com.

Comment 26 Shirisha S Rao 2021-07-12 14:26:10 UTC
Hi, we're facing the same issue on IBM ROKS.

We tried both of the suggested workarounds:

1. Annotate the LB
     This wasn't possible, as it failed with:

Warning  CreatingCloudLoadBalancerFailed  3s                   ibm-cloud-provider  Error on cloud load balancer kube-c2jpf1n20k1p2v6es490-9b45719fc38045b4b9d7fc13326614c4 for service openshift-storage/noobaa-mgmt with UID 9b45719f-c380-45b4-b9d7-fc13326614c4: Failed ensuring LoadBalancer: UpdateLoadBalancer failed: The load balancer was created as a public load balancer. This setting can not be changed

2. Create an egress firewall:
     Couldn't create it, as the link provided only applies when OpenShift SDN is used, but IBM ROKS uses Calico SDN.
     Also, this is only a policy that can be used to control the traffic.

What would be the solution for having LBs created with private IPs when it's a private cluster?

Comment 27 Sahina Bose 2021-07-13 03:43:03 UTC
Nimrod, any suggestions? Do you want a separate bug for IBM ROKS to track this?

Comment 47 ghurel 2021-12-10 13:35:10 UTC
Issue is also seen on VMware IPI install (version 4.8) .

Comment 59 Nimrod Becker 2022-02-14 08:41:07 UTC
Created bug 2054120 for the backport.

Comment 71 Nimrod Becker 2022-04-03 07:34:29 UTC
*** Bug 2046471 has been marked as a duplicate of this bug. ***

Comment 75 errata-xmlrpc 2022-04-13 18:49:40 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.10.0 enhancement, security & bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:1372

Comment 79 Red Hat Bugzilla 2023-12-08 04:25:23 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days

