Description of problem (please be as detailed as possible and provide log snippets):

OCS installation creates public IPs even when OCP is installed as a private cluster on Azure.

NAME          TYPE           CLUSTER-IP       EXTERNAL-IP          PORT(S)                                                    AGE
noobaa-mgmt   LoadBalancer   172.30.94.221    <External address>   80:31096/TCP,443:32117/TCP,8445:31852/TCP,8446:30608/TCP   179m
s3            LoadBalancer   172.30.147.102   <External address>   80:31111/TCP,443:31831/TCP,8444:32682/TCP                  179m

Version of all relevant components (if applicable):
OCS 4.x

Does this issue impact your ability to continue to work with the product (please explain in detail what is the user impact)?
The OCP documentation states that no public resources are created by a private-cluster install:
https://docs.openshift.com/container-platform/4.6/installing/installing_azure/installing-azure-private.html#private-clusters-about-azure_installing-azure-private
OCP adheres to this, but OCS creates public resources for NooBaa.

Is there any workaround available to the best of your knowledge?
-> Use an Azure internal load balancer: https://access.redhat.com/solutions/4824111
-> Change the svc type from LoadBalancer to ClusterIP, though this may affect how NooBaa works.
-> Restrict access with Azure network ACLs so the public IPs are not reachable.

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)?
3

Is this issue reproducible?
Yes.

Can this issue be reproduced from the UI?
Yes.

If this is a regression, please provide more details to justify this:
No. It appears to be the same behaviour as in older releases.

Steps to Reproduce:
1. Install OCP in private mode on the Azure platform:
   https://docs.openshift.com/container-platform/4.6/installing/installing_azure/installing-azure-private.html
2. Install OCS on it.
3. Check the s3 and noobaa-mgmt endpoints:
   # oc get svc -n openshift-storage

Actual results:
Public IPs are created, which is unexpected and unwanted in internal clusters.
Expected results:
NooBaa should not create any public resources on private clusters.

Additional info:
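For reference, the first workaround (the linked Red Hat solution) relies on the standard Azure cloud-provider annotation that makes the controller provision an internal load balancer instead of a public one. A minimal sketch of the annotated Service, assuming the annotation is present before the load balancer is provisioned and that the NooBaa operator does not revert it:

```yaml
# Sketch only: annotate the noobaa-mgmt Service so the Azure cloud
# provider creates an internal (private) load balancer.
# On some providers this is only honored at Service creation time,
# and operator-managed Services may be reconciled back.
apiVersion: v1
kind: Service
metadata:
  name: noobaa-mgmt
  namespace: openshift-storage
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
```

The same annotation could be applied in place with `oc annotate svc noobaa-mgmt -n openshift-storage service.beta.kubernetes.io/azure-load-balancer-internal=true` (and likewise for the s3 Service), with the caveat that the operator may overwrite manual edits.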
@etamir FYI
Issue is also seen on IBM ROKS. Adding @akgunjal.com.
Hi, we're facing the same issue on IBM ROKS. We tried both of the suggested workarounds:

1. Annotate the LB. This wasn't possible; it failed with:
   Warning CreatingCloudLoadBalancerFailed 3s ibm-cloud-provider Error on cloud load balancer kube-c2jpf1n20k1p2v6es490-9b45719fc38045b4b9d7fc13326614c4 for service openshift-storage/noobaa-mgmt with UID 9b45719f-c380-45b4-b9d7-fc13326614c4: Failed ensuring LoadBalancer: UpdateLoadBalancer failed: The load balancer was created as a public load balancer. This setting can not be changed

2. Create an egress firewall. We couldn't create it because the linked procedure only applies when OpenShift SDN is used, but IBM ROKS uses the Calico SDN. Also, this is only a policy for controlling traffic, not a way to avoid creating the public IPs.

What would be the solution so that LBs are created with private IPs when the cluster is private?
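The error above is consistent with the IBM cloud provider only honoring the public/private choice when the load balancer is first created. A hedged sketch, assuming the IBM Cloud provider's `ibm-load-balancer-cloud-provider-ip-type` annotation applies here and that deleting and recreating the operator-managed Service is acceptable for NooBaa (both assumptions, not verified on this cluster):

```yaml
# Sketch only: on IBM ROKS the public/private setting cannot be changed
# after creation, so the Service would need to be recreated with the
# annotation already set.
apiVersion: v1
kind: Service
metadata:
  name: noobaa-mgmt
  namespace: openshift-storage
  annotations:
    service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: "private"
spec:
  type: LoadBalancer
```

Even then, the NooBaa operator may reconcile the Service back to its original definition, so a supported fix likely needs to come from the operator itself.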
Nimrod, any suggestions? Do you want a separate bug for IBM ROKS to track this?
Issue is also seen on a VMware IPI install (version 4.8).
Created 2054120 for the backport
*** Bug 2046471 has been marked as a duplicate of this bug. ***
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.10.0 enhancement, security & bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2022:1372
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days