Bug 1876725 - [RFE] Ability to configure the proxy as ENV var for RHACM.
Summary: [RFE] Ability to configure the proxy as ENV var for RHACM.
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Advanced Cluster Management for Kubernetes
Classification: Red Hat
Component: Cluster Lifecycle
Version: rhacm-2.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: rhacm-2.3
Assignee: Scott Berens
QA Contact: magchen@redhat.com
Docs Contact: Christopher Dawson
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-09-08 04:10 UTC by Nikhil Gupta
Modified: 2024-12-20 19:14 UTC
CC: 18 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-09-01 14:05:31 UTC
Target Upstream Version:
Embargoed:
sberens: needinfo-


Attachments: None


Links
Github open-cluster-management backlog issues 8718 (Last Updated: 2021-02-22 14:37:40 UTC)

Description Nikhil Gupta 2020-09-08 04:10:49 UTC
Description of feature:

The goal is to have an environment variable set within RHACM that allows cluster provisioning through the proxy.

Comment 5 Scott Berens 2021-01-19 20:24:25 UTC
This is in planning for RHACM 2.3 release: https://issues.redhat.com/browse/ACM-559

Comment 7 Chris Doan 2021-02-04 22:22:55 UTC
One way to accomplish this is to deploy the podpreset webhook controller (https://www.openshift.com/blog/a-podpreset-based-webhook-admission-controller). 

These steps assume that you have already configured a cluster-wide HTTP proxy in OpenShift.

STEP 1: Follow the six steps listed in the blog post to deploy the webhook controller.
STEP 2: Define the following PodPreset configuration.

apiVersion: redhatcop.redhat.io/v1alpha1
kind: PodPreset
metadata:
  name: hive-job-provision
spec:
  env:
  - name: HTTP_PROXY
    value: "http://[fe2e:6f44:51d8::134]:3128"
  - name: HTTPS_PROXY
    value: "http://[fe2e:6f44:51d8::134]:3128"
  - name: NO_PROXY
    value: ".cluster.local,.test.example.com,.svc,127.0.0.1,api-int.test.example.com,etcd-0.test.example.com,etcd-1.test.example.com,etcd-2.test.example.com,fe00:1201::/64,fe01::/48,fe02::/112,fe2e:6f44:51d8::/64,localhost"
  selector:
    matchLabels:
      hive.openshift.io/job-type: provision

You can get the proxy settings by querying your cluster proxy configuration, for example: oc get proxy cluster -n openshift-config -o yaml (the Proxy resource is cluster-scoped, so the -n flag is not strictly required).
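For reference, the returned Proxy resource has roughly the following shape; the host names and values below are illustrative placeholders, not output from a real cluster:

apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
  name: cluster
spec:
  httpProxy: http://proxy.example.com:3128
  httpsProxy: http://proxy.example.com:3128
  noProxy: .cluster.local,.svc,localhost
status:
  httpProxy: http://proxy.example.com:3128
  httpsProxy: http://proxy.example.com:3128
  noProxy: .cluster.local,.svc,127.0.0.1,localhost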



NOTE: We are matching the pod label hive.openshift.io/job-type: provision, which should select only the Hive provisioning jobs. A second PodPreset is needed for deprovision as well; a minimal sketch follows below.
With this in place, you will be able to reach public cloud providers through the HTTP proxy server and provision a cluster.
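That deprovision PodPreset differs from the one above only in its name and selector, shown here with the same illustrative proxy values:

apiVersion: redhatcop.redhat.io/v1alpha1
kind: PodPreset
metadata:
  name: hive-job-deprovision
spec:
  env:
  - name: HTTP_PROXY
    value: "http://[fe2e:6f44:51d8::134]:3128"
  - name: HTTPS_PROXY
    value: "http://[fe2e:6f44:51d8::134]:3128"
  - name: NO_PROXY
    value: ".cluster.local,.test.example.com,.svc,127.0.0.1,api-int.test.example.com,etcd-0.test.example.com,etcd-1.test.example.com,etcd-2.test.example.com,fe00:1201::/64,fe01::/48,fe02::/112,fe2e:6f44:51d8::/64,localhost"
  selector:
    matchLabels:
      hive.openshift.io/job-type: deprovision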

NOTE: These steps create a network path from the hub to the cloud provider (and eventually the spoke) through the HTTP proxy server. In order to have the provisioned cluster imported into RHACM post-provision, a network path has to be established in the opposite direction.

Comment 9 Filipe 2021-03-09 22:18:20 UTC
Hello,

I have tested the procedure and I was able to deploy a cluster on Azure. 
Here is the procedure I followed:

## Prerequisites:
### Cert-Manager

Step 1: Deploy Cert-Manager.
I used the Operator Hub to deploy it.
Once the operator is deployed, you will need to create an instance. Navigate to Installed Operators and select Cert-Manager. Click on Create Instance, give it a name, and click Create.

Step 2: Create the ClusterIssuer
Create a file called cluster-issuer.yaml with the following content:

apiVersion: cert-manager.io/v1alpha3
kind: ClusterIssuer
metadata:
  name: selfsigned-issuer
spec:
  selfSigned: {}

Step 3: Deploy the ClusterIssuer 
To deploy the manifest, run the following command:
oc apply -f cluster-issuer.yaml

You may confirm the installation of the issuer using the following command:
oc get clusterissuer.cert-manager.io

### Go-lang
Go must be installed on your workstation. Please refer to the official documentation: https://go.dev/learn/
Ensure your GOPATH is exported.
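A typical way to do that, assuming the default Go workspace location (adjust the path for your environment):

export GOPATH=$HOME/go
export PATH=$PATH:$GOPATH/bin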

## Podpreset-Webhook
At this point, you are ready to complete the second phase: deploying the podpreset webhook.

Step 1: Clone the repository:
git clone https://github.com/redhat-cop/podpreset-webhook
cd podpreset-webhook

Step 2: Install podpreset-webhook within the cluster using the following command:
make deploy IMG=quay.io/redhat-cop/podpreset-webhook:latest
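You can confirm the controller came up by looking for its pod; the namespace name depends on the manifests in the repo, so this cluster-wide search is just a sketch:

oc get pods --all-namespaces | grep podpreset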

Step 3: Retrieve the cluster proxy configuration using the following command:
oc get proxy cluster -n openshift-config -o yaml

Step 4: Define the following PodPreset configuration using your cluster proxy settings.

apiVersion: redhatcop.redhat.io/v1alpha1
kind: PodPreset
metadata:
  name: hive-job-provision
spec:
  env:
  - name: HTTP_PROXY
    value: "http://[fe2e:6f44:51d8::134]:3128"
  - name: HTTPS_PROXY
    value: "http://[fe2e:6f44:51d8::134]:3128"
  - name: NO_PROXY
    value: ".cluster.local,.test.example.com,.svc,127.0.0.1,api-int.test.example.com,etcd-0.test.example.com,etcd-1.test.example.com,etcd-2.test.example.com,fe00:1201::/64,fe01::/48,fe02::/112,fe2e:6f44:51d8::/64,localhost"
  selector:
    matchLabels:
      hive.openshift.io/job-type: provision
---
apiVersion: redhatcop.redhat.io/v1alpha1
kind: PodPreset
metadata:
  name: hive-job-deprovision
spec:
  env:
  - name: HTTP_PROXY
    value: "http://[fe2e:6f44:51d8::134]:3128"
  - name: HTTPS_PROXY
    value: "http://[fe2e:6f44:51d8::134]:3128"
  - name: NO_PROXY
    value: ".cluster.local,.test.example.com,.svc,127.0.0.1,api-int.test.example.com,etcd-0.test.example.com,etcd-1.test.example.com,etcd-2.test.example.com,fe00:1201::/64,fe01::/48,fe02::/112,fe2e:6f44:51d8::/64,localhost"
  selector:
    matchLabels:
      hive.openshift.io/job-type: deprovision

This configuration adds the environment variables HTTP_PROXY, HTTPS_PROXY, and NO_PROXY to the pods created by RHACM, which allows the pods to reach the Internet, communicate with the cloud provider API, and create the desired resources.
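To verify that the injection works, you can inspect the environment of a running provision pod; <cluster-namespace> below is a placeholder for the namespace of the cluster being provisioned:

oc get pods -n <cluster-namespace> -l hive.openshift.io/job-type=provision \
  -o jsonpath='{.items[*].spec.containers[*].env}'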

Comment 15 david.gabrysch 2021-03-26 07:50:33 UTC
I would really like to use fisantos' solution, but the mentioned cert-manager operator does not know this type of CR (1.1.0 provided by Jetstack is what I get from OperatorHub).

So far we are able, for example, to get the provisioned cluster's namespace, but the install pod shows this problem in the logs:
time="2021-03-25T12:33:48Z" level=info msg="Obtaining RHCOS image file from 'https://releases-art-rhcos.svc.ci.openshift.org/art/storage/releases/rhcos-4.7/47.83.202102090044-0/x86_64/rhcos-47.83.202102090044-0-vmware.x86_64.ova?sha256=13d92692b8eed717ff8d0d113a24add339a65ef1f12eceeb99dabcd922cc86d1'"
time="2021-03-25T12:33:48Z" level=fatal msg="failed to fetch Terraform Variables: failed to generate asset \"Terraform Variables\": failed to get vsphere Terraform variables: failed to use cached vsphere image: Get \"https://releases-art-rhcos.svc.ci.openshift.org/art/storage/releases/rhcos-4.7/47.83.202102090044-0/x86_64/rhcos-47.83.202102090044-0-vmware.x86_64.ova?sha256=13d92692b8eed717ff8d0d113a24add339a65ef1f12eceeb99dabcd922cc86d1\": dial tcp: lookup releases-art-rhcos.svc.ci.openshift.org on xxx.xx.xx.xx:53: no such host"

I know the reason is obvious (we cannot use our proxy here), but is there maybe a config flag to at least use a locally mirrored OVA image? Or a way to change the domain we pull the images from (releases-art-rhcos.svc.ci.openshift.org)?

Kind regards,

David

Comment 17 Andrea Cervesato 2021-04-06 12:21:06 UTC
(In reply to david.gabrysch from comment #15)
> I would really like to use fisantos' solution, but the mentioned
> cert-manager operator does not know this type of CR (1.1.0 provided by
> Jetstack is what I get from OperatorHub)
> [...]

Hi David, you need to deploy cert-manager first. You are probably missing the instance.

Comment 18 david.gabrysch 2021-04-08 10:49:37 UTC
Hi Andrea,

this worked fine, thank you :)

While this workaround worked well, I have a new issue: the bootstrap node does not know anything about our proxy. Unfortunately, changing the install-config.yaml in RHACM to provide a proxy leads to a non-existent hive pod. But for this I have already opened a ticket with your support :)

Comment 19 Andrea Cervesato 2021-04-08 17:49:35 UTC
(In reply to david.gabrysch from comment #18)
> [...] unfortunately, changing the install-config.yaml in RHACM to provide a
> proxy leads to a non-existent hive pod.

Hi David!
I've just tested this and, as you said, the "doctoring" of the install-config via the web UI doesn't work.

I've tested doing it via shell and it works like a charm.

```yaml
apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: null
  name: CLUSTER_NAME
spec: {}
status: {}
---
apiVersion: hive.openshift.io/v1
kind: ClusterDeployment
metadata:
  name: CLUSTER_NAME
  namespace: CLUSTER_NAME
  labels:
    cloud: 'vSphere'
    vendor: 'OpenShift'
spec:
  baseDomain: <REDACTED>
  clusterName: CLUSTER_NAME
  controlPlaneConfig:
    servingCertificates: {}
  installAttemptsLimit: 2
  installed: false
  platform:
    vsphere:
      cluster: OPENSHIFT
      certificatesSecretRef:
        name: CLUSTER_NAME-vsphere-certs
      credentialsSecretRef:
        name: CLUSTER_NAME-vsphere-creds
      vCenter: <REDACTED>
      datacenter: <REDACTED>
      defaultDatastore: <REDACTED>
      network: <REDACTED>
      folder: <REDACTED>
  provisioning:
    installConfigSecretRef:
      name: CLUSTER_NAME-install-config
    sshPrivateKeySecretRef:
      name: CLUSTER_NAME-ssh-private-key
    imageSetRef:
       #quay.io/openshift-release-dev/ocp-release:4.6.23-x86_64
      name: img4.6.23-x86-64-appsub
  pullSecretRef:
    name: CLUSTER_NAME-pull-secret
---
apiVersion: cluster.open-cluster-management.io/v1
kind: ManagedCluster
metadata:
  labels:
    cloud: vSphere
    name: CLUSTER_NAME
    vendor: OpenShift
  name: CLUSTER_NAME
spec:
  hubAcceptsClient: true
---
apiVersion: hive.openshift.io/v1
kind: MachinePool
metadata:
  name: CLUSTER_NAME-worker
  namespace: CLUSTER_NAME
spec:
  clusterDeploymentRef:
    name: CLUSTER_NAME
  name: worker
  platform:
    vsphere:
      coresPerSocket: 1
      cpus: 16
      memoryMB: 32768
      osDisk:
        diskSizeGB: 150
  replicas: 3
---
apiVersion: v1
kind: Secret
metadata:
  name: CLUSTER_NAME-pull-secret
  namespace: CLUSTER_NAME
stringData:
  .dockerconfigjson: '<REDACTED>'
type: kubernetes.io/dockerconfigjson
---
apiVersion: v1
kind: Secret
metadata:
  name: CLUSTER_NAME-install-config
  namespace: CLUSTER_NAME
type: Opaque
stringData:
  # Plain-text install-config YAML (stringData is base64-encoded by Kubernetes automatically)
  install-config.yaml: |
    apiVersion: v1
    metadata:
      name: CLUSTER_NAME
    additionalTrustBundle: |
      -----BEGIN CERTIFICATE-----
      <REDACTED>
      -----END CERTIFICATE-----
    proxy:
      httpProxy: <REDACTED>
      httpsProxy: <REDACTED>
      noProxy: <REDACTED>
    baseDomain: <REDACTED>
    controlPlane:
      hyperthreading: Enabled
      name: master
      replicas: 3
      platform:
        vsphere:
          cpus:  8
          coresPerSocket:  1
          memoryMB:  16384
          osDisk:
            diskSizeGB: 120
    compute:
    - hyperthreading: Enabled
      name: worker
      replicas: 3
      platform:
        vsphere:
          cpus:  16
          coresPerSocket:  1
          memoryMB:  32768
          osDisk:
            diskSizeGB: 150
    platform:
      vsphere:
        vCenter: <REDACTED>
        username: <REDACTED>
        password: <REDACTED>
        datacenter: <REDACTED>
        defaultDatastore: <REDACTED>
        cluster: <REDACTED>
        apiVIP: <REDACTED>
        ingressVIP: <REDACTED>
        network: <REDACTED>
        folder: <REDACTED>
    pullSecret: "" # skip, hive will inject based on its secrets
    sshKey: <REDACTED>
---
apiVersion: v1
kind: Secret
metadata:
  name: CLUSTER_NAME-ssh-private-key
  namespace: CLUSTER_NAME
stringData:
  ssh-privatekey: |
    -----BEGIN OPENSSH PRIVATE KEY-----
    <REDACTED>
    -----END OPENSSH PRIVATE KEY-----
type: Opaque
---
apiVersion: v1
kind: Secret
type: Opaque
metadata:   
  name: CLUSTER_NAME-vsphere-creds
  namespace: CLUSTER_NAME
stringData:
  username: <REDACTED>
  password: <REDACTED>
---
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: CLUSTER_NAME-vsphere-certs
  namespace: CLUSTER_NAME
stringData:
  .cacert: |
    -----BEGIN CERTIFICATE-----
    <NEVER ENDING LIST OF CERTIFICATES>
    -----END X509 CRL-----
---
apiVersion: agent.open-cluster-management.io/v1
kind: KlusterletAddonConfig
metadata:
  name: CLUSTER_NAME
  namespace: CLUSTER_NAME
spec:
  clusterName: CLUSTER_NAME
  clusterNamespace: CLUSTER_NAME
  clusterLabels:
    cloud: vSphere
    vendor: OpenShift
  applicationManager:
    enabled: true
  policyController:
    enabled: true
  searchCollector:
    enabled: true
  certPolicyController:
    enabled: true
  iamPolicyController:
    enabled: true
  version: 2.2.0
---
apiVersion: redhatcop.redhat.io/v1alpha1
kind: PodPreset
metadata:
  name: hive-job-provision
  namespace: CLUSTER_NAME
spec:
  env:
  - name: HTTP_PROXY
    value: "<REDACTED>"
  - name: HTTPS_PROXY
    value: "<REDACTED>"
  - name: NO_PROXY
    value: "<REDACTED>"
  selector:
    matchLabels:
      hive.openshift.io/job-type: provision
---
apiVersion: redhatcop.redhat.io/v1alpha1
kind: PodPreset
metadata:
  name: hive-job-deprovision
  namespace: CLUSTER_NAME
spec:
  env:
  - name: HTTP_PROXY
    value: "<REDACTED>"
  - name: HTTPS_PROXY
    value: "<REDACTED>"
  - name: NO_PROXY
    value: "<REDACTED>"
  selector:
    matchLabels:
      hive.openshift.io/job-type: deprovision

```
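If you reuse this template, one convenient way to apply it is to substitute the CLUSTER_NAME placeholder and pipe the result to oc; the file name manifests.yaml and the cluster name mycluster are illustrative:

```shell
sed 's/CLUSTER_NAME/mycluster/g' manifests.yaml | oc apply -f -
```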

Comment 20 david.gabrysch 2021-04-09 06:00:12 UTC
Hi Andrea,

OK, at least you see what I see. Are those YAMLs above all I need? If yes, then I can go on further with my tests without using the UI.

Again many thanks!

Kind regards,

David

Comment 21 Andrea Cervesato 2021-04-09 07:02:49 UTC
(In reply to david.gabrysch from comment #20)
> Are those YAMLs above all I need? If yes, then I can go on further with my
> tests without using the UI.

It is!

Let us know the results.

Comment 24 Chuck Brant 2021-06-02 16:57:22 UTC
Any update on this issue?

Comment 33 Red Hat Bugzilla 2023-09-15 00:47:39 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 500 days

