Bug 1821276

Summary: CAM does not support running in a limited disconnected environment behind a proxy
Product: OpenShift Container Platform
Reporter: Erik Nelson <ernelson>
Component: Migration Tooling
Assignee: Erik Nelson <ernelson>
Status: CLOSED ERRATA
QA Contact: Xin jiang <xjiang>
Severity: unspecified
Priority: unspecified
Version: 4.4
CC: chezhang, dymurray, jmatthew, pvauter, sregidor, whu
Target Milestone: ---
Target Release: 4.4.0
Hardware: Unspecified
OS: Unspecified
Doc Type: If docs needed, set a value
Type: Bug
Bug Blocks: 1821282
Last Closed: 2020-05-28 11:09:56 UTC

Description Erik Nelson 2020-04-06 13:22:46 UTC
The CAM stack currently does not support running with a proxy configuration. The desired support scenarios are the following:

* On a 4.x cluster that is restricted to routing external traffic through a proxy, if a "Proxy" object is configured with HTTP_PROXY, HTTPS_PROXY, and NO_PROXY, the CAM operator should be installed via OLM, and OLM should apply these environment variables to the CAM operator. The CAM operator is responsible for propagating these variables to all of the required operands (controller, velero, and restic). Additionally, anything spawned by the operands that also needs these proxy variables must receive them, specifically the registry that the controller spawns for migrating internal cluster images. (See the Proxy sketch after this list.)

* On a 3.x cluster, the operator is normally installed manually by extracting the operator.yml and controller-3.yml files from the operator image and creating them with oc. The user must manually configure the proxy on the MigrationController CR by editing the controller-3.yml file and setting the following fields: "http_proxy", "https_proxy", and "no_proxy". (See the MigrationController sketch below.)
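
For the 4.x case, a cluster-wide Proxy object along these lines would be the source of the injected variables. This is a minimal sketch for illustration only; the proxy endpoints and noProxy entries are placeholder values, not taken from this bug:

apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
  name: cluster
spec:
  # Placeholder proxy endpoints. OLM injects these into the CAM operator,
  # which is expected to propagate them to the controller, velero, and restic.
  httpProxy: http://proxy.example.com:3128
  httpsProxy: http://proxy.example.com:3128
  # Internal cluster destinations must bypass the proxy (see the NOTE below).
  noProxy: .cluster.local,.svc,10.128.0.0/14,172.30.0.0/16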

NOTE: It is extremely important that NO_PROXY is configured correctly in both cases so that traffic destined for the internal cluster is *not* proxied.
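
For the 3.x case, the equivalent settings live on the MigrationController CR in controller-3.yml. A rough sketch, in which the apiVersion, name, namespace, and addresses are assumed placeholders and only the three field names come from the description above:

apiVersion: migration.openshift.io/v1alpha1  # assumed group/version for the MigrationController CRD
kind: MigrationController
metadata:
  name: migration-controller       # illustrative name
  namespace: openshift-migration   # illustrative namespace
spec:
  # Field names as described above; values are placeholders.
  http_proxy: http://proxy.example.com:3128
  https_proxy: http://proxy.example.com:3128
  # Keep internal cluster destinations out of the proxy path, per the NOTE above.
  no_proxy: .cluster.local,.svc,172.30.0.0/16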

Comment 1 Erik Nelson 2020-04-06 13:41:22 UTC
Upstream master PRs:
https://github.com/konveyor/mig-operator/pull/278
https://github.com/konveyor/mig-controller/pull/473
https://github.com/vmware-tanzu/velero-plugin-for-aws/pull/37 (the plan is to rebase our fork on top of this once it's accepted; for now we are adding to our fork below)
https://github.com/konveyor/velero-plugin-for-aws/pull/3

Comment 2 Erik Nelson 2020-04-06 13:47:49 UTC
4.1 does not have a "Proxy" CRD; users must manually configure their proxy via the same 3.x variables on the MigrationController.

Comment 6 Sergio 2020-05-08 17:22:00 UTC
Verified using CAM 1.2 stage.

Migrations ran without problems from a 4.2 cluster behind a proxy to a 4.3 cluster behind a proxy, using an AWS bucket.

Comment 8 errata-xmlrpc 2020-05-28 11:09:56 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:2326