Description of problem (please be as detailed as possible and provide log snippets):

As part of running `test_multi_region`, we found a failure to communicate with backingstores that don't reside on us-east-2. For example, a backingstore with a target bucket on us-west-2 is stuck in the Creating phase with this TemporaryError:
"CheckExternalConnection Status=TIMEOUT Error=OperationTimeout Message=Operation timeout"
The HTTP_PROXY, HTTPS_PROXY and NO_PROXY env vars are set cluster-wide, and are available inside the NooBaa pods.

Version of all relevant components (if applicable):
OCS 4.5.0-494.ci
OCP 4.4.12
noobaa-operator: mcg-operator@sha256:3cfea8d7a75aaf44f94675b3b848409e42252bdee6fb2ec5c40fdc1cd5f44615
noobaa_core: mcg-core@sha256:049ba73bee755ebb4503c4beab834f0b95f67df3376257e875a24584866eca0e
noobaa_db: mongodb-36-rhel7@sha256:3292de73cb0cd935cb20118bd86e9ddbe8ff8c0a8d171cf712b46a0dbd54e169

Does this issue impact your ability to continue to work with the product (please explain in detail what is the user impact)?
Yes, I cannot create backingstores that have to be communicated with through the proxy.

Is there any workaround available to the best of your knowledge?
No

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)?
2

Is this issue reproducible?
Yes

Can this issue be reproduced from the UI?
Yes

If this is a regression, please provide more details to justify this:
It is not - this is the first time we're testing proxied clusters.

Steps to Reproduce:
1. Deploy a new OCP+OCS cluster on AWS, and note which region it is in
2. Set up a cluster-wide proxy (Daniel Horák is the one who set it up in my case)
3. Try to create a backingstore that uses a target bucket residing in a different region than your cluster (see the sketch after this report)

Actual results:
The backingstore is stuck on "Creating"

Expected results:
The backingstore is verified and created successfully

Additional info:
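For reference, here is a minimal sketch of the kind of cross-region connectivity check that times out here, assuming the AWS SDK for JavaScript (v2) and the https-proxy-agent package. The bucket name and regions are illustrative, and this is not NooBaa's actual CheckExternalConnection code:

```typescript
// Hypothetical reproduction of the timeout: an S3 call to a bucket in a
// different region than the cluster, made from behind a cluster-wide proxy.
import AWS from 'aws-sdk';
import { HttpsProxyAgent } from 'https-proxy-agent';

const proxy = process.env.HTTPS_PROXY || process.env.HTTP_PROXY;

const s3 = new AWS.S3({
    region: 'us-west-2', // the cluster itself runs on us-east-2
    // If no proxy-aware agent is wired in, the request can only succeed when
    // the endpoint is directly reachable; behind the proxy it hangs until the
    // operation timeout fires, which matches the TemporaryError above.
    httpOptions: proxy ? { agent: new HttpsProxyAgent(proxy) } : undefined,
});

// headBucket is a cheap way to verify credentials and reachability.
s3.headBucket({ Bucket: 'my-target-bucket' }).promise()
    .then(() => console.log('connection OK'))
    .catch((err) => console.error('CheckExternalConnection would fail:', err.code));
```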
@Ben Can you please attach an OCS must-gather on this BZ?
We do not collect must-gather upon setup/teardown failures at the moment, and thus we do not have one. I'll reproduce the bug, run the must-gather command manually, and attach the logs once I have them.
It is also worth providing the proxy configuration.
Some support for the proxy env variables was missing in the noobaa-core codebase. A PR with a fix was issued on the upstream project.
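For context, here is a minimal sketch of the general pattern such a fix follows: honoring HTTP_PROXY/HTTPS_PROXY and the NO_PROXY exclusion list when building the outbound agent. This shows the generic technique, not the actual upstream change; `agentFor` and `shouldBypassProxy` are hypothetical names:

```typescript
// Generic proxy-env handling sketch; not the actual noobaa-core fix.
import { Agent } from 'http';
import { HttpsProxyAgent } from 'https-proxy-agent';

// NO_PROXY is a comma-separated list of hosts/domain suffixes to bypass.
function shouldBypassProxy(hostname: string): boolean {
    const noProxy = process.env.NO_PROXY || process.env.no_proxy || '';
    return noProxy.split(',')
        .map((entry) => entry.trim())
        .filter(Boolean)
        .some((entry) => {
            const suffix = entry.startsWith('.') ? entry : '.' + entry;
            return hostname === entry || hostname.endsWith(suffix);
        });
}

// Returns a proxy agent for the endpoint, or undefined for a direct connection.
export function agentFor(endpoint: string): Agent | undefined {
    const proxy = process.env.HTTPS_PROXY || process.env.https_proxy ||
                  process.env.HTTP_PROXY || process.env.http_proxy;
    const { hostname } = new URL(endpoint);
    if (!proxy || shouldBypassProxy(hostname)) return undefined;
    return new HttpsProxyAgent(proxy);
}
```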
Hi, we were missing a minor change in noobaa-core which did not allow this BZ to be verified and resulted in the opening of BZ #871408. The issue is resolved and documented in the mentioned BZ and fixed with upstream PR https://github.com/noobaa/noobaa-core/pull/6141
@Ohad you probably meant bugzilla #1871408
@Frederic you are correct. Thank you for spotting the error.
The cluster is on us-east-2. I created two backingstores - one on us-east-2 and one on us-west-1 - then created a bucketclass that uses both, and an OBC that uses the bucketclass. The OBC was healthy and both backingstores reached Ready. Verified on 4.5.0-64.ci.
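For anyone re-running this verification programmatically, here is a hedged sketch that polls the BackingStore phases, assuming the @kubernetes/client-node package (pre-1.0 positional API); the backingstore names and namespace are illustrative:

```typescript
// Check the BackingStore phases; we expect "Ready" rather than "Creating".
import * as k8s from '@kubernetes/client-node';

const kc = new k8s.KubeConfig();
kc.loadFromDefault();
const api = kc.makeApiClient(k8s.CustomObjectsApi);

async function backingStorePhase(name: string): Promise<string> {
    const res: any = await api.getNamespacedCustomObject(
        'noobaa.io', 'v1alpha1', 'openshift-storage', 'backingstores', name);
    return res.body?.status?.phase ?? 'Unknown';
}

async function main() {
    for (const name of ['bs-us-east-2', 'bs-us-west-1']) {
        console.log(name, await backingStorePhase(name));
    }
}

main().catch(console.error);
```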
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat OpenShift Container Storage 4.5.0 bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:3754