Bug 1862755 - In a cluster behind a proxy, some backingstores fail to communicate with their target buckets
Summary: In a cluster behind a proxy, some backingstores fail to communicate with thei...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Container Storage
Classification: Red Hat Storage
Component: Multi-Cloud Object Gateway
Version: 4.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: OCS 4.5.0
Assignee: Ohad
QA Contact: Ben Eli
URL:
Whiteboard:
Depends On: 1871408
Blocks: 1790680
 
Reported: 2020-08-02 06:19 UTC by Ben Eli
Modified: 2023-12-15 18:40 UTC (History)
CC: 9 users

Fixed In Version: 4.5.0-49.ci
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-09-15 10:18:25 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github noobaa noobaa-core pull 6124 0 None closed Backport to 5.5: Fix a mismatch with the agent type for https_proxy_agent 2020-10-29 14:08:21 UTC
Github noobaa noobaa-core pull 6141 0 None closed Add proper agent selection to block_store_s3 backed by aws endpoint 2020-10-29 14:08:21 UTC
Red Hat Product Errata RHBA-2020:3754 0 None None None 2020-09-15 10:18:59 UTC

Description Ben Eli 2020-08-02 06:19:30 UTC
Description of problem (please be as detailed as possible and provide log
snippets):
As part of running `test_multi_region`, we found that communication fails with backingstores whose target buckets do not reside in us-east-2.

For example, a backingstore with a target bucket in us-west-2 is stuck in the Creating phase with this TemporaryError:
"CheckExternalConnection Status=TIMEOUT Error=OperationTimeout Message=Operation timeout"

The HTTP_PROXY, HTTPS_PROXY and NO_PROXY env vars are set cluster-wide, and are available inside the NooBaa pods.

Version of all relevant components (if applicable):
OCS 4.5.0-494.ci
OCP 4.4.12
noobaa-operator	mcg-operator@sha256:3cfea8d7a75aaf44f94675b3b848409e42252bdee6fb2ec5c40fdc1cd5f44615
noobaa_core	mcg-core@sha256:049ba73bee755ebb4503c4beab834f0b95f67df3376257e875a24584866eca0e
noobaa_db	mongodb-36-rhel7@sha256:3292de73cb0cd935cb20118bd86e9ddbe8ff8c0a8d171cf712b46a0dbd54e169

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?
Yes, I cannot create backingstores that must be reached through the proxy.

Is there any workaround available to the best of your knowledge?
No

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
2

Is this issue reproducible?
Yes

Can this issue be reproduced from the UI?
Yes

If this is a regression, please provide more details to justify this:
It is not - this is the first time we're testing proxied clusters.

Steps to Reproduce:
1. Deploy a new OCP+OCS cluster on AWS and note its region
2. Set up a cluster wide proxy (Daniel Horák is the one who set it up in my case)
3. Try to create a backingstore that uses a target bucket residing in a different region than your cluster's


Actual results:
The backingstore is stuck in the "Creating" phase

Expected results:
The backingstore is verified and created successfully

Additional info:

Comment 2 Ohad 2020-08-02 09:01:09 UTC
@Ben Can you please attach an OCS must-gather on this BZ?

Comment 3 Ben Eli 2020-08-02 11:03:21 UTC
We do not currently collect must-gather upon setup/teardown failures, so we do not have one yet.
I'll reproduce the bug, run the must-gather command manually and attach the logs once I have them.

Comment 4 Eran Tamir 2020-08-03 07:17:53 UTC
It would be worth providing the proxy configuration.

Comment 8 Ohad 2020-08-04 18:16:55 UTC
Some support for the proxy environment variables was missing in the noobaa-core codebase.

A PR with a fix was issued against the upstream project.

Comment 11 Ohad 2020-08-24 09:31:19 UTC
Hi,
We were missing a minor change in noobaa core, which prevented this BZ from being verified and resulted in the opening of BZ #871408.
The issue is resolved and documented in the mentioned BZ and fixed with upstream PR https://github.com/noobaa/noobaa-core/pull/6141

Comment 12 Frederic Giloux 2020-08-25 08:19:33 UTC
@Ohad you probably meant bugzilla #1871408

Comment 13 Ohad 2020-08-25 09:13:51 UTC
@Frederic you are correct.
Thank you for spotting the error.

Comment 14 Ben Eli 2020-08-25 12:14:22 UTC
Cluster is on us-east-2.
I created two backingstores - us-east-2, and us-west-1.
Created a bucketclass that uses both, and an OBC that uses the bucketclass.
OBC was healthy, backingstores were ready.

Verified on 4.5.0-64.ci.

Comment 17 errata-xmlrpc 2020-09-15 10:18:25 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat OpenShift Container Storage 4.5.0 bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:3754

