Bug 2209107

Summary: [GSS] 504 gateway timeout error on put operation
Product: [Red Hat Storage] Red Hat OpenShift Data Foundation
Reporter: Sonal <sarora>
Component: Multi-Cloud Object Gateway
Assignee: Amit Prinz Setter <aprinzse>
Status: ASSIGNED
QA Contact: krishnaram Karthick <kramdoss>
Severity: medium
Priority: unspecified
Version: 4.11
CC: aprinzse, dzaken, muagarwa, nbecker, odf-bz-bot
Target Milestone: ---
Flags: sarora: needinfo? (aprinzse)
Target Release: ---
Hardware: x86_64
OS: Linux
Doc Type: If docs needed, set a value
Type: Bug
Regression: ---
Mount Type: ---

Description Sonal 2023-05-22 16:25:07 UTC
Description of problem (please be as detailed as possible and provide log
snippets):

- All read/write bucket operations are failing. The following error is observed on a put operation:

ERROR: Error parsing xml: Malformed error XML returned from remote server..  ErrorXML: <html><body><h1>504 Gateway Time-out</h1>
The server didn't respond in time.

Version of all relevant components (if applicable):

ODF 4.11.7

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?

Yes. S3 is used by the Thanos application, which is unable to perform read/write operations.

Is there any workaround available to the best of your knowledge?
No

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
2

Is this issue reproducible?
Yes, in customer's environment

Can this issue be reproduced from the UI?
No

If this is a regression, please provide more details to justify this:
No

Steps to Reproduce:
1. Create a noobaa bucket using S3 or an OBC
2. Upload an empty file to the bucket
3. List the objects in the bucket
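The steps above can be sketched with the AWS CLI against the NooBaa S3 endpoint. The endpoint URL, bucket name, and credential placeholders below are illustrative, not values taken from this report:

```shell
# Credentials come from the OBC/NooBaa account secret (placeholders here).
export AWS_ACCESS_KEY_ID=<access-key>
export AWS_SECRET_ACCESS_KEY=<secret-key>
S3_ENDPOINT=https://s3-openshift-storage.apps.example.com   # placeholder route

# 1. Create a bucket
aws --endpoint-url "$S3_ENDPOINT" s3 mb s3://test-bucket

# 2. Upload an empty file
touch empty.txt
aws --endpoint-url "$S3_ENDPOINT" s3 cp empty.txt s3://test-bucket/

# 3. List objects; in the failing environment this is where the
#    504 Gateway Time-out is returned instead of a listing.
aws --endpoint-url "$S3_ENDPOINT" s3 ls s3://test-bucket/
```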


Actual results:
504 Gateway Time-out. The server didn't respond in time.

Expected results:
Read/write s3 operations should work.

Additional info:
In the next (private) comment.

Comment 11 Amit Prinz Setter 2023-06-18 09:55:09 UTC
No. Let's try running it concurrently.


First cancel the reindex:
SELECT query, pid FROM pg_stat_activity;

(you can narrow it down with a WHERE clause, e.g.
SELECT query, pid FROM pg_stat_activity WHERE query LIKE '%REINDEX%')

Then cancel the pid:
SELECT pg_cancel_backend(<pid>);

Then reissue reindex concurrently:
REINDEX INDEX CONCURRENTLY idx_btree_datachunks_tiering_index;
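The lookup-and-cancel steps above can also be combined into a single statement. This is a sketch; it assumes no unrelated session happens to have "REINDEX" in its query text:

```sql
-- Cancel every backend currently running a REINDEX, excluding our own session.
SELECT pg_cancel_backend(pid)
FROM pg_stat_activity
WHERE query ILIKE '%REINDEX%'
  AND pid <> pg_backend_pid();
```

Note that a cancelled REINDEX ... CONCURRENTLY can leave behind a transient index marked INVALID (with a _ccnew suffix), which may need to be dropped before reissuing the reindex.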