Bug 2060614 - [GSS] Noobaa errors during MTC migration [NEEDINFO]
Summary: [GSS] Noobaa errors during MTC migration
Keywords:
Status: CLOSED INSUFFICIENT_DATA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: Multi-Cloud Object Gateway
Version: 4.8
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Nimrod Becker
QA Contact: Elad
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-03-03 21:10 UTC by khover
Modified: 2023-08-09 16:49 UTC
CC List: 13 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-06-10 13:14:53 UTC
Embargoed:
nbecker: needinfo? (khover)



Description khover 2022-03-03 21:10:08 UTC
Description of problem (please be as detailed as possible and provide log
snippets):

Seeing Noobaa errors in the endpoint logs during MTC migration while uploading data to the target OCP/ODF 4 cluster.

The MTC team will be filing a bug to work the issue in parallel.

Mar-3 4:10:31.920 [Endpoint/11] [ERROR] core.rpc.rpc:: RPC._request: response ERROR srv bucket_api.read_bucket_sdk_info reqid 27956@wss://noobaa-mgmt.openshift-storage.svc:443(ypera9) connid wss://noobaa-mgmt.openshift-storage.svc:443(ypera9) params { name: SENSITIVE-1e17de8373d5210e } took [0.5+0.7=1.2] [RpcError: No such bucket: ocp3-bucket-dta] { rpc_code: 'NO_SUCH_BUCKET' }
Mar-3 4:10:31.920 [Endpoint/11] [ERROR] core.endpoint.s3.s3_rest:: S3 ERROR <?xml version="1.0" encoding="UTF-8"?><Error><Code>NoSuchBucket</Code><Message>The specified bucket does not exist.</Message><Resource>/ocp3-bucket-dta?delimiter=%2F&amp;list-type=2&amp;prefix=velero%2F</Resource><RequestId>l0agzdda-6hgqly-xm5</RequestId></Error> GET /ocp3-bucket-dta?delimiter=%2F&list-type=2&prefix=velero%2F {"user-agent":"aws-sdk-go/1.38.15 (go1.16.6; linux; amd64)","authorization":"AWS4-HMAC-SHA256 Credential=0TB01BMTARo63PucPbtb/20220303/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=7099d031dab4ea1995b2a74c1ae31edf906584ea187668180daebabe0c0902c2","x-amz-content-sha256":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855","x-amz-date":"20220303T041031Z","accept-encoding":"gzip","host":"s3-openshift-storage.apps.ocp4-dta.aholdusa.com","x-forwarded-host":"s3-openshift-storage.apps.ocp4-dta.aholdusa.com","x-forwarded-port":"443","x-forwarded-proto":"https","forwarded":"for=10.129.85.29;host=s3-openshift-storage.apps.ocp4-dta.aholdusa.com;proto=https","x-forwarded-for":"10.129.85.29"} [RpcError: No such bucket: ocp3-bucket-dta] { rpc_code: 'NO_SUCH_BUCKET' }
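
For reference, a minimal boto3 sketch (not the customer's reproducer) that issues the same ListObjectsV2 call velero is making against the MCG S3 route, using the bucket name and endpoint host taken from the log above. The access/secret keys are placeholders; in practice they would come from the bucket credentials on the target cluster.

# Minimal sketch, assuming placeholder credentials; endpoint host and bucket
# name are copied from the error log above. Requires boto3 (pip install boto3).
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3-openshift-storage.apps.ocp4-dta.aholdusa.com",
    aws_access_key_id="<ACCESS_KEY_FROM_BUCKET_SECRET>",      # placeholder
    aws_secret_access_key="<SECRET_KEY_FROM_BUCKET_SECRET>",  # placeholder
    region_name="us-east-1",
    verify=False,  # route certificate handling is out of scope for this sketch
)

bucket = "ocp3-bucket-dta"
try:
    # Same request shape as in the S3 ERROR line: list-type=2, prefix velero/, delimiter /
    resp = s3.list_objects_v2(Bucket=bucket, Prefix="velero/", Delimiter="/")
    print("Bucket reachable, KeyCount:", resp.get("KeyCount", 0))
except ClientError as err:
    # A NoSuchBucket code here matches the endpoint log, i.e. the bucket
    # does not exist on the target MCG at the time of the call.
    print("S3 error:", err.response["Error"]["Code"])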
 

Version of all relevant components (if applicable):

ocs-operator.v4.8.8 

Does this issue impact your ability to continue to work with the product
(please explain in detail what the user impact is)?

Possibly blocking the MTC migration.

Is there any workaround available to the best of your knowledge?

No

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?

4 

Is this issue reproducible?

Yes, in the customer environment.

Can this issue be reproduced from the UI?

Yes, during staging of the PV migration from the 3.11 cluster to the 4.8 cluster.

If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1.
2.
3.


Actual results:


Expected results:


Additional info:

Comment 2 khover 2022-03-03 21:15:53 UTC

Must-gather is available in supportshell:

/cases/03151945/03157166-registry-redhat-io-ocs4-ocs-must-gather-rhel8-sha256-bfb5c6e78f74c584cf169e1f431d687314ab48472dddc46fe6767a836ea4bb3e.tar.gz/03157166-registry-redhat-io-ocs4-ocs-must-gather-rhel8-sha256-bfb5c6e78f74c584cf169e1f431d687314ab48472dddc46fe6767a836ea4bb3e

Comment 3 khover 2022-03-03 23:19:50 UTC
Parallel MTC BZ:

https://bugzilla.redhat.com/show_bug.cgi?id=2060655

Comment 9 khover 2022-03-08 16:15:37 UTC
Per the ODF engineering sync call on 3/7/22:

Per @dzaken, @jalbo was going to reach out to me for the further info needed.

