.KafkaConnect sends objects from a Kafka topic to the RGW S3 bucket
Previously, sending objects from a Kafka topic to the RGW S3 bucket failed because the chunked-encoding object signature was not calculated correctly.
This produced the following error in the RADOS Gateway log:
`20 AWSv4ComplMulti: ERROR: chunk signature mismatch`
With this release, the chunked-encoding object signature is calculated correctly, allowing KafkaConnect to send objects successfully.
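For context, the per-chunk signature that RGW recomputes follows the AWS SigV4 streaming-payload scheme, in which each chunk's string-to-sign chains in the previous chunk's signature. The sketch below is illustrative only (it is not RGW's or the SDK's actual code, and the key, scope, and timestamp values are assumptions), but it shows why one mis-calculated chunk signature breaks every signature after it, including the last chunk's:

```python
import hashlib
import hmac

# sha256 of the empty string; appears verbatim in the log trace below
EMPTY_SHA256 = hashlib.sha256(b"").hexdigest()

def _hmac(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode(), hashlib.sha256).digest()

def signing_key(secret: str, date: str, region: str, service: str = "s3") -> bytes:
    # Standard SigV4 key derivation: date -> region -> service -> "aws4_request"
    k = _hmac(("AWS4" + secret).encode(), date)
    k = _hmac(k, region)
    k = _hmac(k, service)
    return _hmac(k, "aws4_request")

def chunk_signature(key: bytes, timestamp: str, scope: str,
                    prev_signature: str, chunk: bytes) -> str:
    # Each chunk's string-to-sign includes the previous chunk's signature,
    # so client and server must agree on every chunk hash in the chain.
    string_to_sign = "\n".join([
        "AWS4-HMAC-SHA256-PAYLOAD",
        timestamp,
        scope,
        prev_signature,
        EMPTY_SHA256,                       # sha256 of empty string
        hashlib.sha256(chunk).hexdigest(),  # sha256 of this chunk's data
    ])
    return hmac.new(key, string_to_sign.encode(), hashlib.sha256).hexdigest()
```

The final chunk of an upload is zero-length, so its signature depends only on the chain of signatures before it; that is the "last chunk" signature seen to mismatch in the log below.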
Created attachment 1744348: RGW Log Trace
Description of problem:
I'm running an AMQ Kafka cluster on OpenShift 4.6, with an external Ceph cluster running RHCS 4.1z3.
When I try to configure Camel to sink objects from a Kafka topic to the RGW S3 bucket (using KafkaConnect), I get the following error:
69c03411437d70f526efa7c2ad67e150ea5a547cb89f83563db8e86aaea39419
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
3bfd41f6bb2716df9398697bab6ee419b55a46c657f8ffde144316a5bfef06d8
2021-01-04 17:35:52.507 7fbb667da700 20 AWSv4ComplMulti: ERROR: chunk signature mismatch
2021-01-04 17:35:52.507 7fbb667da700 20 AWSv4ComplMulti: declared signature=0bb61a0806d8c0b9c1c8d8d64079dab9e7c495b45903b42237ab5fb210e062d0
2021-01-04 17:35:52.507 7fbb667da700 20 AWSv4ComplMulti: calculated signature=ee0e7b39b249e1bd96154eb7a3a99bde42b0143144ca62ed6669cc525f665ef9
2021-01-04 17:35:52.507 7fbb667da700 10 ERROR: signature of last chunk does not match
2021-01-04 17:35:52.507 7fbb667da700 20 req 646 0.002s s3:put_obj get_data() returned ret=-2040
2021-01-04 17:35:52.507 7fbb667da700 2 req 646 0.002s s3:put_obj completing
2021-01-04 17:35:52.507 7fbb667da700 2 req 646 0.002s s3:put_obj op status=-2040
2021-01-04 17:35:52.507 7fbb667da700 2 req 646 0.002s s3:put_obj http status=400
2021-01-04 17:35:52.507 7fbb667da700 1 ====== req done req=0x7fbca3992680 op status=-2040 http_status=400 latency=0.00199996s ======
2021-01-04 17:35:52.507 7fbb667da700 1 beast: 0x7fbca3992680: 192.168.1.72 - - [2021-01-04 17:35:52.0.507207s] "PUT /s3-bucket-a0221101-94e3-4132-818c-a7d43b2737e0/20210104-153552476-1E0D4A4324F2B02-0000000000000000 HTTP/1.1" 400 276 - "aws-sdk-java/2.15.43 Linux/4.18.0-193.29.1.el8_2.x86_64 OpenJDK_64-Bit_Server_VM/11.0.9+11-LTS Java/11.0.9 scala/2.12.10 kotlin/1.3.20-release-116 (1.3.20) vendor/Red_Hat__Inc. io/sync http/Apache" -
2021-01-04 17:35:52.558 7fbb89820700 20 failed to read header: end of stream
It seems I get HTTP 400 because the last chunk uploaded by the client has a signature mismatch.
I've tested other S3 endpoints besides RGW and AWS, and uploads succeed there.
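For reference, the `aws-chunked` request body sent by the Java SDK frames each chunk with its size and a `chunk-signature` extension, and terminates with a zero-length chunk; it is that terminal chunk's signature that RGW fails to match here. A rough sketch of the framing (a hypothetical helper, not SDK code, with placeholder 64-hex signatures standing in for real SigV4 chunk signatures):

```python
def frame_chunks(chunks_with_sigs):
    """Frame (data, signature) pairs in aws-chunked encoding.

    The list must end with a zero-length chunk, whose signature chains
    from the last data chunk -- the "last chunk" rejected in this bug.
    """
    body = b""
    for data, sig in chunks_with_sigs:
        # Each frame: hex size, chunk-signature extension, CRLF, data, CRLF
        body += f"{len(data):x};chunk-signature={sig}\r\n".encode()
        body += data + b"\r\n"
    return body

# Placeholder signatures for illustration only.
body = frame_chunks([
    (b"payload bytes", "a" * 64),  # one data chunk
    (b"", "b" * 64),               # terminal zero-length chunk
])
```

If RGW's recomputed signature for any frame, including the empty terminal one, disagrees with the declared `chunk-signature`, the PUT fails with HTTP 400 as in the log above.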
Version-Release number of selected component (if applicable):
RHCS 4.1z3
How reproducible:
Always; it can be reproduced easily with the specific RHCS and Camel versions noted in this report.
Steps to Reproduce:
1. Run an RHCS4.1z3 cluster with rgw beast frontend
2. Deploy a Camel-Kafka-S3-Sink service using KafkaConnect
3. Use the SinkConnector to write data to the RGW S3 bucket
4. Observe the chunk signature mismatch error in the RGW log
Actual results:
The PUT request fails with HTTP 400, and the RGW log reports `AWSv4ComplMulti: ERROR: chunk signature mismatch`.
Expected results:
An HTTP 200 response, with the objects written to the S3 bucket.
Additional info:
I have tested other S3 endpoints (not AWS based) and it seems to be working there. Objects are being written as expected to the S3 bucket.
Comment 1: Matt Benjamin (redhat), 2021-01-04 17:31:25 UTC
Hi Shon,
What -are- the Kafka and Camel versions, by chance?
Matt
Comment 2: Matt Benjamin (redhat), 2021-01-04 17:41:54 UTC
(In reply to Matt Benjamin (redhat) from comment #1)
> Hi Shon,
>
> What -are- the Kafka and Camel versions, by chance?
>
> Matt
from Shon:
The Kafka version is 2.6
and the CamelAws2s3SinkConnector version is 0.7
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.
For information on the advisory (Important: Red Hat Ceph Storage security, bug fix, and enhancement update) and where to find the updated files, follow the link below.
If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHSA-2021:1452